Shafaqna English – Thousands of contract workers are quietly responsible for shaping the outputs of Google’s artificial intelligence systems, often under significant psychological strain, the Guardian reported.
Among them is Rachael Sawyer, a technical writer from Texas, who was hired in 2024 as a “writing analyst.” She soon learned that her role involved rating and moderating content produced by Google’s Gemini AI, including graphic and explicit material. According to Sawyer, the work was never clearly described to her, she signed no consent forms, and no mental health support was offered despite the distressing nature of the tasks.
These workers, employed through contractors such as Hitachi’s GlobalLogic, review AI-generated text, images, and summaries to ensure that Google’s chatbot Gemini and its AI Overviews remain safe and accurate. Although their work is essential to improving each new version of Google’s models, they are paid far less than the engineers who design the systems and remain largely invisible to the public.
Experts say this workforce is critical to the global AI supply chain but lacks recognition and protection. In a statement, Google emphasized that the raters’ feedback is only one of many factors used to evaluate product performance and does not directly shape the company’s algorithms.
Source: Guardian

