By Varsha Bansal for The Guardian:

In the spring of 2024, Rachael Sawyer, a technical writer from Texas, received a LinkedIn message from a recruiter hiring for the vague title of writing analyst. She assumed the work would resemble her previous content-creation gigs. On her first day a week later, however, her expectations went bust. Instead of writing words herself, Sawyer's job was to rate and moderate content created by artificial intelligence.

The job initially involved a mix of parsing through meeting notes and chats summarized by Google's Gemini and, in some cases, reviewing short films made by the AI. On occasion, she was asked to deal with extreme content, flagging violent and sexually explicit material generated by Gemini for removal, mostly text. Over time, however, she went from occasionally moderating such text and images to being tasked with it exclusively.

"I was shocked that my job involved working with such distressing content," ...