Last week, data annotation workers all over the world woke up to news reports claiming that ChatGPT can label text more accurately than the human annotators on the crowdsourcing platform Amazon Mechanical Turk (MTurk). Today, workers organizing for better working conditions with Turkopticon respond to these claims as the people who do the actual data labeling work. What do you get if you run a quick experiment using the brand new software tool a whole industry is talking about?

Last week, researchers Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli posted a draft study on arXiv claiming that ChatGPT annotates data better than humans for certain tasks. The press jumped on this, suggesting that ChatGPT can replace the workers who train AI. Data annotation workers like us began discussing the study on forums and in chats. We quickly realized that its claims, and the press coverage around them, were missing a lot.