“Just get the numbers done” and don’t worry about what you’re “putting out there.” This was the directive given to an AI trainer working on a major tech company’s AI products. It’s a sentiment that echoes throughout the industry, revealing how the relentless focus on productivity is systematically undermining the quality and safety of artificial intelligence from within.
AI raters, the human checkers of the digital world, are caught between their instructions and their conscience. They are tasked with ensuring the AI’s outputs are factual and safe, yet they are given deadlines so short that genuine verification is impossible. When one worker tried to take her time to get things right, especially on sensitive topics like health, she was reprimanded for low productivity.
This pressure breeds a work culture of constantly cut corners. Raters are forced to approve or edit content on subjects in which they have no expertise, from child development to bladder cancer treatments. The fear of being flagged for taking too long outweighs the responsibility of ensuring accuracy, a dangerous trade-off when the AI’s answers could influence real-world decisions.
This internal conflict has created a workforce that is deeply skeptical of the technology’s capabilities. They are the ones who see the raw, unfiltered output and know how flimsy the quality control process can be. Their experience shows that behind the confident facade of an AI-generated answer is a system where speed often triumphs over substance, and productivity trumps responsibility.
“Just Get the Numbers Done”: How AI Quality is Undermined from Within