TikTok Expands AI Use, Raising Concerns Over Content Moderator Jobs

TikTok is increasingly relying on artificial intelligence to monitor and identify harmful content on its platform, a move that is set to impact human content moderators, according to The Times.
An internal email sent to content moderation and quality assurance teams at TikTok’s London office confirmed that those roles are being suspended. The company’s parent, ByteDance, also announced plans to cut hundreds of jobs in the UK and Southeast Asia.
The platform justified the decision by pointing to advancements in AI and large language models, and The Times reported that TikTok may also rely on third-party firms to oversee content quality.
Despite these cuts, TikTok stated that the move would not affect its efforts to increase staffing in the U.S. or compromise user safety, noting that security-related positions would remain in the UK.
AI and Content Moderation
A TikTok spokesperson explained that the AI initiative builds on last year’s efforts to strengthen platform security and content oversight. The technology allows the platform to review and remove content before it is published, and to moderate large volumes of content in a short time.
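For readers unfamiliar with how automated pre-publication moderation typically works, the sketch below illustrates the general pattern described above: an uploaded item is scored by a classifier, then published, removed, or escalated to a human reviewer. This is a hypothetical illustration, not TikTok’s actual system; the thresholds, labels, and the classify_harm stub are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- not TikTok's actual policy settings.
BLOCK_THRESHOLD = 0.90   # auto-remove before publication
REVIEW_THRESHOLD = 0.60  # route to a human moderator queue


@dataclass
class ModerationResult:
    action: str   # "publish", "remove", or "human_review"
    score: float


def classify_harm(text: str) -> float:
    """Stand-in for an ML/LLM classifier returning a harm probability.

    A real system would call a trained model; this stub just flags a
    few placeholder keywords so the example is runnable.
    """
    flagged = {"scam", "violence", "self-harm"}
    hits = sum(word in text.lower() for word in flagged)
    return min(1.0, 0.45 * hits)


def moderate_before_publish(text: str) -> ModerationResult:
    """Pre-publication gate: score the upload, then publish, remove,
    or escalate to a human reviewer based on the thresholds above."""
    score = classify_harm(text)
    if score >= BLOCK_THRESHOLD:
        return ModerationResult("remove", score)
    if score >= REVIEW_THRESHOLD:
        return ModerationResult("human_review", score)
    return ModerationResult("publish", score)


if __name__ == "__main__":
    for caption in ["funny dance clip", "crypto scam promising violence"]:
        print(caption, "->", moderate_before_publish(caption))
```

Note that even in this simplified sketch, borderline scores fall into a human-review tier, which is the part of the pipeline the job cuts reported above would shrink.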
Expert Warnings
Experts have raised concerns about replacing human moderators with AI. John Chadfield of the Communications Workers Union noted that TikTok employees had long warned about the shift to AI moderation. He emphasized that current AI technology is not mature enough to reliably monitor content, putting millions of UK users at potential risk.
Approximately 300 staff members work in content management and moderation at TikTok’s London office, earning between $35,000 and $43,000 annually.