Chinese short-video-sharing application TikTok is increasingly relying on Artificial Intelligence (AI) moderation tools to police content in Kenya, removing more than 580,000 videos in the three months to September 2025.
The company revealed that increased use of AI moderation tools led to the detection of over 90 percent of violative content even without human content moderators. TikTok said 99.7 percent of the violative videos in Kenya were removed before being reported, compared with 92.9 percent in the three months to June.
About 94.6 percent of the content was taken down within 24 hours, slightly below the 96.3 percent recorded in the previous quarter. “Through our continued investment in AI moderation technologies, a record 91 percent of this violative content is now removed via automated technologies,” TikTok said in its third-quarter Community Guidelines Enforcement Report.
Social media companies, including Meta-owned Facebook and Instagram, are turning to AI-powered content moderation to detect, flag, and remove harmful content, such as graphic violence and hate speech.
These systems utilise machine learning and natural language processing to handle vast volumes of data, reducing the burden on human teams. While AI accelerates the process, human moderators are mostly still used for final, nuanced, or borderline decisions.
The 580,000 videos TikTok deleted from Kenya in the quarter to September 2025 marked a decline from the 592,037 videos taken down in the preceding quarter.
The July–September period also saw roughly 90,000 Live sessions interrupted for breaching platform rules, equivalent to one percent of all live streams in Kenya. TikTok has previously been criticised for the proliferation of explicit content on its livestreaming feature.
TikTok’s Community Guidelines ban content that promotes violence, criminal activity, hate speech, harassment, or abuse. Users are not allowed to post material that encourages violent acts, threatens individuals or groups, or supports hate organisations, extremist movements, or criminal networks.
The platform also prohibits content linked to sexual exploitation, human trafficking, or abuse of children and adults. Harassment, bullying, and doxing are not permitted, and although political discussion is allowed, posts that cause or risk serious harm are removed.
Material showing suicide, self-harm, dangerous stunts, or eating disorders is restricted to protect users’ mental health. TikTok also bars explicit sexual content, graphic violence, and animal cruelty, and removes misinformation, particularly around elections, public health, and civic processes, while requiring clear disclosure of AI-generated or heavily edited media.
Fraudulent schemes
Users are further banned from sharing plagiarised or unoriginal posts, manipulating engagement through fake activity, or promoting scams and other fraudulent schemes. The company did not detail which of the guidelines Kenyan users most violated.
Globally, TikTok removed 204.5 million videos in the quarter to September 2025, about 0.7 percent of total uploads, with 99.3 percent taken down proactively. More than 118 million fake accounts and 22 million others suspected to be held by people under 13 years of age were also removed.
TikTok launched globally in 2018. The platform’s popularity grew from 2020, during the Covid-19 pandemic, driven by video trends and viral challenges among popular content creators.
In 2025, TikTok was Kenya's third-most-visited internet platform behind Google and Facebook, a slip from the top position it held in 2024, according to data from the web infrastructure provider and traffic monitor Cloudflare.