TikTok took down more than 580,000 videos in Kenya during the third quarter of 2025 (July to September) for violating its community guidelines, the social media company has reported, positioning the removals as largely proactive enforcement at scale.
The figures, released in TikTok’s Q3 2025 Community Guidelines Enforcement Report, show the platform’s efforts to proactively protect users and maintain a safe digital environment.
According to the report, an overwhelming 99.7 per cent of the removed videos were taken down proactively, before users could report them, while 94.6 per cent were removed within 24 hours of being posted.
The move underscores the platform’s heavy reliance on algorithmic policing. Removing content before any user reports it aims to minimise the viral spread of harmful material, effectively neutralising it before it gains traction. While high proactive removal rates suggest safety efficiency, they also indicate the immense scale of content moderation that is now fully automated, raising questions about the balance between algorithmic speed and the potential for false positives in complex cultural contexts like Kenya.
In addition to video removals, TikTok also interrupted roughly 90,000 live sessions in Kenya during the three months, representing about 1 per cent of all LIVE streams, for failing to adhere to content standards.
“TikTok has taken steps to identify and remove content that violates community guidelines, ensuring a positive experience for its global community,” the platform said.
Globally, TikTok removed over 204 million videos in the period under review, approximately 0.7 per cent of all content uploaded to the platform. Of the global removals, 99.3 per cent were proactively taken down before reports from users, with 94.8 per cent removed within 24 hours.
“This is one of the highest rates ever recorded by TikTok for rapid content removal. Through our continued investment in AI moderation technologies, a record 91 per cent of this violative content is now removed via automated technologies, ensuring consistency and speed,” TikTok said.
Further, the platform reported that it removed more than 118 million fake accounts in the period under review as part of efforts to protect the integrity of the platform.
The report also revealed that over 22 million accounts suspected to belong to users under the age of 13 were taken down, highlighting the company’s commitment to enforcing age restrictions.
“By integrating advanced automated moderation technologies with the expertise of thousands of trust and safety professionals, TikTok ensures swift and consistent enforcement of content that violates its Community Guidelines,” said TikTok.
“This approach is vital in ensuring that we provide a safe platform for our community, as we uphold our policies against harmful content, including misinformation, hate speech, and other violations.”
The disclosure lands amid renewed Kenyan outrage over alleged covert recording and reposting of women on social platforms, sharpening scrutiny on whether detection and escalation systems can keep up with newer “low-visibility” abuse patterns (including speculation about smart glasses). The core issue is consent: interaction in public does not automatically extend to filming and distribution.
For platforms and advertisers, the signal is governance, not optics. Faster takedowns limit harm after the fact, but prevention and deterrence matter more: clearer policy triggers for "hidden recording," tighter limits on repeat offenders, and stronger local reporting pathways, especially as regulators and civil society push for tougher online-safety accountability.
Beyond Enforcement
TikTok is seemingly aware that removal metrics alone do not constitute a “safe” environment. The report highlights features introduced late last year (November 2025), including a new “Time and Well-being” space. This initiative included “Well-being Missions” – gamified tasks designed to help teens build healthier digital habits and use the platform with greater intention.
The Q3 2025 report signals a mature phase in TikTok’s moderation strategy where speed is the priority.
Algorithmic Dominance: The 99.7% proactive rate confirms that human moderators are likely focusing only on the most complex, edge-case appeals, while AI handles the bulk of the “clean-up”.
Transparency: By releasing these figures today, nearly five months after the quarter ended, TikTok aims to maintain transparency, though such a reporting lag is standard industry practice.
The consistency between Kenyan and global metrics is notable. The global proactive removal rate stood at 99.3%, almost mirroring Kenya’s 99.7%, suggesting a uniform application of AI models across different regions.