
Approximately one out of every five Shorts recommended by YouTube to new users is considered low-quality, mass-produced AI-generated content, often referred to as “AI slop.” CEO Neal Mohan himself used this term in his annual letter in January 2026, vowing to enhance YouTube’s systems for detecting spam and clickbait to address this issue.
A study conducted by Kapwing of 15,000 popular channels revealed that 278 of them exclusively produced content classified as AI slop. As of October 2025, these channels had collectively garnered 63 billion views, attracted 221 million subscribers, and generated an estimated $117 million in annual advertising revenue.
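To put those aggregate figures in per-channel terms, some back-of-the-envelope arithmetic on the numbers above (averages only; real channels vary widely):

```python
# Aggregate figures reported by Kapwing for the 278 AI slop channels.
channels = 278
total_views = 63_000_000_000        # 63 billion views
annual_revenue = 117_000_000       # $117 million estimated annual ad revenue

avg_views = total_views / channels        # roughly 227 million views per channel
avg_revenue = annual_revenue / channels   # roughly $421,000 per channel per year
implied_rpm = annual_revenue / total_views * 1000  # revenue per 1,000 views

print(f"Avg views per channel: {avg_views:,.0f}")
print(f"Avg annual revenue per channel: ${avg_revenue:,.0f}")
print(f"Implied RPM: ${implied_rpm:.2f}")
```

The implied RPM of under $2 per 1,000 views underlines why the model depends on sheer volume rather than per-video earnings.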
Although the threat posed by AI slop is genuine, it is not evenly spread, and data indicates which content formats are worth investing in. Search Engine Journal has been monitoring YouTube’s responses to AI-generated content since the platform mandated disclosure of AI usage. This article consolidates insights from data analysis, YouTube’s actions, and research on trust to provide guidance on organic video strategy going forward.
In early 2025, the issue shifted from being a mere curiosity to a systemic problem. A Guardian analysis of Playboard data confirmed that nearly 10% of YouTube’s fastest-growing channels worldwide focused solely on publishing AI-generated content, including bizarre themes like zombie football stars and babies in space scenarios.
Among the identified AI slop channels was Bandar Apna Dost from India, which earns approximately $4.25 million annually from 2.4 billion views of its realistic monkey videos. Another, Singapore-based Pouty Frenchie, takes in nearly $4 million per year with an AI-animated French bulldog.
The distribution of AI slop varies between YouTube’s two main formats. According to Kapwing’s research on content shown to new accounts, out of the initial 500 Shorts presented to a fresh account, 104 were pure AI slop (21%) and another 165 fell under the category of “brainrot” (33%), encompassing various low-quality engagement-focused content types.
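As a sanity check on Kapwing's reported shares, the raw counts convert to percentages as follows (counts taken from the study above):

```python
# Counts from Kapwing's audit of the first 500 Shorts shown to a fresh account.
TOTAL_SHORTS = 500
ai_slop = 104
brainrot = 165

slop_share = ai_slop / TOTAL_SHORTS * 100                  # 20.8%, reported as 21%
brainrot_share = brainrot / TOTAL_SHORTS * 100             # 33.0%
combined_share = (ai_slop + brainrot) / TOTAL_SHORTS * 100 # 53.8%

print(f"AI slop: {slop_share:.1f}%")
print(f"Brainrot: {brainrot_share:.1f}%")
print(f"Combined low-quality: {combined_share:.1f}%")
```

Taken together, the two categories account for over half of the Shorts served to a brand-new account.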
Shorts operates as a feed where videos automatically play without requiring user interaction. The algorithm prioritizes immediate retention metrics such as viewer engagement within the first few seconds. AI tools excel at creating visually captivating hooks that grab attention initially but may lack substance beyond those initial moments.
In contrast, long-form video requires an active click based on thumbnail and title appeal. Viewers committing more of their time are more discerning, and the algorithm applies correspondingly stricter retention criteria.
The two formats also monetize differently. Shorts revenue is pooled and paid out on total views, which rewards volume; long-form revenue is tied to ad placements on individual videos, with higher CPMs and stricter brand safety requirements.
Channels that mix the two formats risk having their audience profiled by how it interacts with their Shorts. And the prevalence of AI slop extends beyond genres like children’s entertainment and fake trailers into areas that matter directly to SEO practitioners and digital marketers.
Categories such as business explainers and finance tutorials are frequent targets of AI slop flooding. Educational content production is increasingly industrialized, and news commentary is inundated with event-driven videos churned out with AI tools.
YouTube took visible policy action in July 2025 when it revised its monetization guidelines, retargeting the “repetitious content” rule at “inauthentic content.” Enforcement, however, has been largely reactive.
Even as it implemented mandatory disclosure requirements for AI usage and introduced likeness detection tools to protect creators against deepfakes, YouTube continued rolling out its own AI creation tools at a rapid pace.
These two tracks converge on provenance. The line between acceptable AI assistance in production and inauthentic AI-generated content is likely to be drawn more strictly over time.
Industry experts doubt that enforcement can keep pace with the proliferation of AI slop. For organic strategy, viewer reactions to AI content carry more weight than growth statistics alone.
YouTube’s algorithm appears attuned to this trust dynamic. No public data yet ties specific CPM or RPM declines for human creators to the rise of AI slop in their niches, but the risk should factor into future content planning.
Long-form content optimized for search faces comparatively less pressure from the influx of AI-generated material. On-camera presence remains hard for AI to replicate, and community engagement signals are difficult to fabricate at scale.
Shorts work better as a discovery tool than as a primary platform for creators. Many professionals already use AI in production; disclosing the human involvement behind a video is becoming a differentiator as automation spreads.
An industry forecast suggests that up to 30% of YouTube viewing could involve AI-generated content by the end of the decade. The platform benefits irrespective of whether viewers engage with human-created or AI-driven material.
Monitoring YouTube’s quality standards has revealed a consistent pattern across cycles: building trust remains the most sustainable advantage amid an evolving content landscape.
