
YouTube Secretly Used AI To Edit People's Videos

YouTube's Secret AI Video Editing Experiment: What Happened?

Recent reports have confirmed that YouTube has been using AI (machine learning algorithms) to automatically enhance and alter certain videos, specifically YouTube Shorts, without creators' knowledge or consent. This secret experiment, which began in recent months, has sparked significant creator backlash and raised concerns about authenticity and trust in digital content. Here's what you need to know.

How the AI Edits Work

YouTube applies AI-driven processing during the upload and compression phase to improve video quality. The enhancements include the following (a rough code sketch of these operations appears after the list):

  • Unblurring and denoising: Reducing noise and blur for clearer footage.
  • Sharpening and smoothing: Enhancing details like skin textures or wrinkles in clothing, sometimes creating an unnatural "AI-generated" look (e.g., faces resembling oil paintings or warped features).
  • Clarity improvements: Similar to smartphone auto-enhancements, applied post-upload without creator control.
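
To make the description concrete, here is a minimal, illustrative sketch of the kind of classical (non-generative) denoise-and-sharpen filtering described above, applied to a single extracted frame with OpenCV. This is not YouTube's actual pipeline; the functions, parameters, and file names are assumptions chosen purely for illustration.

```python
# Illustrative only: classical denoising plus unsharp-mask sharpening on one
# video frame, roughly the kind of "traditional" enhancement described above.
# This is NOT YouTube's pipeline; all parameters here are arbitrary examples.
import cv2

frame = cv2.imread("frame.png")  # any frame extracted from a Short

# Denoise with non-local means (works directly on color images)
denoised = cv2.fastNlMeansDenoisingColored(
    frame, None, h=5, hColor=5, templateWindowSize=7, searchWindowSize=21
)

# Sharpen with an unsharp mask: boost the original against a blurred copy
blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2.0)
sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

cv2.imwrite("frame_enhanced.png", sharpened)
```

Over-applied, this kind of filtering is what produces the waxy, "oil painting" look creators describe.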

These changes use "traditional machine learning," not generative AI, according to YouTube. However, there's no opt-out option, and the edits occur on YouTube's servers. Downloading videos via tools like yt-dlp shows the original, unaltered footage.
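
For creators who want to check their own uploads, here is a minimal sketch using yt-dlp's Python API to download a Short for side-by-side comparison with what the player shows. The URL and output filename are placeholders.

```python
# Minimal sketch: download a Short with yt-dlp so the saved file can be
# compared frame-by-frame against what the YouTube player renders.
# Requires `pip install yt-dlp`; the URL below is a placeholder.
from yt_dlp import YoutubeDL

url = "https://www.youtube.com/shorts/VIDEO_ID"  # replace with a real Short

opts = {
    "format": "bestvideo+bestaudio/best",  # highest-quality streams available
    "outtmpl": "original_short.%(ext)s",   # local filename template
}

with YoutubeDL(opts) as ydl:
    ydl.download([url])
```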

How It Was Discovered

The issue surfaced in June 2025 when creators and users noticed anomalies:

  • Creator Complaints: Music YouTuber Rhett Shull, whose video on the issue drew over 500,000 views, compared his Shorts on YouTube and Instagram and spotted "smeary" over-sharpening and an "oil painting effect" on his face, raising fears he would be mistaken for using deepfakes. Rick Beato (5M+ subscribers) noticed his hair looked odd and his face appeared almost made-up. Others reported ruined aesthetics, such as deliberately grainy, VHS-style videos being smoothed out.
  • Social Media Buzz: Discussions on Reddit (e.g., r/DataHoarder, r/technology) and X highlighted distorted body parts and sparked viral debates by August 2025, with some framing it as potential censorship.

YouTube's Response

In August 2025, YouTube's head of editorial, Rene Ritchie, addressed the issue on X:

"We're running an experiment on select YouTube Shorts that uses traditional machine learning to unblur, denoise, and improve clarity... similar to what a modern smartphone does."

YouTube denied using generative AI but did not say whether creators will be able to disable the processing. The company said the goal was to improve video quality, but critics, including disinformation expert Samuel Woolley, called that framing misleading, noting that machine learning is itself a form of AI. Woolley warned in an NBC News report: "This reveals how AI is increasingly defining our realities."

Broader Implications and Reactions

  • Trust and Authenticity: Creators worry viewers will suspect them of AI manipulation, eroding trust, especially amid rising deepfake concerns. First Amendment lawyer Ari Cohn said on X: "The issue is changing content without creators’ permission or knowledge."
  • Ethical and Legal Questions: Experts compare the episode to past controversies such as Samsung’s AI-enhanced Moon photos and Netflix’s AI-remastered 1980s shows, which highlight the risk of quietly bending reality. YouTube adds no watermark or label to the altered videos, unlike Google’s Pixel 10, which attaches content credentials to AI-edited photos.
  • Community Backlash: X and Reddit posts range from anger ("deceptive") to boycott calls. Some see it as a step toward censorship, while others downplay it as minor compression tweaks.
  • Not Isolated: YouTube uses video data to train AI models like Gemini, fueling further creator unease.

As of August 28, 2025, the experiment continues on select Shorts, and YouTube says it will consider feedback. Creators should monitor how their uploads look after processing, or use download tools to check them against the originals. Viewers should treat unusually polished Shorts with a healthy dose of skepticism. Stay tuned for updates on a possible opt-out.

Sources: NBC News, X posts, Reddit discussions, YouTube statements.