Streaming Platforms Diverge on AI Music as Sony Targets 135,000 Deepfake Tracks for Removal
Sony Music's disclosure that it has flagged more than 135,000 AI-generated deepfake tracks for removal exposes a widening split among streaming platforms over how to detect, label, and restrict synthetic music, with approaches ranging from voluntary self-reporting to automated detection.
Sony Music revealed on March 18 that it has targeted more than 135,000 AI-generated deepfake tracks for removal from streaming platforms, a figure disclosed at the launch of the International Federation of the Phonographic Industry’s annual Global Music Report. The announcement, made by Dennis Kooker, president of Sony’s global digital business, puts a concrete number on an escalating crisis that is forcing every major streaming service to define its own approach to synthetic music — and the resulting patchwork of policies is creating confusion for artists, labels, and listeners alike.
The scale of Sony’s disclosure is striking. Of the 135,000 tracks identified, roughly 60,000 were flagged in the past year alone, suggesting the rate of deepfake uploads is accelerating as generative AI tools become cheaper and more accessible. The targeted tracks impersonated artists including Beyoncé, Queen, and Harry Styles, with Bad Bunny, Miley Cyrus, and Mark Ronson also reportedly affected. Kooker described the deepfakes as causing “direct commercial harm to legitimate recording artists” and noted they are strategically timed to coincide with album release campaigns, “benefiting from the demand the artist has created.”
Sony acknowledged that the 135,000 figure likely represents only a fraction of the total volume of AI-generated impersonation tracks circulating on streaming services.
Three Models, Three Philosophies
The streaming industry’s response to AI-generated music has fractured into at least three distinct approaches, each reflecting a different philosophy about who bears responsibility for identifying synthetic content.
Apple Music launched its Transparency Tags framework on March 4, 2026, introducing metadata fields that allow labels and distributors to flag AI involvement across four creative elements: artwork, track, composition, and music video. Multiple tags can be applied simultaneously, and Apple has stated that “proper tagging of content is the first step in giving the music industry the data and tools” needed for policy development. The system’s critical limitation, however, is that tagging is currently voluntary and self-declared. Apple does not verify or enforce compliance, and if no tag is applied, the platform assumes no AI involvement. Future deliveries are expected to require mandatory tagging, but no timeline has been set.
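Apple has not published a public schema for these tags, but the structure described above can be sketched as a simple record with four independent flags. The field and class names below are illustrative assumptions, not Apple's actual delivery format.

```python
from dataclasses import dataclass

# Hypothetical model of Apple's Transparency Tags framework.
# Field names are illustrative; Apple's actual metadata schema
# for label deliveries has not been published.
@dataclass
class TransparencyTags:
    # Each creative element can be flagged for AI involvement independently.
    artwork: bool = False
    track: bool = False
    composition: bool = False
    music_video: bool = False

    def applied(self) -> list[str]:
        """Return the creative elements tagged as involving AI."""
        return [name for name, flagged in vars(self).items() if flagged]

# Multiple tags can be applied simultaneously to one release.
release = TransparencyTags(artwork=True, track=True)
print(release.applied())  # ['artwork', 'track']

# The critical limitation: an untagged delivery is indistinguishable
# from a genuinely human-made one, so the platform assumes no AI.
print(TransparencyTags().applied())  # []
```

The default-to-False behavior is the crux of the self-reporting problem discussed below: absence of a tag carries no information about whether AI was actually used.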
Deezer has taken the opposite approach, building proprietary detection technology that identifies AI-generated music at the platform level without relying on upstream disclosure. The French streaming service reported processing more than 60,000 fully AI-generated tracks per day in January 2026 — six times the rate from a year earlier. Its detection system has flagged over 13.4 million AI tracks since 2025, and the company estimates that synthetic content now accounts for roughly 39 percent of its uploads. Deezer CEO Alexis Lanternier stated that “the majority of AI-music is uploaded to Deezer with the purpose of committing fraud.” Tracks identified as fully AI-generated are automatically excluded from algorithmic recommendations and editorial playlists, and fraudulent AI streams are filtered from royalty calculations. In January 2026, Deezer announced it would make its detection technology available to rival platforms.
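Deezer's reported figures imply a striking total upload volume. A back-of-the-envelope check, assuming the 39 percent share and the 60,000-per-day rate refer to the same period:

```python
# Back-of-the-envelope check on Deezer's reported figures.
# Assumption: the 39% AI share and the 60,000/day rate describe
# the same period (January 2026).
ai_tracks_per_day = 60_000
ai_share_of_uploads = 0.39

# If 60,000 daily AI tracks are 39% of uploads, total daily
# uploads would be roughly 154,000.
implied_total_daily_uploads = ai_tracks_per_day / ai_share_of_uploads
print(round(implied_total_daily_uploads))  # 153846

# "Six times the rate from a year earlier" implies roughly
# 10,000 AI tracks per day in early 2025.
rate_one_year_earlier = ai_tracks_per_day / 6
print(round(rate_one_year_earlier))  # 10000
```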
Spotify occupies a middle ground, adopting the DDEX industry standard so AI-assisted tracks can be labeled in credits while also deploying a spam filter targeting mass-produced or fraudulent content. The platform explicitly bans unauthorized AI voice clones and vocal impersonations, stating such content “will be removed.” Unlike Deezer, Spotify has not disclosed figures on how many AI-generated tracks it has detected, nor has it published details about its detection methodology.
The Self-Reporting Problem
The divergence between Apple’s voluntary tagging system and Deezer’s automated detection highlights a fundamental tension in the industry’s approach. Self-reporting requires the uploading party — often the very entity that stands to profit from AI-generated content — to disclose the use of AI. Deezer’s data suggests this assumption is unreliable: the company reported that 85 percent of AI-music streams on its platform in 2025 were fraudulent, meaning the uploaders had every incentive not to flag the content as synthetic.
Apple’s framework also faces a definitional challenge. The company has deferred to content providers to determine what qualifies as “AI-generated,” leaving open questions about tracks that use AI for mixing, mastering, or vocal processing alongside human performance. In an era when AI tools are increasingly embedded in standard music production workflows, the boundary between AI-assisted and AI-generated is far from clear.
Bandcamp has taken the most restrictive stance of any platform, explicitly banning music “produced entirely or mainly by AI” and reserving the right to remove suspected generative content. Qobuz, the French high-fidelity streaming service, has released an “AI Charter” and deployed its own proprietary detection tool while committing to 100 percent human-curated recommendations. Tidal and SoundCloud have focused on a different dimension of the problem, committing not to use uploaded catalogs to train AI models without explicit opt-in consent.
Legal Battles and Regulatory Pressure
The platform-level policy divergence is unfolding against a backdrop of active litigation and shifting regulation. Sony Music, along with Universal Music Group and Warner Music Group, sued AI music generators Suno and Udio in 2024 for mass copyright infringement. Oral arguments on Suno’s motion to dismiss were scheduled for March 20, 2026, while Germany’s performing rights organization GEMA has a ruling against Suno scheduled for June 12, 2026.
Regulatory developments are adding further pressure. The UK government reversed course on March 18, abandoning a proposed copyright exception that would have allowed AI companies to train on copyrighted music without explicit permission. In the United States, the White House issued a legislative framework on March 20 that acknowledged intellectual property concerns around AI training but stopped short of taking a definitive position, stating it supports allowing courts to resolve the question.
The Revenue Context
The stakes are substantial. The IFPI’s Global Music Report, released alongside Sony’s deepfake disclosure, showed global recorded music revenues grew 6.4 percent year-over-year in 2025 to $31.7 billion. Paid streaming subscriptions drove that growth, with revenue from the format rising 8.8 percent and accounting for 52.4 percent of global revenues. The number of paid streaming subscribers reached 837 million.
For an industry generating nearly $17 billion from streaming alone, the integrity of that revenue depends on accurate attribution and legitimate content. Every deepfake track that accumulates streams diverts royalties from the impersonated artist to the fraudster who uploaded it. Spotify’s per-stream payout is commonly estimated at $0.003 to $0.005, meaning a deepfake that accumulates a million plays could redirect $3,000 to $5,000 from its intended recipient.
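The arithmetic behind that estimate is straightforward. A minimal sketch, using the commonly cited payout range rather than any official rate:

```python
# Estimated royalties a deepfake could divert from the impersonated
# artist. The per-stream payout range is a widely cited estimate,
# not an official Spotify rate.
PAYOUT_LOW = 0.003   # USD per stream, low estimate
PAYOUT_HIGH = 0.005  # USD per stream, high estimate

def diverted_royalties(streams: int) -> tuple[float, float]:
    """Return the (low, high) range of royalties redirected by a
    deepfake that accumulates the given number of streams."""
    return streams * PAYOUT_LOW, streams * PAYOUT_HIGH

low, high = diverted_royalties(1_000_000)
print(f"${low:,.0f} to ${high:,.0f}")  # $3,000 to $5,000
```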
The question facing the industry is whether the current patchwork of platform-specific approaches is sustainable. Deezer has made the case for a unified, detection-first model by offering its technology to competitors, but adoption has been slow. Apple’s metadata-driven approach may eventually become an industry standard — particularly if mandatory tagging replaces the current voluntary system — but enforcement remains an open question. Until the industry converges on a common framework, artists and labels face the prospect of navigating a different set of rules on every platform, while the volume of AI-generated content continues to grow.