In January 2026, Universal Music Group chairman Lucian Grainge sent his annual memo to staff. Buried inside a nearly 3,000-word letter about superfan tiers and streaming deals was a phrase that spread across the music industry like wildfire: "AI slop."
Grainge used it to describe what UMG had long predicted — a flood of low-quality AI-generated content overwhelming streaming platforms. "Validating business models that fail to respect artists' work and creativity, and promote the exponential growth of AI slop on streaming platforms, is a grave disservice to artists, songwriters and all of us who work in music," he wrote.
Strong words. But here's the question nobody in the industry seems to want to answer: Who actually decides what's slop?
The Numbers Behind the Flood
The scale of what Grainge was referring to is real. According to data from Deezer — currently the only major streaming platform with AI detection built in — approximately 60,000 fully AI-generated tracks are uploaded to the platform every single day. That represents roughly 39% of all music delivered to Deezer daily (as of early 2026, per the Say No To Suno campaign data).
To put that in perspective: the average album takes months or years to make. AI can generate a complete track in seconds.
And yet, despite this volume, fully AI-generated tracks account for only around 0.5% of total streams on Deezer. Listeners, it turns out, are still gravitating toward music made by human artists — even if they can't always explain why.
That last part matters more than it sounds.
97% of Listeners Can't Tell the Difference
In November 2025, Deezer partnered with research firm Ipsos to run a first-of-its-kind blind listening study across eight countries with 9,000 participants. The task was simple: listen to three tracks — two AI-generated, one human-made — and identify which were artificial.
It's worth being precise about the methodology: the test was weighted toward AI (two of the three tracks were synthetic), so a listener guessing at random would single out the human track only one time in three. Even so, the scale of the failure tells a story: 97% of participants could not reliably identify the AI-generated tracks. The gap between what our ears can detect and what our brains believe about music has never been wider.
More than half of respondents said the results made them uncomfortable, and 80% said AI-generated music should be clearly labeled on streaming platforms.
They want to know. They just can't tell on their own.
The Slop Problem Nobody Is Solving
Here's where it gets uncomfortable for UMG and the major labels.
Grainge is right that AI is flooding platforms with low-quality content. Deezer found that up to 85% of all streams on AI-generated tracks were detected as fraudulent in 2025 — up from 70% the year before. (To be clear: this is detected fraud specifically on AI content, not a claim that all AI music is fake-streamed. For context, streaming fraud across Deezer's full catalog runs at around 8%.)
Breaking Rust's "Walk My Walk" is the most-discussed example of what happens when AI music escapes the slop filter. The track — fully AI-generated, with no credited human performers — hit No. 1 on Billboard's Country Digital Song Sales chart in November 2025. It was the first AI-generated country song to do so.
There's important context: the Country Digital Song Sales chart is a low-volume ranking where a few thousand purchases can determine the top spot, and TIME Magazine questioned whether the result was artificially inflated for publicity purposes. But the story itself — and the media frenzy it generated — proved something important: the industry has no agreed-upon system for judging quality. Just origin.
UMG can protect its artists from having their royalties diluted by AI content. That's real progress. But "not made by AI" is not the same as "good." And "made by AI" is not the same as "slop."
Detection Isn't Curation
Deezer deserves credit for building AI detection. They tag AI content, remove it from algorithmic recommendations, and filter fraudulent streams. They know what's synthetic. What they don't — and can't — tell you is whether it's worth your time.
Spotify has removed tens of millions of "spammy tracks" from its platform. Apple Music hasn't announced AI detection tools at all. None of these platforms has a quality layer. They have an origin layer.
There are brilliant AI-assisted tracks being buried under a wave of obvious filler, and there is plenty of lazy human-made music clogging the same playlists.
The labels can protect royalty pools. The DSPs can filter fraud. But nobody has built the system that answers the actual question listeners have when they hit play: Is this worth listening to?
Blind Rating Is a Different Answer
That's the problem VoteMyAI was built to solve.
Not: is this AI or human? Not: who made it, and how many followers do they have?
Just: is it good?
On VoteMyAI, tracks are rated blind. No artist name. No follower count. No label. No origin story. Just the music, and a rating from 1 to 5. If a track made with Suno is genuinely great, it ranks. If a track from a signed artist is generic, it doesn't.
The leaderboard isn't a royalty pool or an algorithm. It's a community answer to the question the industry keeps avoiding.
Grainge is right that AI slop is a problem. But the solution isn't just better fraud detection — it's better quality signal. And right now, that signal doesn't exist anywhere except in the ears of actual listeners.
The labels know what AI music is. They don't know if it's good. Neither does anyone else — yet.
Browse the Top-Rated AI Tracks →
Made Something With AI Music?
Submit your tracks to VoteMyAI and let the community rate them. Suno, Udio, ElevenLabs — all welcome.
Submit Your Track →