On March 10, a music video called "Take the Lead" was released on YouTube. By the time the major outlets had finished writing about it, the video had fewer views than a decent cat video gets on a slow Tuesday.
The song was made by Tilly Norwood — or rather, by the team at Particle6 behind the AI-generated "actress" they've been building into a character. The tune was generated using Suno. The video was assembled using AI tools and performance capture from the company's founder, Eline van der Velden, along with 18 other human contributors. The timing was deliberate: the Oscars are this Sunday, and the song is framed as a message to the film and entertainment industry about embracing AI.
It didn't land well. Gizmodo called it "the worst song I've ever heard." TechCrunch said the same. Euronews published a piece describing it as "cringe-inducing and obnoxious slop." The Hollywood Reporter covered it straight. Deadline, Variety, ABC News, NBC Today, Forbes — all covered it. The comment sections on every platform were brutal.
Ten major outlets. Under 30,000 views in the first day. That gap is the story.
The Label Comes First
This isn't really about Tilly Norwood. The song is what it is — generic pop production, on-the-nose lyrics, a PR stunt with flamingos. But the response it got has almost nothing to do with the music itself.
Consider what happened before most people pressed play: they read a headline calling it the worst song ever recorded. They saw "AI-generated actress." They knew it was made with Suno. They knew it was timed to the Oscars, which had just announced its theme as "human touch, human connection, and actual intelligence — not artificial intelligence." The music director of the 98th Academy Awards said that line out loud at a press conference. The industry had already drawn the battle lines.
By the time anyone clicked on the video, the verdict was in.
This is not speculation. A 2023 study published in the Journal of Experimental Psychology: Applied tested exactly this dynamic across three experiments. When participants were told that classical music was composed by an AI, they rated it significantly lower than when told it was composed by a human, even though the actual audio was identical. The researchers, Shank and colleagues, called the effect "AI composer bias": listeners like music less when they think it was made by an AI, and the bias held consistently across populations and music styles.
A more recent 2025 study from researchers at the Universitat Autònoma de Barcelona found that the same pattern extends to performance: identical music was rated lower when attributed to AI. Another 2025 paper found that among participants who listened to AI-generated pop songs labelled as either AI-composed or human-composed, the AI label consistently lowered scores across liking, quality, and emotional-response measures. The effect was also "sticky": attempts to reduce the bias through information didn't reliably work.
This is not a niche finding. It has been replicated multiple times, across genres, across cultures, among both expert listeners and casual ones. The phenomenon is real, well-documented, and almost never discussed in the coverage of actual AI music releases.
What the Oscars Week Reveals
The context around the Tilly Norwood video this week makes it a near-perfect experiment in how labeled environments shape music reception.
The Oscars — Hollywood's most-watched night — explicitly framed 2026 as a stand against artificial intelligence in creative work. The ceremony is this Sunday. Every entertainment journalist covering AI music this week was writing inside that frame. The word "slop" appears in multiple headlines. The response was pre-loaded before anyone heard a note.
Is the song bad? The production is generic, yes. The lyrics are painfully literal — "I'm just a tool, but I've got life" isn't going to outlast the news cycle. But "bad" and "worst song ever recorded" describe different things. The gap between those two assessments is almost entirely explained by the label on the tin, the timing of the release, and the political context of AI in entertainment right now.
Strip away the Tilly Norwood branding, remove the Suno credit, post the song anonymously on a platform where nobody knows where it came from — and you get a completely different conversation.
The Wider Problem
The Tilly Norwood situation is extreme because of the AI actress framing, the Hollywood politics, and the Oscars timing. But the same dynamic plays out constantly, at a lower intensity, across all AI music.
AI music avatars like Solomon Ray (who topped Gospel Digital Song Sales in November) and Xania Monet (who hit No. 1 on R&B Digital Song Sales in September) succeeded partly because their AI origin wasn't front and center when listeners first discovered them. When the label came later, the backlash followed. When there's no label, the music competes on its own terms.
Apple Music just introduced AI transparency metadata tags. Streaming platforms are adding disclosure systems. The broader industry move is toward more labeling, more transparency, more context baked into how people encounter AI music. That's not inherently wrong. Transparency has legitimate value.
But it does mean that AI music will increasingly be consumed through a filter that changes how it's perceived before a single note plays. The label arrives before the sound.
What Blind Rating Actually Changes
This is the problem VoteMyAI was built around. Not "AI music is good, actually" — the quality varies wildly, just like any other category of music. The problem is that we can't currently have an honest conversation about which AI music is good and which isn't, because the label overwrites the listening.
On VoteMyAI, you hear the track first. You rate it. Then you find out what it was — what tool made it, whether it charted, who submitted it. The rating happens before the context, not after. That's not a revolutionary idea. It's just not what any major platform is currently building toward.
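The hear-first, rate-then-reveal sequence is simple enough to sketch in a few lines. Everything below is hypothetical and illustrative, not VoteMyAI's actual code: the class names, fields, and 1-to-5 rating scale are assumptions, chosen only to show the ordering constraint that metadata stays hidden until a rating exists.

```python
from dataclasses import dataclass


@dataclass
class Track:
    """A submitted track. All fields except audio_url are withheld at first."""
    audio_url: str
    tool: str        # e.g. "Suno" (hypothetical field)
    submitter: str   # hypothetical field
    charted: bool    # hypothetical field


class BlindSession:
    """Minimal sketch of a hear-first, judge-second rating flow."""

    def __init__(self, track: Track):
        self._track = track
        self.rating = None

    def present(self) -> dict:
        # Before rating, the listener sees only the audio: no names,
        # no tool credits, no chart history.
        return {"audio_url": self._track.audio_url}

    def rate(self, score: float) -> None:
        # Assumed 1.0-5.0 scale; the real platform's scale may differ.
        if not 1.0 <= score <= 5.0:
            raise ValueError("score must be between 1.0 and 5.0")
        self.rating = score

    def reveal(self) -> dict:
        # Context unlocks only after a rating is recorded.
        if self.rating is None:
            raise RuntimeError("rate the track before revealing its context")
        return {
            "tool": self._track.tool,
            "submitter": self._track.submitter,
            "charted": self._track.charted,
        }
```

The point of the sketch is the ordering: `reveal()` refuses to run until `rate()` has, which is the inverse of a labeled environment, where the metadata arrives before the audio does.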
We've been running this for a few weeks. The results are interesting. Tracks that wouldn't survive a labeled environment — submitted from obscure Suno accounts with no following — regularly score above 4.0. Tracks with good PR context sometimes tank. Because the music is doing the work, not the story around it.
That's a harder thing to build an audience around than "AI is taking over music." But it's a more honest one.
That's worth thinking about. Not just for AI music, but for how we listen to anything that arrives with a loaded label attached.
The Oscars are Sunday. Conan O'Brien is hosting. Sinners is expected to break records. And somewhere in the middle of all that, there's a four-minute video about a fictional AI actress riding a flamingo through the sky that got ten times more press coverage than views.
That's the test. That's where we are.
Hear First. Judge Second.
VoteMyAI is a blind rating platform for AI-generated music. No artist names, no tool credits, no clout. Just the track — and your honest reaction.
Rate Tracks on VoteMyAI →