APPLE JUST TOLD THE WORLD TO LABEL AI MUSIC. WE'VE BEEN DOING THE OPPOSITE ON PURPOSE.

On March 4, Apple Music announced "Transparency Tags," a new metadata system that asks labels and distributors to disclose when AI was used in a song's artwork, audio, composition, lyrics, or music video. Apple's exact words: "Proper tagging of content is the first step in giving the music industry the data and tools needed to develop thoughtful policies around AI."

That sounds reasonable. Measured. Responsible, even.

It's also built on an assumption that deserves a lot more scrutiny than it's getting. The assumption is simple: knowing something was made with AI changes how you should evaluate it. We built VoteMyAI on the exact opposite belief.

THE LABEL CHANGES THE LISTENING

There's a well-documented phenomenon in psychology called anchoring bias. Give someone a piece of information before they make a judgment, and that information warps the judgment itself. It doesn't matter if the information is relevant. It doesn't matter if the person is aware of the bias. The anchor pulls.

Music is not immune to this. Studies have shown that listeners rate the same piece of music lower when told it was created by AI. Not different music. The exact same audio. Same melody, same production, same emotional arc. The only variable is the label. And the label alone tanks the rating.

That's not transparency. That's priming.

Apple isn't asking listeners to judge a song on its sound. They're asking them to judge it through a filter. A filter that says "this was made differently, so maybe you should feel differently about it." The tag becomes a warning. The warning becomes a verdict. And the music never gets a fair hearing.

WHY APPLE IS DOING THIS

To be fair, Apple is responding to real problems. The fraud numbers are staggering.

Deezer now receives 60,000 fully AI-generated tracks per day, up from 10,000 in January 2025, 30,000 in September, and 50,000 in November. Thirty-nine percent of all music delivered to Deezer daily is AI-generated. And here's the kicker: Deezer reports that 85% of streams on AI-generated tracks are fraudulent. Bots streaming bot music to collect royalties from a pool funded by real listeners.

Platforms absolutely need tools to fight this. The question is whether slapping a label on every track that used AI in any capacity is the right tool, or whether it punishes legitimate creators alongside the grifters.

That fraud problem is worth solving. Urgently. But transparency tags don't solve it. A bot farm uploading 10,000 AI-generated tracks per day is not going to honestly tag them. The people who comply with Apple's tagging system are the ones who were never the problem in the first place: independent creators using Suno or Udio to make real music with real creative intent.

Deezer took a different approach entirely. They built detection systems that flag AI-generated audio automatically, without relying on self-reporting. That's closer to an actual solution. Apple's approach, by contrast, puts the burden on labels and distributors. Apple itself does not detect AI. It only requires disclosure. The tags are optional for now. If content arrives without a tag, Apple simply assumes no AI was involved.

So the honest creators get tagged. The fraudsters don't. And the system mostly functions as a scarlet letter for people playing by the rules.

THE INDUSTRY IS TALKING OUT OF BOTH SIDES OF ITS MOUTH

The timing of Apple's announcement is fascinating when you look at what else happened the same week.

Charlie Puth was named Chief Music Officer of Moises, an AI music tool, on the same day Apple dropped its tag system. His statement: AI is not here to replace musicians. He's positioning himself as the bridge between traditional artistry and AI tools. Not anti-AI. Not pro-replacement. Just pragmatic.

Warner Music CEO Robert Kyncl told shareholders this week that AI is music's next growth engine. WMG is the only major label to have licensed Suno so far. They're not fighting AI. They're monetizing it. That's a very different posture than "we need warning labels."

So which is it? Is AI music a threat that needs to be flagged and disclosed? Or is it the next growth engine for the industry's biggest players? The answer depends on who's talking, and whether they've already figured out how to profit from it.

The labels that have AI licensing deals want to sell AI music. The labels that don't have deals want to slow it down until they get their cut. Transparency tags serve the second group beautifully. Tag it, stigmatize it, create friction, buy time to negotiate.

BLIND RATING EXISTS FOR A REASON

VoteMyAI was built on a specific thesis: music should be judged by how it sounds. Not by who made it. Not by what tools were used. Not by how many followers the creator has.

Every track on the platform is rated blind. You hear the song. You rate the song. You don't know if it was made in a bedroom with Suno or in a professional studio with a live band. You don't know if the creator is 16 or 60. You don't know if they're famous or anonymous.

That's the point.

The blind model removes anchoring bias entirely. There's no tag to prime your judgment. There's no "AI" label making you wonder if you should like this less. There's just sound hitting your ears, and your honest reaction to it.

Some of the highest-rated tracks on VoteMyAI were made with AI tools. Some weren't. Users don't know which is which until after they've rated. And consistently, the community rates based on quality, not origin. A great song is a great song. A bad song is a bad song. The tools are irrelevant.
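The blind model described above can be sketched in a few lines. This is an illustrative sketch, not VoteMyAI's actual code or schema; the field names and functions here are assumptions. The point it demonstrates is structural: the rater's view contains nothing but the sound, and the creator and tooling only attach after the rating is locked in.

```python
from dataclasses import dataclass

# Hypothetical track record -- field names are illustrative,
# not VoteMyAI's actual schema.
@dataclass
class Track:
    track_id: str
    audio_url: str
    creator: str          # hidden during rating
    tools_used: list      # hidden during rating ("Suno", "live band", ...)

def blind_view(track: Track) -> dict:
    """Return only what a rater needs: the sound, plus an
    opaque id for vote bookkeeping. No creator, no tools."""
    return {"track_id": track.track_id, "audio_url": track.audio_url}

def reveal_after_vote(track: Track, rating: int) -> dict:
    """Creator and tooling surface only once the rating is locked in,
    so the anchor can't pull the judgment."""
    return {"rating": rating, "creator": track.creator,
            "tools_used": track.tools_used}

# The rater sees audio and nothing else.
t = Track("t-001", "https://example.com/t-001.mp3", "anon-poet", ["Suno"])
view = blind_view(t)
assert "creator" not in view and "tools_used" not in view
```

The design choice is the whole argument in miniature: anchoring bias can't operate on information that isn't in the payload.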

THE BLIND TEST NOBODY TALKS ABOUT

When researchers played AI-generated and human-made music to listeners without labels, accuracy at identifying which was which hovered around 50%. Coin flip territory. The music itself doesn't carry a reliable signature. The only way most people "know" a track is AI is because someone told them.

TRANSPARENCY VS. BIAS

Apple's system solves an industry problem. It gives labels, distributors, and rights holders a data layer they've been asking for. It helps with licensing negotiations. It helps with legal compliance. It helps platforms track what percentage of their catalog is AI-assisted.

Those are legitimate business needs. Nobody is arguing otherwise.

But none of those are artistic questions. The artistic question, the one that actually matters to listeners, is simple: is this music good? And transparency tags make that question harder to answer honestly, not easier.

Imagine a world where every painting in a gallery had a tag: "This artist used Photoshop" or "This artist used only oil paints." Would you look at the art differently? Of course you would. That's the problem. The tag interferes with the experience. It inserts a judgment before you've formed your own.

The music industry has always been obsessed with credentials. Who signed you. What studio you recorded in. Who produced the track. These are the same gatekeeping signals that kept talented outsiders out for decades. AI tools cracked those gates open. A poet from Mississippi used Suno to create a Billboard #1 artist. She didn't need a label's permission or a producer's approval. She needed words and a tool.

Transparency tags put a new gate in front of the same door. Different lock, same effect. "We'll let you in, but we're going to mark you."

WHAT APPLE GETS WRONG

Apple's framing treats AI as a binary. Used AI or didn't. But music creation in 2026 doesn't work that way.

Auto-Tune is AI-adjacent technology. Melodyne uses machine learning. Sample libraries are curated by algorithms. Mastering services like LANDR are fully automated. At what point does a tool cross the line from "production software" to "AI that must be disclosed"?

If a songwriter writes every lyric by hand, records their own vocals, but uses an AI mastering service for the final mix, does that track get tagged? If a producer uses AI to generate a drum pattern but plays every other instrument live, is that AI music? The system Apple proposed has four categories: artwork, track audio, composition/lyrics, and music video. But the boundaries between those categories are blurring faster than any tagging system can keep up.
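The binary problem is easy to make concrete. Apple's actual tag format isn't public, so the sketch below is an assumption-laden illustration: it models the four announced categories as booleans and shows how any disclosure scheme of that shape collapses wildly different workflows into the same flag. The function and field names are hypothetical.

```python
# The four disclosure categories Apple announced. The real metadata
# format is not public; everything below is illustrative.
CATEGORIES = ("artwork", "track_audio", "composition_lyrics", "music_video")

def transparency_tags(tools_by_category: dict) -> dict:
    """Collapse a granular tool list into per-category AI booleans.
    Any AI touch anywhere in a category flips the whole category to
    True -- that collapse is the 'binary' problem."""
    tags = {c: False for c in CATEGORIES}
    for category, tools in tools_by_category.items():
        if any(tool.get("ai") for tool in tools):
            tags[category] = True
    return tags

# A hand-written, live-recorded song with one AI mastering pass...
mastered_only = {"track_audio": [{"name": "AI mastering service", "ai": True},
                                 {"name": "live vocals", "ai": False}]}
# ...and a fully generated track end up with the identical audio tag.
fully_generated = {"track_audio": [{"name": "full-track generator", "ai": True}]}

assert (transparency_tags(mastered_only)["track_audio"]
        == transparency_tags(fully_generated)["track_audio"])
```

One bit per category cannot distinguish "AI mastered the mix" from "AI generated the entire track," which is exactly the boundary problem the tagging system inherits.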

The honest answer is that almost all music made in 2026 involves some form of AI or machine learning somewhere in the chain. Drawing a hard line between "AI" and "not AI" is increasingly meaningless. What matters is whether the music connects. Whether it moves you. Whether it's good.

THE QUESTION NOBODY IS ASKING

Here's what keeps getting lost in the transparency debate. Nobody is asking the most obvious question: does knowing a song was made with AI actually help the listener?

Does it make the song sound better? No. Does it help you find music you like? No. Does it improve your listening experience in any measurable way? No.

The only thing it does is give you a reason to like something less. That's not a feature. That's a bug.

Apple's transparency tags solve problems that matter to lawyers, executives, and rights holders. They do nothing for the person pressing play. And they actively harm creators who happen to use a newer set of tools.

VoteMyAI exists because we think the listener deserves better than that. You deserve to hear music without a filter telling you how to feel about it. You deserve to rate a song based on what it sounds like, not what software was involved. You deserve the experience of discovering something incredible and not having that discovery tainted by a metadata tag.

Apple wants to label music. We want to liberate it. The industry will have to pick a side eventually. We already picked ours.

YOUR MUSIC. RATED BLIND.

Submit your track to VoteMyAI. No names. No clout. Just sound.

Submit Your Track →