THE MUSIC INDUSTRY IS SPLITTING IN TWO. ONE SIDE HAS CONSENT. THE OTHER HAS 60,000 FAKE TRACKS A DAY.

Harvey Mason Jr., the head of the Recording Academy, recently said he sees AI in every single studio and every single session he walks into. The same week, a Gallup survey found that only 3 percent of Gen Z trusts work that is fully AI-generated. Two realities. Same industry.

The music industry is no longer having a debate about AI. It has split into two separate worlds that happen to share the same tools. One is built on consent, licensing, and transparency. The other is built on fraud, hijacking, and a flood of content nobody asked for. Both are growing at the same time.

THE CONSENT ECONOMY

In mid-April 2026, The Hollywood Reporter published a conversation with Suno chief executive Mikey Shulman in which he described a clear shift in mood at the start of 2026. He said he rarely meets a producer or songwriter who is not using Suno somewhere in their workflow. The change, according to him, is that people are starting to feel comfortable being open about it.

The songwriter Autumn Rowe, who has written for Jon Batiste, Dua Lipa, and Ava Max, confirmed in the same piece that her peers use Suno for demos and are actually landing cuts on artists. She has started pulling old demos out of her archive and remixing them through Suno to see what they become. This is not an experiment at the edges of the business. These are career songwriters working on the records that will define the next year.

In Nashville, a platform called Soundbreak launched in February 2026. It was founded by Kevin Griffin of Better Than Ezra and lets fans co-write songs with licensed AI models of real artists, including Jaren Johnston of the Cadillac Three and Michael Fitzpatrick of Fitz and the Tantrums. The artists worked directly with the engineers who built their models. They get paid through subscription revenue and share ownership of the resulting songs. Better Than Ezra even launched a contest where the winning fan-written track becomes the band's next single.

On the label side, the split is visible in the fine print. Warner Music Group settled with Suno without demanding a walled garden, which means tracks generated in partnership can move across streaming services like any other release. Universal Music Group settled with Udio but insisted on a walled garden with no downloads and no external distribution. Two majors, two answers, both inside the consent economy.

ElevenLabs sits firmly on the consent side as well. The company is now valued at 11 billion dollars, its community has generated 14 million songs, and its music model was trained on licensed data rather than scraped catalogs. Xania Monet, the artist project built by Telisha Jones, writes 90 percent of her own lyrics, uses Suno for production, and signed a 3 million dollar deal with Hallwood Media last year. Apple Music rolled out voluntary AI transparency tags in March 2026 so listeners who want to know can see the disclosure before pressing play.

None of this is underground. This is the Grammy organization, major labels, career songwriters, and publicly traded platforms. Tools like AI Song Maker and Soundverse are making the same workflows available to bedroom producers. The consent economy is real, it is funded, and it is growing.

THE FRAUD MACHINE

The other world runs on the same tech and the opposite intent.

Deezer now receives more than 60,000 fully AI-generated tracks per day, roughly 39 percent of everything uploaded to the platform. That number was 10,000 a day in January 2025, 20,000 in April, 30,000 in September, 50,000 in November, and 60,000 by January 2026. The curve shows no sign of flattening. Of the streams that actually land on those AI-generated tracks, Deezer flagged up to 85 percent as fraudulent in 2025, driven by bot farms inflating plays to drain the royalty pool.

Deezer has now tagged more than 13.4 million AI tracks inside its catalog. It remains the only major streaming service that labels AI-generated music explicitly, which is why its numbers are the only honest numbers available. Everyone else is guessing.

Spotify has a different problem. On April 14, 2026, Digital Music News reported that jazz pianist Jason Moran, the former artistic director for jazz at the Kennedy Center, discovered an EP called For You sitting on his official Spotify profile. It contained indie-pop songs. As Moran put it, there is not even a piano player on the whole record. Danish musicians Carsten Dahl, Thomas Blachman, and Chris Minh Doky have reported the same hijacking. Uploaders are using the official artist profiles of living jazz musicians as distribution channels for AI slop.

Moran asked the question nobody wants to answer. How is John Coltrane or Billie Holiday supposed to verify that a new release on their profile is actually theirs? Spotify launched an opt-in Artist Profile Protection beta in March 2026 that lets living artists approve releases before they appear. Dead artists have no such option. Spotify has also had to remove 75 million spam tracks in the past 12 months. Sony Music has pulled 135,000 deepfake songs from streaming platforms.

The fraud detection firm Beatdapp estimates that 5 to 10 percent of all streams globally may be fake, costing the industry 1 to 2 billion dollars per year in misrouted royalties. In late 2024 a North Carolina man named Michael Smith was charged with running a seven-year scheme that used AI-generated tracks and botted streams to pocket more than 10 million dollars in fraudulent royalties. It was the first criminal AI music fraud case in the United States. It will not be the last.

THE TRUST COLLAPSE

While the industry argues about supply, the audience has already made up its mind.

A Gallup survey conducted between February 24 and March 4, 2026, polled 1,572 Americans aged 14 to 29 across all 50 states. Excitement about AI dropped from 36 percent to 22 percent year over year, a 14 point collapse. Hopefulness fell 9 points, from 27 percent to 18 percent. Anger rose 9 points, from 22 percent to 31 percent. Weekly AI usage stayed almost exactly flat at 51 percent.

The trust numbers are the ones that matter for this story. Gen Z reports 69 percent trust in work that is fully human-made, 28 percent trust in work that is AI-assisted, and 3 percent trust in work that is fully AI-generated. Eighty percent believe AI will make learning harder. Forty-two percent believe it will damage their ability to think critically. Thirty-eight percent believe it will damage their creativity.

The striking finding is that the heaviest AI users are souring fastest. Daily AI users are 18 points less excited and 11 points less hopeful than they were a year ago. Gallup researcher Zach Hrynowski described it plainly: this is not a case of some people adopting AI and loving it while others avoid it. Even the power users are getting more skeptical.

The workplace data matches. A survey by Writer and Workplace Intelligence found that 29 percent of all employees are actively sabotaging their own company's AI strategy. Among Gen Z workers, that figure jumps to 44 percent. Sixty percent of executives say they are considering letting go of employees who refuse to use AI. Usage without trust. That is where the audience actually sits.

TWO INDUSTRIES, ONE NAME

Put the numbers next to each other and the split is impossible to miss. Suno now has 2 million subscribers, 300 million dollars in annualized revenue, and a 2.45 billion dollar valuation. The platform is generating around 7 million tracks per day. At the same time, Deezer is flagging 85 percent of its AI music streams as fraud.

The Hollywood Reporter is describing an industry where songwriters are quietly using Suno on records that will top the charts. Gallup is describing an audience where anger at AI is the fastest-growing emotion. Both accounts are true. They describe the same technology being used by different people for different reasons.

The open question is whether the two worlds can coexist permanently. If the fraud side keeps growing at the current rate, it may poison the legitimacy of the consent side before transparency tools can catch up. Listeners may stop trying to tell the difference and simply apply the 3 percent number to everything. Transparency tags, Artist Profile Protection, AI labeling, and licensed training data are all real steps in the right direction. It is not obvious they can outrun 60,000 new AI uploads a day.

For creators working inside the consent side, there are tools that help push the work past its prompt origin. Stem separation platforms like LALAL.AI let producers tear AI output apart and rebuild it with enough human intervention that the result stops sounding like a template. The work still has to stand on its own when the labels and the prompts are stripped away.

The industry keeps talking about this as one argument. It is really two entirely different industries using the same tool. Right now one is winning and one is losing, and the real problem is that most listeners cannot tell them apart.

At VoteMyAI we rate tracks blind. No artist name, no tool name, no marketing copy. Just the music and the listener. When trust in AI work is sitting at 3 percent, the only number that still means anything is what happens when a song has to earn its ears on sound alone. For the full set of tools being used on both sides of the split, see the creator resources page.