SUNO IS SHUTTING DOWN ITS CURRENT MODELS. HERE'S WHAT IT MEANS FOR AI MUSIC

Suno, the company that let millions of people generate full songs from a text prompt, is pulling the plug on the models that made it famous. The existing models, trained on unlicensed music, are being retired. In their place: a new generation built entirely on licensed training data. For the AI music world, this is not a minor version bump. It is a reset.

The move follows a settlement with Warner Music Group in November 2025 that ended what could have been a landmark copyright lawsuit. The terms were never fully disclosed, but the outcome was clear. Suno agreed to transition away from models trained on copyrighted material without permission. In return, Warner dropped the case and entered a licensing partnership that gives Suno legal access to one of the largest music catalogs in the world. Universal Music Group and Sony Music remain in active litigation against Suno. Only Warner has settled.

The stakes are not small. Suno raised $250 million in a Series C round, valuing the company at $2.45 billion. That kind of money buys time and legal firepower, but it also raises the pressure to reach licensing agreements with the remaining majors. A company valued at nearly two and a half billion dollars cannot afford to operate indefinitely on models that two of the three major labels consider infringing.

THE DEAL THAT CHANGED EVERYTHING

The Suno/Warner settlement was more than a legal ceasefire. It restructured how the company operates. Warner Music Group had joined the wave of major-label lawsuits in 2024, alleging that Suno's models were trained on copyrighted recordings without authorization. Rather than fight it out in court, Suno chose the licensing route. UMG and Sony, however, have not followed Warner's lead. Their cases against Suno remain active, and the outcome will shape what the licensed model era actually looks like.

As part of the Warner agreement, Suno acquired Songkick, the live events discovery platform. An important distinction: Songkick's ticketing business was sold to Live Nation back in 2017. What Warner held onto was the app and the brand, focused on helping fans discover concerts and track artists. That is what Suno bought. On the surface, buying a concert discovery app seems like an odd move for an AI music generator. But it signals where Suno sees its future: not just generating tracks, but building an ecosystem around music creation, discovery and live performance. If AI-generated artists ever draw real audiences, Songkick gives Suno the infrastructure to connect those listeners with events.

The old models are being phased out through 2026. Suno has not announced an exact shutdown date, but the direction is set. Every model going forward will be trained exclusively on music that has been licensed from rights holders.

WHAT CHANGES FOR USERS

This is where it gets real for the people who actually use Suno every day.

Free-tier users will no longer be able to download generated music. You can still create, play and share tracks within the platform, but the files stay on Suno's servers. No more dragging AI-generated WAVs into your DAW or uploading them to streaming platforms from a free account.

Paid-tier users keep download access, but with monthly limits. The exact numbers have not been finalized, but the days of unlimited generation and export on a flat subscription are ending. Suno needs to pay licensing fees on every track its models produce, and those costs have to land somewhere.

For creators who have been using Suno as a production tool, this changes the economics. If you need stems or want to process your AI tracks further with tools like LALAL.AI for stem separation and cleanup, you will need a paid plan. The free experimentation era is winding down.

UDIO'S PARALLEL PATH

Suno is not alone in making this pivot. Udio, its closest competitor, struck a similar licensing deal with Universal Music Group. But Udio is taking a fundamentally different strategic direction. The new Udio lets users remix existing licensed songs, create new songs in the style of specific artists, and even use artist voices through an opt-in program where participating artists license their vocal likeness to the platform.

There is a significant catch. Udio operates as a walled garden. Users cannot export or download the music they create. Everything stays on the platform. You can listen and share within Udio, but you cannot take your track to a DAW, upload it to Spotify, or use it in a video. For users who came to Udio as a creative tool for producing music they could actually use, this is a hard pivot away from what made the platform attractive in the first place.

The logic is clear from a licensing perspective. If the music never leaves the platform, UMG controls the distribution and collects its share of every stream. It is remix culture rebuilt as a closed ecosystem. Whether that resonates with creators who want ownership over their output remains to be seen.

The two deals together paint a picture of an industry finding its new shape. The major labels are not trying to kill AI music. They are trying to own the pipes.

THE LABELING PUSH

While Suno and Udio negotiate their futures with the majors, IFPI (the global trade body representing the recording industry) is pushing a different lever. Its focus: mandatory AI disclosure on streaming platforms.

The proposal would require any track that uses AI in its creation to carry a visible label on Spotify, Apple Music, Tidal and every other DSP. The argument is consumer transparency. Listeners have the right to know what they are hearing.

Apple has already moved in this direction with its AI Transparency Tags. The major labels want this to become an industry standard, not a platform-by-platform decision. IFPI's lobbying targets both the platforms and the regulators, pushing for rules that would make unlabeled AI music on streaming services a policy violation.

The practical challenge is definition. What counts as "AI-made"? A track fully generated by Suno is straightforward. But what about a human-written song with AI-assisted mixing? A vocal recorded by a human but pitch-corrected by AI? An instrumental composed by a musician but featuring AI-generated vocals from a platform like ElevenLabs? The line is blurry, and mandatory labeling forces someone to draw it.
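To make the definitional problem concrete, here is a purely illustrative sketch of the gradations a labeling rule would have to sort through. The category names and the cutoff are hypothetical assumptions for illustration, not any proposed standard from IFPI, Apple, or the labels.

```python
from enum import Enum

# Hypothetical taxonomy of AI involvement in a track.
# Every name and threshold here is illustrative, not an industry standard.
class AIInvolvement(Enum):
    NONE = "no AI used"
    ASSISTED_PRODUCTION = "human composition, AI-assisted mixing or pitch correction"
    AI_VOCALS = "human-composed instrumental, AI-generated vocals"
    FULLY_GENERATED = "prompt-to-track generation"

def needs_label(level: AIInvolvement) -> bool:
    # A mandatory-labeling rule has to draw the line somewhere.
    # This arbitrary cutoff flags everything beyond production assistance,
    # which is exactly the kind of judgment call the proposal forces.
    return level not in (AIInvolvement.NONE, AIInvolvement.ASSISTED_PRODUCTION)

print(needs_label(AIInvolvement.FULLY_GENERATED))      # True
print(needs_label(AIInvolvement.ASSISTED_PRODUCTION))  # False
```

Wherever the cutoff lands, tracks on either side of it will differ only marginally in how they were made, which is the core objection to a binary label.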

THE BIAS PROBLEM WITH LABELS

Here is the part nobody in the labeling conversation wants to talk about. Research consistently shows that telling listeners a track is AI-generated lowers their ratings of it: the same audio scores higher when the label is absent. The tag does the damage before a single note plays.

This is not hypothetical. Studies on AI perception bias have replicated this finding across music, visual art and writing. The label "AI" functions as a quality discount in people's minds, regardless of what the work actually sounds like.

If the industry mandates AI labels on streaming platforms, it creates a two-tier system where AI-assisted tracks start with a handicap. That might be fine if the goal is protecting established artists from competition. It is less fine if the goal is letting listeners find the best music regardless of how it was made.

BLIND RATING AS THE ANSWER

This is exactly why VoteMyAI exists. Instead of telling listeners what to think before they listen, we let them rate tracks without knowing whether they were made by a human, an AI, or some combination of both. The music speaks first. The context comes after.

In a world where IFPI wants every AI track tagged and the labels want to control the narrative, blind rating offers something radical: an honest signal. Thousands of ratings from real listeners who judged the sound, not the source.

The results are revealing. Some AI tracks score higher than major-label releases. Some score terribly. The distribution looks a lot like human music, which is exactly the point. Quality is not determined by the tool. It is determined by the output.

Mandatory labeling solves one problem (transparency) while creating another (bias). Blind rating addresses both. You get transparency after the rating, and you get an unbiased signal during it.
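The rate-first, reveal-after flow described above can be sketched in a few lines. This is a minimal illustration of the general idea, not VoteMyAI's actual implementation; every name here is a hypothetical assumption.

```python
from dataclasses import dataclass, field

# Illustrative sketch of blind rating: the track's source (human, AI, or
# hybrid) is stored from the start but only revealed after the listener's
# score is locked in, so the label cannot bias the rating.

@dataclass
class Track:
    title: str
    source: str                      # hidden during rating
    ratings: list = field(default_factory=list)

def rate_blind(track: Track, score: int) -> str:
    """Record a rating first, then reveal how the track was made."""
    if not 1 <= score <= 5:
        raise ValueError("score must be between 1 and 5")
    track.ratings.append(score)      # the score is committed...
    return track.source              # ...before the source is disclosed

t = Track("Demo Song", source="ai")
revealed = rate_blind(t, 4)
print(revealed)                      # "ai" -- disclosed only after rating
```

The design choice is simply ordering: commit the score, then disclose the context, so transparency arrives without ever acting as a quality discount.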

WHAT HAPPENS NEXT

The AI music industry in 2026 is reorganizing around a few core realities: training data must be licensed, distribution is shifting onto platforms the labels control, and disclosure rules are on the way.

Suno's relaunch is not the end of AI music. It is the beginning of AI music's second chapter, one where the rules are written by the same companies that wrote the rules for the first era of digital music. Whether that is good for creators and listeners depends entirely on how much room the new system leaves for genuine discovery and fair evaluation.

The tools are changing. The question of what sounds good has not. That is still up to the listeners.

RATE AI MUSIC WITHOUT THE BIAS

Thousands of blind ratings from real listeners. No labels, no prejudice. Just the music.

Start Rating on VoteMyAI →