An AI-generated song, “Jag vet, du är inte min”, has reached the top of Spotify’s Sweden chart under the name Jacub, an artist who, according to Swedish reporting, does not exist as a real-world performer. The track’s success, with millions of streams in early January 2026, is testing how streaming platforms, rights organisations and regulators draw the line between human authorship and synthetic production.
What listeners know about Jacub — and what remains unclear
Jacub is presented on Spotify like any other pop act: a profile, a catalogue of tracks, and a #1 position in Sweden’s most-played rankings. What is missing is a verifiable public identity. Swedish journalist Emanuel Karlsten reported that the music was created with the help of AI tools, and that “Team Jacub” described the project as a collective of experienced creators rather than a single artist.
That framing matters because it shifts the story away from the idea of a fully automated song and toward a hybrid workflow: humans steering a project, and AI supporting parts of the process. For listeners, however, the distinction is not always visible. Spotify’s charts do not automatically indicate whether a track was generated, assisted, or performed by machines.

How “Team Jacub” describes the role of AI
In messages cited by Swedish media, “Team Jacub” rejects the idea that they simply generated a finished song with one click. They describe a longer, human-controlled creative process in which AI was used as a tool.
This is one of the central disputes around AI music today: whether AI is being used like a synthesiser, Auto-Tune, or a digital audio workstation (technology that expands creative choices), or whether it is replacing authorship by generating the expressive elements that copyright typically protects.
Why STIM registration became part of the controversy
In Sweden, the debate quickly moved from charts to rights management. STIM — the Swedish performing rights organisation — has stated that music created by AI is not protected by copyright and should therefore not be registered as a protected work.
“Team Jacub” has argued that registration was justified because the track was made through a human-led process, with AI used only as support. Swedish reporting has also linked the credits behind the song to multiple individuals listed in STIM’s database, fuelling a broader argument: if a hit can be marketed as an artist persona while being built by a production collective with AI assistance, the industry may need clearer, auditable standards for what counts as authorship.
What Spotify’s AI policies can and cannot solve
Spotify has been tightening its approach to AI-related risks, largely in response to concerns about impersonation, misleading metadata and catalogue “spam” designed to siphon royalties. Those measures are built to protect users and professional creators from fraud.
But the Jacub case is not primarily about obvious scams or deepfake vocals. It is about disclosure and transparency: whether listeners should be told when AI tools played a significant role, and how platforms should respond when the “artist” is effectively a brand rather than an identifiable musician.
A Nordic test case for the EU’s next transparency rules
Sweden is already a policy laboratory for music and technology. STIM has recently promoted licensing models intended to make AI training more transparent and to ensure rightsholders are compensated.
At EU level, policy is also moving toward clearer marking and labelling of AI-generated content, with implementation timelines that will increasingly affect platforms and distributors operating across the single market.
If AI-assisted music continues to break through in mainstream charts, Nordic regulators and industry bodies are likely to face a practical question sooner than many others in Europe: how to protect creative labour without treating every new tool as illegitimate.