Business
Swedish music rights company signs licensing agreement with AI company in ‘world first’
A Swedish music rights society says it has signed the world’s first licensing agreement with an artificial intelligence (AI) company.
The Swedish Performing Rights Society (STIM) said it signed the agreement with Songfox, a Stockholm-based start-up that lets fans and creators legally produce AI-generated compositions, on behalf of the group’s 100,000 artists.
Under the deal, Songfox will use a third-party attribution technology called Sureel to trace any AI outputs back to the original human-created work so the artists can get revenue from it.
This agreement “makes revenues auditable in real time and addresses one of the greatest trust gaps in AI music: the lack of transparency over what data is used and how creators are compensated,” STIM said in a statement.
Simon Gozzi, STIM’s Head of Business Development and Industry Insight, told Euronews Next that AI firms will pay through a “mix of licensing fees and revenue shares”. Artists will also receive an “upfront value” when works are used for training.
The idea is “the more demand an AI service creates, the larger the returns for rights holders,” Gozzi said.
This first agreement is a “stress-test” for what the association said should eventually be a market-based model that “secures fair compensation and equal terms of competition.”
“We definitely believe this is the start of something bigger,” Gozzi said. “By showing attribution and ring-fencing of AI revenues in practice, we aim to give Europe a blueprint that others can adopt—making this a global standard over time”.
AI could strip away almost a quarter of music creators’ revenue in the next three years, according to a study.
What is going on elsewhere in Europe for artists affected by AI?
The news comes a few weeks after groups representing artists told Euronews Next that the EU AI Act does not go far enough to protect artists from copyright infringement.
The law lets artists opt out if they do not want AI to be trained on their creations, but organisations including the European Composer and Songwriter Alliance (ECSA) and the European Grouping of Societies of Authors and Composers (GESAC) say their members have been unable to do so in practice.
The legislation also gives artists no way to be remunerated for work that has already been scraped by AI, experts said.
Gozzi said he couldn’t disclose whether there are any other agreements underway but said the framework is “collective in nature” and not built around one start-up.
He also wouldn’t comment on whether the licensing agreement would provide compensation for work that’s already been scraped, saying that the “focus is now to bring future use into a rule-based system”.
The advocates said the Commission could also mandate that AI companies negotiate blanket or collective licenses with the respective artist groups.
Meanwhile, ECSA and GESAC are awaiting the verdicts in two copyright lawsuits filed by Germany’s Society for Musical Performing and Mechanical Reproduction Rights (GEMA) against OpenAI, the maker of ChatGPT, and Suno AI, an AI music generation app.
Marc du Moulin, ECSA’s secretary general, previously told Euronews Next that the verdict could determine to what extent AI companies could be bound to copyright laws.
Universal Music Group is also pursuing a copyright lawsuit against AI company Anthropic.
This article has been updated with comment from STIM.
Business
AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

This article is part two of a two-part case study on the dangers AI chatbots pose to young people. Part one covered the deceptive, pseudo-human design of ChatGPT. This part will explore AI companies’ incentive to prioritize profits over safety.
Warning: The following contains descriptions of self-harm and suicide. Please guard your hearts and read with caution.
Sixteen-year-old Adam Raine took his own life in April after developing an unhealthy relationship with ChatGPT. His parents blame the chatbot’s parent company, OpenAI.
Matt and Maria Raine filed a sweeping wrongful death suit against OpenAI; its CEO, Sam Altman; and all employees and investors involved in the “design, development and deployment” of ChatGPT, version 4o, in California Superior Court on August 26.
The suit alleges OpenAI released ChatGPT-4o prematurely, without adequate safety testing or usage warnings. These intentional business decisions, the Raines say, cost Adam his life.
OpenAI started in 2015 as a nonprofit with a grand goal — to create prosocial artificial intelligence.
The company’s posture shifted in 2019 when it opened a for-profit arm to accept a multi-billion-dollar investment from Microsoft. Since then, the Raines allege, safety at OpenAI has repeatedly taken a back seat to winning the AI race.
Adam began using ChatGPT-4o in September 2024 for homework help but quickly began treating the bot as a friend and confidant. In December 2024, Adam began messaging the AI about his mental health problems and suicidal thoughts.
Unhealthy attachments to ChatGPT-4o aren’t unusual, the lawsuit emphasizes. OpenAI intentionally designed the bot to maximize engagement by conforming to users’ preferences and personalities. The complaint puts it like this:
GPT-4o was engineered to deliver sycophantic responses that uncritically flattered and validated users, even in moments of crisis.
Real humans aren’t unconditionally validating and available. Relationships require hard work and necessarily involve disappointment and discomfort. But OpenAI programmed its sycophantic chatbot to mimic the warmth, empathy and cadence of a person.
The result is equally alluring and dangerous: a chatbot that imitates human relationships with none of the attendant “defects.” For Adam, the con was too powerful to unravel himself. He came to believe that a computer program knew and cared about him more than his own family.
Such powerful technology requires extensive testing. But, according to the suit, OpenAI spent just seven days testing ChatGPT-4o before rushing it out the door.
The company had initially scheduled the bot’s release for late 2024, but CEO Sam Altman learned that Google, a competitor in the AI industry, was planning to unveil a new version of its chatbot, Gemini, on May 14.
Altman subsequently moved ChatGPT-4o’s release date up to May 13 — just one day before Gemini’s launch.
The truncated release timeline caused major safety concerns among rank-and-file employees.
Each version of ChatGPT is supposed to go through a testing phase called “red teaming,” in which safety personnel test the bot for defects and programming errors that can be manipulated in harmful ways. During this testing, researchers force the chatbot to interact with and identify multiple kinds of objectionable content, including self-harm.
“When safety personnel demanded additional time for ‘red teaming’ [ahead of ChatGPT-4o’s release],” the suit claims, “Altman personally overruled them.”
Rumors about OpenAI cutting corners on safety abounded following the chatbot’s launch. Several key safety leaders left the company altogether. Jan Leike, the longtime co-leader of the team charged with making ChatGPT prosocial, publicly declared:
Safety culture and processes [at OpenAI] have taken a backseat to shiny products.
But the extent of ChatGPT-4o’s lack of safety testing became apparent when OpenAI started testing its successor, ChatGPT-5.
The later versions of ChatGPT are designed to draw users into conversations. To ensure the models’ safety, researchers must test the bot’s responses not just to isolated objectionable content, but also to objectionable content introduced over a long-form interaction.
ChatGPT-5 was tested this way. ChatGPT-4o was not. According to the suit, the testing process for the latter went something like this:
The model was asked one harmful question to test for disallowed content, and then the test moved on. Under that method, GPT-4o achieved perfect scores in several categories, including a 100 percent success rate for identifying “self-harm/instructions.”
The implications of this failure are monumental. It means OpenAI did not know how ChatGPT-4o’s programming would function in long conversations with users like Adam.
Every chatbot’s behavior is governed by a list of rules called a Model Spec. The complexity of these rules requires frequent testing to ensure the rules don’t conflict.
Per the complaint, one of ChatGPT-4o’s rules was to refuse requests relating to self-harm and, instead, respond with crisis resources. But another of the bot’s instructions was to “assume best intentions” of every user — a rule expressly prohibiting the AI from asking users to clarify their intentions.
“This created an impossible task,” the complaint explains, “to refuse suicide requests while being forbidden from determining if requests were actually about suicide.”
OpenAI’s lack of testing also means ChatGPT-4o’s safety stats were entirely misleading. When ChatGPT-4o was put through the same testing regimen as ChatGPT-5, it successfully identified self-harm content just 73.5% of the time.
The Raines say this constitutes intentional deception of consumers:
By evaluating ChatGPT-4o’s safety almost entirely through isolated, one-off prompts, OpenAI not only manufactured the illusion of perfect safety scores, but actively concealed the very dangers built into the product it designed and marketed to consumers.
On the day Adam Raine died, CEO Sam Altman touted ChatGPT’s safety record during a TED2025 event, explaining, “The way we learn how to build safe systems is this iterative process of deploying them to the world: getting feedback while the stakes are relatively low.”
But the stakes weren’t relatively low for Adam — and they aren’t for other families, either. Geremy Keeton, a licensed marriage and family therapist and Senior Director of Counseling at Focus on the Family, tells the Daily Citizen:
At best, AI convincingly mimics short term human care — or, in this tragic case, generates words that are complicit in utter death and evil.
Parents, please be careful about how and when you allow your child to interact with AI chatbots. They are designed to keep your child engaged, and there’s no telling how the bot will react to any given request.
Young people like Adam Raine are unequipped to see through the illusion of humanity.
Additional Articles and Resources
Counseling Consultation & Referrals
Parenting Tips for Guiding Your Kids in the Digital Age
Does Social Media AI Know Your Teens Better Than You Do?
AI “Bad Science” Videos Promote Conspiracy Theories for Kids – And More
ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege
AI Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds
AI Chatbots Make It Easy for Users to Form Unhealthy Attachments
AI is the Thief of Potential — A College Student’s Perspective
Business
Jaguar Land Rover suppliers ‘face bankruptcy’ due to hack crisis

The past two weeks have been dreadful for Jaguar Land Rover (JLR), and the crisis at the car maker shows no sign of coming to an end.
A cyber attack, which first came to light on 1 September, forced the manufacturer to shut down its computer systems and close production lines worldwide.
Its factories in Solihull, Halewood, and Wolverhampton are expected to remain idle until at least Wednesday, as the company continues to assess the damage.
JLR is thought to have lost at least £50m so far as a result of the stoppage. But experts say the most serious damage is being done to its network of suppliers, many of which are small and medium-sized businesses.
The government is now facing calls for a furlough scheme to be set up, to prevent widespread job losses.
David Bailey, professor of business economics at Aston University, told the BBC: “There’s anywhere up to a quarter of a million people in the supply chain for Jaguar Land Rover.
“So if there’s a knock-on effect from this closure, we could see companies going under and jobs being lost”.
Under normal circumstances, JLR would expect to build more than 1,000 vehicles a day, many of them at its UK plants in Solihull and Halewood. Engines are assembled at its Wolverhampton site. The company also has large car factories in China and Slovakia, as well as a smaller facility in India.
JLR said it closed down its IT networks deliberately in order to protect them from damage. However, because its production and parts supply systems are heavily automated, this meant cars simply could not be built.
Sales were also heavily disrupted, though workarounds have since been put in place to allow dealerships to operate.
Initially, the carmaker seemed relatively confident the issue could be resolved quickly.
Nearly two weeks on, it has become abundantly clear that restarting its computer systems has been a far from simple process. It has already admitted that some data may have been seen or stolen, and it has been working with the National Cyber Security Centre to investigate the incident.
Experts say the cost to JLR itself is likely to be between £5m and £10m per day, meaning it has already lost between £50m and £100m. However, the company made a pre-tax profit of £2.5bn in the year to the end of March, which implies it has the financial muscle to weather a crisis that lasts weeks rather than months.
JLR sits at the top of a pyramid of suppliers, many of which are highly dependent on the carmaker because it is their main customer.
They include a large number of small and medium-sized firms, which do not have the resources to cope with an extended interruption to their business.
“Some of them will go bust. I would not be at all surprised to see bankruptcies,” says Andy Palmer, a one-time senior executive at Nissan and former boss of Aston Martin.
He believes suppliers will have begun cutting their headcount dramatically in order to keep costs down.
Mr Palmer says: “You hold back in the first week or so of a shutdown. You bear those losses.
“But then, you go into the second week, more information becomes available – then you cut hard. So layoffs are either already happening, or are being planned.”
A boss at one smaller JLR supplier, who preferred not to be named, confirmed his firm had already laid off 40 people, nearly half of its workforce.
Meanwhile, other companies are continuing to tell their employees to remain at home with the hours they are not working to be “banked”, to be offset against holidays or overtime at a later date.
There seems little expectation of a swift return to work.
One employee at a major supplier based in the West Midlands told the BBC they were not expecting to be back on the shop floor until 29 September. Hundreds of staff, they say, had been told to remain at home.
When automotive firms cut back, temporary workers brought in to cover busy periods are usually the first to go.
There is generally a reluctance to get rid of permanent staff, as they often have skills that are difficult to replace. But if cashflow dries up, they may have little choice.
Labour MP Liam Byrne, who chairs the Commons Business and Trade Committee, says this means government help is needed.
“What began in some online systems is now rippling through the supply chain, threatening a cashflow crunch that could turn a short-term shock into long-term harm”, he says.
“We cannot afford to see a cornerstone of our advanced manufacturing base weakened by events beyond its control”.
The trade union Unite has called for a furlough system to be set up to help automotive suppliers. This would involve the government subsidising workers’ pay packets while they are unable to do their jobs, taking the burden off their employers.
“Thousands of these workers in JLR’s supply chain now find their jobs are under an immediate threat because of the cyber attack,” says Unite general secretary, Sharon Graham.
“Ministers need to act fast and introduce a furlough scheme to ensure that vital jobs and skills are not lost while JLR and its supply chain get back on track.”
Business and Trade Minister Chris Bryant said: “We recognise the significant impact this incident has had on JLR and their suppliers, and I know this is a worrying time for those affected.
“I met with the chief executive of JLR yesterday to discuss the impact of the incident. We are also in daily contact with the company and our cyber experts about resolving this issue.”
Business
AstraZeneca pauses £200m Cambridge investment

Mitchell Labiak, Business reporter, and
Simon Jack, Business editor

AstraZeneca has paused plans to invest £200m at a Cambridge research site in a fresh blow to the UK pharmaceutical industry.
The project, which was set to create 1,000 jobs, was announced in March 2024 by the previous government alongside another project in Liverpool, which was shelved in January.
Friday’s announcement comes after US pharmaceutical giant Merck scrapped a £1bn UK expansion, blaming a lack of government investment, and as President Donald Trump pressures pharmaceutical firms to invest more in the US.
An AstraZeneca spokesperson said: “We constantly reassess the investment needs of our company and can confirm our expansion in Cambridge is paused.”
Over the last 10 years, UK spending on medicines has fallen from 15% of the NHS budget to 9%, while the rest of the developed world spends between 14% and 20%.
Meanwhile, pharmaceutical companies have been looking to invest in the US following Trump’s threats of sky-high tariffs on drug imports.
In July, AstraZeneca said it would invest $50bn (£36.9bn) in the US on “medicines manufacturing and R&D [research and development]”.
Earlier this week Merck, which had already begun construction on a site in London’s King’s Cross which was due to be completed by 2027, said it no longer planned to occupy it.
The multi-national business, known as MSD in Europe, said it would move its life sciences research to the US and cut UK jobs, blaming successive governments for undervaluing innovative medicines.

AstraZeneca’s announcement on Friday means none of the £650m UK investment trumpeted by the last government will currently happen.
The paused Cambridge project would have been an expansion of its existing Discovery Centre, which already hosts 2,300 researchers and scientists.
The stoppage comes after it scrapped plans to invest £450m in expanding a vaccine manufacturing plant in Merseyside in January, blaming a reduction in government support.
It said at the time that after “protracted” talks, a number of factors influenced the move, including “the timing and reduction of the final offer compared to the previous government’s proposal”.
Successive UK governments have pointed to life sciences as one of the UK’s most successful industries.
Former chancellor Jeremy Hunt said the sector was “crucial for the country’s health, wealth and resilience”, while Chancellor Rachel Reeves said AstraZeneca was one of the UK’s “great companies” days before it scrapped its Liverpool expansion.