Staff at the UK’s national institute for artificial intelligence (AI) have warned the charity is at risk of collapse, after Technology Secretary Peter Kyle threatened to withdraw its funding.
Workers at the Alan Turing Institute raised a series of “serious and escalating concerns” in a whistleblowing complaint submitted to the Charity Commission.
The complaint, seen by the BBC, accuses the institute’s leadership of misusing public funds, overseeing a “toxic internal culture”, and failing to deliver on the charity’s mission.
A government spokesperson said Kyle “has been clear he wants [the Turing Institute] to deliver real value for money for taxpayers”.
The spokesperson, from the Department for Science, Innovation and Technology (DSIT), said the institute “is an independent organisation and has been consulting on changes to refocus its work under its Turing 2.0 strategy”.
“The changes set out in his letter would do exactly that, giving the Institute a key role in safeguarding our national security and positioning it where the British public expects it to be,” they said.
It comes after Kyle urged the Turing Institute to focus on defence research and suggested funding would be pulled unless it changed.
Kyle also wants an overhaul of its leadership. Any shift to focusing on defence would be a significant pivot for the publicly funded organisation, which was given a grant of £100m by the previous Conservative government last year.
Founded in 2015 as the UK’s leading centre of AI research, the Turing Institute has been rocked by internal discontent and criticism of its research activities.
In the complaint, the staff said Kyle’s letter had triggered “a crisis in governance”.
The government’s £100m grant was “now at risk of being withdrawn, a move that could lead to the institute’s collapse”, the complaint said.
The Turing Institute told the BBC it was undertaking “substantial organisational change to ensure we deliver on the promise and unique role of the UK’s national institute for data science and AI”.
“As we move forward, we’re focused on delivering real world impact across society’s biggest challenges, including responding to the national need to double down on our work in defence, national security and sovereign capabilities,” said a spokesperson.
The BBC has been told the Turing Institute has not received notification of a complaint and has not seen the letter sent by staff.
A Charity Commission spokesperson said: “We are currently assessing concerns raised about the Alan Turing Institute to determine any regulatory role for us.”
They said the regulator was in the early stages of its assessment and had not decided whether to launch a formal investigation.
Internal turmoil
The staff said they had submitted the complaint anonymously “due to a well-founded fear of retaliation”.
The BBC was sent a copy of the complaint in an email signed off by “concerned staff members at The Alan Turing Institute”.
The complaint sets out a summary of eight issues.
Warning of a risk to funding, the complaint said the Turing Institute’s “ongoing delivery failures, governance instability and lack of transparency have triggered serious concerns among its public and private funders”.
It accuses the charity of making “a series of spending decisions that lack transparency, measurable outcomes, and evidence of trustee oversight”.
And in other allegations, the complaint accuses the board of presiding over “an internal culture that has become defined by fear and defensiveness”.
The complaint said the concerns had been raised with the Turing Institute’s leadership team – including chairman Doug Gurr – and claimed “no meaningful action has been taken”.
The Alan Turing Institute describes itself as the UK’s national body for data science and AI. It was set up in 2015 by then-Prime Minister David Cameron.
The institute has been in turmoil for months over moves to cut dozens of jobs and scrap research projects.
At the end of 2024, 93 members of staff signed a letter expressing a lack of confidence in its leadership team.
‘Need to modernise’
In March, Jean Innes, who was appointed chief executive in July 2023, told the Financial Times the Turing Institute needed to modernise and focus on AI projects.
Until recently, its work has focused on AI and data science research in three main areas – environmental sustainability, health and national security.
Recent research projects listed on its website include the use of artificial intelligence in weather prediction, and a study suggesting one in four children now use the technology to study and play.
Others who have worked with the Turing Institute told the BBC there are concerns within the wider research community about its direction.
In July, professors Helen Margetts and Cosmina Dorobantu, long-standing co-directors of a successful programme which helped the public sector use AI, quit their positions at the charity.
Former chief technology officer Jonathan Starck left the organisation in May after eight months.
And some of its remaining staff describe a toxic internal culture.
The AI sector is a key part of the government’s strategy to grow the UK economy: it is investing in the development of data centres and supercomputers, and encouraging big tech firms to invest in the UK. Research and development of the rapidly evolving technology is also seen as crucial to that strategy.
In his letter to the Turing last month, Kyle said boosting the UK’s AI capabilities was “critical” to national security and should be at the core of the institute’s activities.
The secretary of state for science and technology said there could be a review of the institute’s “longer-term funding arrangement” next year.
Anthropic settles with authors for $1.5 billion US

Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion US to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using pirated copies of their books to train its AI chatbot, Claude, without permission.
Anthropic and the plaintiffs in a court filing asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount.
“If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment,” the plaintiffs said in the filing.
The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft and Meta Platforms over their use of copyrighted material to train generative AI systems.
As part of the settlement, Anthropic said it will destroy downloaded copies of books acquired through the pirate sites LibGen and PiLiMi (Pirate Library Mirror). Under the deal, the company could still face infringement claims related to material produced by its AI models.
In a statement, Anthropic said the company is “committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.” The agreement does not include an admission of liability.
“This historic settlement is a vital step in acknowledging that AI companies cannot simply steal authors’ creative work to build their AI just because they need books to develop quality LLMs,” Authors Guild CEO Mary Rasenberger said in a statement.
“These vastly rich companies, worth billions, stole from those earning a median income of barely $20,000 [US] a year. This settlement sends a clear message that AI companies must pay for the books they use just as they pay for the other essential components of their LLMs.”
Although an estimated seven million books were downloaded by Anthropic from piracy sites, according to the Authors Guild, only around 500,000 works are covered in the class action, meaning the settlement amounts to roughly $3,000 US per work.
Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon and Alphabet, unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts.
Creative work stolen
The writers’ allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training.
The companies have argued their systems make fair use of copyrighted material to create new, transformative content.
Alsup ruled in June that Anthropic made fair use of the books to train Claude, but found that the company violated the authors’ rights by saving more than seven million pirated books to a “central library” that would not necessarily be used for that purpose.
A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars.
The pivotal fair-use question is still being debated in other AI copyright cases.
Vancouver author J.B. MacKinnon recently launched class-action lawsuits against NVIDIA, Meta, Anthropic and Databricks Inc. in B.C. Supreme Court, alleging that his and other Canadian authors’ works have been used illegally for AI training.
Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup’s decision that using copyrighted work without permission to train AI would be unlawful in “many circumstances.”
‘AI is not The Terminator’

Cristóbal Valenzuela is the co-founder of AI firm Runway, a company bound to make plenty of people in Hollywood bristle. But he says studios and independent filmmakers are regularly using AI tools. And while he concedes that artificial intelligence will lead to some job losses, he argues that it will ultimately be a boon to filmmakers.
“AI is not The Terminator. AI is not Black Mirror. AI is not God. It’s a technology that can be very powerful for you to leverage,” Valenzuela says. “It has challenges like any other technology, but you are in control. Humans are in control, like they’ve always been.”
Valenzuela discusses why studios like Lionsgate, Netflix, and Disney are already using his company’s tools. The Chilean-born developer also compares the current backlash against AI to another major industry upheaval: the arrival of sound in film.
Business Insider deletes dozens of suspect essays

A leading news website has removed dozens of articles after apparently being conned by bogus “journalists”—who may have been assisted in their deception by AI.
Business Insider quietly deleted at least 34 articles written under 13 different bylines after admitting it had published two articles written by a phony “journalist” who used the fake name “Margaux Blanchard.”
Now it has deleted dozens more written by “Tim Stevensen,” “Nate Giovanni,” “Nathan Giovanni,” “Amarilis J. Yera,” “Onyeka Nwelue,” “Alice Amayu,” “Mia Brown,” “Tracy Miller,” “Margaret Awano,” “Erica Mayor,” “Kalmar Theodore,” “Lauren Bennett,” “Louisa Eunice,” and “Alyssa Scott.” All were replaced with a single-sentence note saying they “didn’t meet Business Insider’s standards.”
A review by the Daily Beast has found the articles which Business Insider deleted were all “personal essays,” for which the outlet pays between $200 and $300. The first was published in April 2024 and the most recent in August, days before “Margaux Blanchard’s” scam came to light.
Among the topics the apparently bogus “essayists” covered were “I’m 38 and live in a retirement village”; “Costco Next is the chain’s best-kept secret that’s free for members. I’ve already saved thousands of dollars using it”; “I had a meltdown in front of my 5 kids”; and—possibly ironically—“I was accepted into a well-regarded graduate program. I turned down the offer because AI is destroying my desired industry.”
The Beast’s review found several red flags within the since-deleted essays that suggest the writing did not reflect the authors’ lived experiences. These included contradictory information in separate essays by the same author, such as changing the gender and ages of their supposed children, and author-contributed photos that reverse-image searches confirmed were pulled from elsewhere online.
The author “Tim Stevensen” claimed in one piece to have two daughters and a son, but four months later he had “sons.” “Stevensen” was possibly the most prolific and contradictory of the “essayists.” In seven articles, he detailed how he had met his wife eight years ago; how he and his wife had children in their twenties; how he had worked 20-hour shifts for years; how he had been a high-school teacher for a decade before recently quitting to become a freelance writer; how he had “unpaid bills”; and how he and his wife wagered $5,000 on a weight-loss challenge.
Another article by “Stevensen” included a photo that he had supplied, which claimed to show him and his daughters. A reverse-image search revealed that the photo was of a man named Stowe Gregory, who wrote a personal essay months earlier for the i newspaper in the U.K. about his love for his step-daughters. The only Tim Stevensen listed in the U.S. did not respond to the Daily Beast, but is not a former high-school teacher.
An internal note to staff from the site’s editor-in-chief, Jamie Heller, stated that the questionable essays were removed “due to concerns about the authors’ identity or veracity.” Heller’s note, first obtained by Semafor, said no articles written by its staff had been affected by Tuesday’s purge. The internal communication added that the site’s verification protocols have since been “bolstered.” A spokesperson for Business Insider declined to comment further, but a company source said the site publishes around 70,000 articles a year, making the deleted articles a tiny proportion of its output.
Heller, who previously worked at the Wall Street Journal, became editor-in-chief of the site—owned by German media company Axel Springer, which also owns Politico—in September 2024, when the apparent cons were already underway, although the majority of the essays were published after her appointment.
It is unclear whether or to what extent the deleted articles had used AI to generate their content. The Daily Beast used AI detection software and found that the nixed essays did not register as being written word-for-word by AI.
However, the articles are littered with unlikely facts and odd phrases, which could point to the use of generative AI. One “writer” claimed she lived in Houston, Texas, and that it took an hour without a car to get to “nearby cities,” another described retirement as “glory days,” and one wrote about “apple pie” and “diners” being part of Australian life. One claimed to have been a teacher who was “summoned” to speak to the principal and told he had been “chosen to represent the school in Canada, which meant I would be away from my family for six to 12 months.”
The Daily Beast was unable to reach any of the supposed authors—some of whom have been published elsewhere, including one who claims to live in both the United Kingdom and Appalachia—for comment, leaving the motive for the apparent con a mystery. At least three of the bylines also appear on articles on writersweekly.com offering tips on how to become a freelance writer.
Author “Nate Giovanni,” also credited as “Nathan Giovanni,” had at least five deleted essays. In a December essay about convincing his wife to have a third child in their 40s, “Giovanni” wrote that he had two daughters, Leila and Sophia, and a two-year-old son named Mason. In an essay published in March, he had two sons, and his wife was at home with a newborn. In May, he wrote that he and his wife had been traveling the world as house sitters for the last two years, including a two-week stay at a “Rustic Villa in Tuscany” and trips to destinations like Charleston, Oregon, New Mexico, New York, Australia, Canada, and Merida, Mexico. His grasp of geography seemed odd.
“Some memorable countries we’ve visited include London, for a quick three-day experience with a house cat. We made it to the London Bridge,” one article states. By July, “Giovanni” was no longer a world traveler: He had quit being a high school English teacher and was in the aftermath of losing his job at a failed startup.
“Amarilis J. Yera” wrote last month about buying a home about an hour outside of Houston in 2019, when she was 24, for $245,000. However, a submitted photo of the home’s exterior was of a new-build property in Dallas that was marketed at $379,000 and sold this summer, over a month before “Yera’s” essay was published. The essay included photos that were supposedly of the home’s interior, but a reverse image search showed that identical photos were posted months earlier in a Kenya-based Facebook group.
The essay included a selfie submitted by the author. An editor with an almost identical name, Amaralis Yera, lives in Puerto Rico. She could not be reached for comment, but her professional headshot on LinkedIn shows she is not the woman in the selfie. Records show there is no other “Amaralis Yera” living in the United States.
Another author was listed as “Onyeka Nwelue,” the same name as a Nigerian-born author who went viral in 2023 for falsely claiming he was a professor at the University of Oxford and the University of Cambridge in England. The bogus professor himself then claimed that other scammers have used his identity and photos.
In her note to staff, Heller said the internal probe was launched after trade newspaper Press Gazette revealed that two Business Insider essays published in April—written by a “Margaux Blanchard”—were “likely” filled with made-up anecdotes that were AI-generated and that “Blanchard” was fake. The emergence of generative AI appears to have led to a spike in articles being published under bogus names.
Five other outlets, including WIRED, were duped by “Margaux Blanchard,” Press Gazette reported. The writer’s true identity remains unknown.