AI Research
Anthropic to pay $1.5 billion to settle authors’ copyright lawsuit

Anthropic, which operates the Claude artificial intelligence app, has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who alleged the company took pirated copies of their works to train its chatbot.
The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year, and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.
The landmark settlement could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. A judge could approve the settlement as soon as Monday.
“As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”
In a statement to CBS News, Anthropic Deputy General Counsel Aparna Sridhar said the settlement “will resolve the plaintiffs’ remaining legacy claims.”
Sridhar added that the settlement comes after the U.S. District Court for the Northern District of California in June ruled that Anthropic’s use of legally purchased books to train Claude did not violate U.S. copyright law.
“We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery and solve complex problems,” Sridhar said.
Anthropic, which was founded by former executives with ChatGPT developer OpenAI, introduced Claude in 2023. Like other generative AI bots, the tool lets users ask natural language questions and then provides summarized answers using AI trained on millions of books, articles and other material.
Settlement terms
Experts say that if Anthropic had not settled and had then lost the case at the trial scheduled for December, the outcome could have cost the San Francisco-based company even more money.
“We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.
U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.
Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.
Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.
Bartz’s debut thriller, “The Lost Night,” was among the books found in the Books3 dataset; she is a lead plaintiff in the case.
Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.
The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.
On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”
Artificial intelligence, rising tuition discussed by educational leaders at UMD

DULUTH, Minn. (Northern News Now) – A panel gathered at UMD’s Weber Music Hall Friday to discuss the future of higher education.
The conversation touched on heavy topics like artificial intelligence, rising tuition costs, and how to provide the best education possible for students.
Almost 100 people attended to hear the panelists, including Erin Sheets, Associate Dean of UMD’s Swenson College of Engineering and Science, discuss the current climate on college campuses.
“We’re in a unique and challenging time, with respect to the federal landscape and state landscape,” said Sheets.
The three panelists addressed current national changes, including rising tuition costs and budget cuts.
“That is going to be a structural shift we really are going to have to pay attention to, if we want to continue to commit for all students to have the opportunity to attend college,” said panelist and Managing Director of Waverly Foundation Lande Ajose.
Last year alone, the University of Minnesota system was hit with a 3% budget cut on top of a loss of $22 million in federal grants. This resulted in a 6.5% tuition increase for students.
Even with changing resources, the panel emphasized helping students prepare for the future, which they said includes the integration of AI.
“As students graduate, if they are not AI fluent, they are not competitive for jobs,” said panelist and University of Minnesota President Rebecca Cunningham.
Research shows that the use of AI in the workplace has doubled in the last two years to 40%.
While AI continues to grow every day, both students and faculty are learning to use it and integrate it into their curriculum.
“These are tools, they are not a substitute for a human being. You still need the critical thinking, you need the ethical guidelines, even more so,” said Sheets.
Following the panel, UMD hosted a campus-wide celebration to mark the inauguration of Chancellor Charles Nies.
Copyright 2025 Northern News Now. All rights reserved.
AI startup CEO who has hired several Meta engineers says the reason AI researchers are leaving Meta is, as founder Mark Zuckerberg said, “Biggest risk is not taking …”

Shawn Shen, co-founder and CEO of the AI startup Memories.ai, has stated that some researchers are leaving Facebook-parent Meta due to frequent company reorganisations and a desire to take on bigger risks. Shen, who left Meta himself last year, notes that constant changes in managers and goals can be frustrating for researchers, leading them to seek opportunities at other companies and startups. Shen’s startup, which builds AI to understand visual data, recently announced a plan to offer compensation packages of up to $2 million to researchers from top tech companies. Memories.ai has already hired Chi-Hao Wu, a former Meta research scientist, as its chief AI officer. Shen also referenced a statement from Meta CEO Mark Zuckerberg, who earlier said that “the biggest risk is not taking any risks.”
What startup CEO Shen said about AI researchers leaving Meta
In an interview with Business Insider, Shen said: “Meta is constantly doing reorganizations. Your manager and your goals can change every few months. For some researchers, it can be really frustrating and feel like a waste of time. So yes, I think that’s a driver for people to leave Meta and join other companies, especially startups. There’s other reasons people might leave. I think the biggest one is what Mark (Zuckerberg) has said: ‘In an age that’s evolving so fast, the biggest risk is not taking any risks. So why not do that and potentially change the world as part of a trillion-dollar company?’ We have already hired Eddy Wu, our Chief AI Officer who was my manager’s manager at Meta. He’s making a similar amount to what we’re offering the new people. He was on their generative AI team, which is now Meta Superintelligence Labs. And we are already talking to a few other people from MSL and some others from Google DeepMind.”
What Shen said about hiring Meta AI researchers for his startup
Shen noted that he’s offering AI researchers who are leaving Meta pay packages of $2 million to work with his startup. He said: “It’s because of the talent war that was started by Mark Zuckerberg. I used to work at Meta, and I speak with my former colleagues often about this. When I heard about their compensation packages, I was shocked — it’s really in the tens of millions range. But it shows that in this age, AI researchers who make the best models and stand at the frontier of technology are really worth this amount of money. We’re building an AI model that can see and remember just like humans. The things that we are working on are very niche. So we are looking for people who are really, really good at the whole field of understanding video data.”

He also explained that his company is prioritising hires who are willing to take more equity than cash, allowing it to preserve its financial runway. These recruits will be treated as founding members rather than employees, with compensation split between cash and equity depending on the individual, Shen added. Over the next six months, the AI startup is planning to add three to five people, followed by another five to ten within a year, alongside efforts to raise additional funding. Shen believes that investing heavily in talent will strengthen, not hinder, future fundraising.
AARP warns of “Grandparent Scams”

MONTGOMERY, Ala. (WSFA) – While artificial intelligence is rapidly transforming our world, a troubling trend shows scammers using it to steal from seniors, specifically grandparents.
You’ve probably heard the phrase “seeing is believing” your whole life, but in the age of artificial intelligence, the old saying no longer holds. In the wrong hands, this technology can make senior citizens, who didn’t grow up in the digital age, a vulnerable population.
“One of the ways we see that being done is with what’s known as the grandparent scam,” Jamie Harding, AARP of Alabama Communications director, said. “The grandparent scam is basically, it usually happens late at night, they’re asleep, and someone calls them purporting to be their grandchild, they’re in trouble, they need money immediately.”
However, it isn’t actually their grandchild on the other end of the phone. Scammers have used AI technology to replicate the sound of their grandchild’s voice to try to take money.
“These are very sophisticated international crime rings, and they have access to a lot of very sophisticated technology,” Harding said.
To protect your family from these scams, Harding suggests having a code word that every member of your family knows so you can be sure it’s actually your loved one calling.
She also advises you not to answer phone calls from unknown numbers and to keep your personal information off the internet.
Copyright 2025 WSFA. All rights reserved.