AI Research

US Judge sides with AI firm Anthropic over copyright issue

Natalie Sherman and Lucy Hooker

BBC News


Andrea Bartz is one of a number of writers who have taken legal action over AI

A US judge has ruled that using books to train artificial intelligence (AI) software is not a violation of US copyright law.

The decision came out of a lawsuit brought last year against AI firm Anthropic by three authors, including best-selling mystery thriller writer Andrea Bartz, who accused it of stealing their work to train its Claude AI model and build a multi-billion dollar business.

In his ruling, Judge William Alsup said Anthropic’s use of the authors’ books was “exceedingly transformative” and therefore allowed under US law.

But he rejected Anthropic’s request to dismiss the case, ruling the firm would have to stand trial over its use of pirated copies to build its library of material.

Bringing the lawsuit alongside Ms Bartz, whose novels include We Were Never Here and The Last Ferry Out, were non-fiction writers Charles Graeber, author of The Good Nurse: A True Story of Medicine, Madness and Murder, and Kirk Wallace Johnson, who wrote The Feather Thief.

Anthropic, a firm backed by Amazon and Google’s parent company, Alphabet, could face up to $150,000 in damages per copyrighted work.

The firm holds more than seven million pirated books in a “central library”, according to the judge.

The ruling is among the first to weigh in on a question that is the subject of numerous legal battles across the industry – how Large Language Models (LLMs) can legitimately learn from existing material.

“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works, not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” Judge Alsup wrote.

“If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use,” he said.

He noted that the authors did not claim that the training led to “infringing knockoffs” with replicas of their works being generated for users of the Claude tool.

If they had, he wrote, “this would be a different case”.

Similar legal battles have emerged over the AI industry’s use of other media and content, from journalistic articles to music and video.

This month, Disney and Universal filed a lawsuit against AI image generator Midjourney, accusing it of piracy.

The BBC is also considering legal action over the unauthorised use of its content.

Some AI companies have responded to the legal battles by striking deals with creators of the original materials, or their publishers, to license material for use.

Judge Alsup allowed Anthropic’s “fair use” defence, paving the way for future legal judgements.

However, he said Anthropic had violated the authors’ rights by saving pirated copies of their books as part of a “central library of all the books in the world”.

In a statement Anthropic said it was pleased by the judge’s recognition that its use of the works was transformative, but disagreed with the decision to hold a trial about how some of the books were obtained and used.

The company said it remained confident in its case, and was evaluating its options.

A lawyer for the authors declined to comment.




‘Existential crisis’: how Google’s shift to AI has upended the online news model | Newspapers & magazines

When the chief executive of the Financial Times suggested at a media conference this summer that rival publishers might consider a “Nato for news” alliance to strengthen negotiations with artificial intelligence companies there was a ripple of chuckles from attendees.

Yet Jon Slade’s revelation that his website had seen a “pretty sudden and sustained” decline of 25% to 30% in traffic to its articles from readers arriving via internet search engines quickly made clear the serious nature of the threat the AI revolution poses.

Queries typed into sites such as Google, which accounts for more than 90% of the search market, have been central to online journalism since its inception, with news providers optimising headlines and content to ensure a top ranking and revenue-raising clicks.

But now Google’s AI Overviews, which sit at the top of the results page and summarise responses, often negating the need to follow links to content, as well as its recently launched AI Mode tab, which answers queries in a chatbot format, have prompted fears of a “Google zero” future where traffic referrals dry up.

“This is the single biggest change to search I have seen in decades,” says one senior editorial tech executive. “Google has always felt like it would always be there for publishers. Now the one constant in digital publishing is undergoing a transformation that may completely change the landscape.”

Last week, the owner of the Daily Mail revealed in its submission to the Competition and Markets Authority’s consultation on Google’s search services that AI Overviews have fuelled a drop in click-through traffic to its sites by as much as 89%.

DMG Media and other leading news organisations, including Guardian Media Group and the magazine trade body the Periodical Publishers Association (PPA), have urged the competition watchdog to make Google more transparent and provide traffic statistics from AI Overview and AI Mode to publishers as part of its investigation into the tech firm’s search dominance.

Publishers – already under financial pressure from soaring costs, falling advertising revenues, the decline of print and the wider trend of readers turning away from news – argue that they are effectively being forced by Google to either accept deals, including on how content is used in AI Overview and AI Mode, or “drop out of all search results”, according to several sources.

On top of the threat to funding, there are concerns about AI’s impact on accuracy. While Google has improved the quality of its overviews since earlier iterations advised users to eat rocks and add glue to pizza, problems with “hallucinations” – where AI presents incorrect or fabricated information as fact – remain, as do issues with in-built bias, when a computer rather than a human decides how to summarise sources.


In January, Apple promised to update an AI feature that issued untrue summaries of BBC news alerts, stamped with the corporation’s logo, on its latest iPhones; alerts incorrectly claimed that the man accused of killing a US insurance boss had shot himself and that tennis star Rafael Nadal had come out as gay.

In a blogpost last month, Liz Reid, Google’s head of search, said the introduction of AI in search was “driving more queries and quality clicks”.

“This data is in contrast to third-party reports that inaccurately suggest dramatic declines in aggregate traffic,” she said. “[These reports] are often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in search.”

However, while she said overall traffic to all websites is “relatively stable”, she admitted that the “vast” web means user trends are shifting traffic to different sites, “resulting in decreased traffic to some sites and increased traffic to others”.

In recent years, Google Discover, which feeds users articles and videos tailored to them based on their past online activity, has replaced search as the main source of click-throughs to content.

However, David Buttle, founder of the consultancy DJB Strategies, says the service, which is also tied to publishers’ overall search deals, does not deliver the quality traffic that most publishers need to drive their long-term strategies.

“Google Discover is of zero product importance to Google at all,” he says. “It allows Google to funnel more traffic to publishers as traffic from search declines … Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want.”

Meanwhile, publishers are fighting a wider battle with AI companies seeking to plunder their content to train their large language models.

The creative industry is intensively lobbying the government to ensure that proposed legislation does not allow AI firms to use copyright-protected work without permission, a move that would stop the “value being scraped” out of the £125bn sector.


The Make It Fair campaign in February focused on the threat to the creative industries from generative AI. Photograph: Geoffrey Swaine/Rex

Some publishers have struck bilateral licensing deals with AI companies – such as the FT, the German media group Axel Springer, the Guardian and the Nordic publisher Schibsted with the ChatGPT maker OpenAI – while others such as the BBC have taken action against AI companies alleging copyright theft.

“It is a two-pronged attack on publishers, a sort of pincer movement,” says Chris Duncan, a former News UK and Bauer Media senior executive who now runs a media consultancy, Seedelta. “Content is disappearing into AI products without serious remuneration, while AI summaries are being integrated into products so there is no need to click through, effectively taking money from both ends. It is an existential crisis.”

While publishers are pursuing action on multiple fronts – from dealmaking and legal action to regulatory lobbying – they are also implementing AI tools into newsrooms and creating their own query-answering tools. The Washington Post and the FT have launched their own AI-powered chatbots, Climate Answers and Ask FT, that source results only from their own content.

Christoph Zimmer, chief product officer at Germany’s Der Spiegel, says that while its traffic is currently stable he expects referrals from all platforms to decline.

“This is a continuation of a longstanding trend,” he says. “However, this affects brands that have not focused on building direct relationships and subscriptions in recent years even more strongly. Instead, they have relied on reach on platforms and sometimes generic content.

“What has always been true remains true – a focus on quality and distinct content, and having a human in charge rather than just in the loop.”

One publishing industry executive says the battle to strike deals to help train AI models to aggregate and summarise stories is rapidly being superseded by advances that are seeing models interpret live news.

“The first focus has been on licensing deals for training AI, to ‘speak English’, but that is becoming less important over time,” says the executive. “It is becoming about delivering the news, and for that you need accurate live sources. That is a potentially really lucrative market which publishers are thinking about negotiating next.”

Saj Merali, chief executive of the PPA, says a balance needs to be struck between tech-driven changes in consumers’ digital habits and the fair value of trusted news.

“What doesn’t seem to be at the heart of this is what consumers need,” she says. “AI needs trustworthy content. There is a shift in how consumers want to see information, but they have to have faith in what they are reading.

“The industry has been very resilient through quite major digital and technological changes, but it is really important we make sure there is a route to sustain models. At the moment the AI and tech community are showing no signs of supporting publisher revenue.”




Artificial intelligence, rising tuition discussed by educational leaders at UMD

DULUTH, Minn. (Northern News Now) – A panel gathered at UMD’s Weber Music Hall Friday to discuss the future of higher education.

The conversation touched on heavy topics like artificial intelligence, rising tuition costs, and how to provide the best education possible for students.

Almost 100 people listened to conversations on the current climate of college campuses. Among the panelists was Erin Sheets, Associate Dean of UMD’s Swenson College of Engineering and Science.

“We’re in a unique and challenging time, with respect to the federal landscape and state landscape,” said Sheets.

The three panelists addressed current national changes, including rising tuition costs and budget cuts.

“That is going to be a structural shift we really are going to have to pay attention to, if we want to continue to commit for all students to have the opportunity to attend college,” said panelist and Managing Director of Waverly Foundation Lande Ajose.

Last year alone, the University of Minnesota system was hit with a 3% budget cut on top of a loss of $22 million in federal grants. This resulted in a 6.5% tuition increase for students.

Even with changing resources, the panel emphasized helping students prepare for the future, which they said includes the integration of AI.

“As students graduate, if they are not AI fluent, they are not competitive for jobs,” said panelist and University of Minnesota President Rebecca Cunningham.

Research shows that the use of AI in the workplace has doubled in the last two years to 40%.

While AI continues to grow every day, both students and faculty are learning to use it and integrate it into their curriculum.

“These are tools, they are not a substitute for a human being. You still need the critical thinking, you need the ethical guidelines, even more so,” said Sheets.

Following the panel, UMD hosted a campus-wide celebration to mark the inauguration of Chancellor Charles Nies.





AI startup CEO who has hired several Meta engineers says: Reason AI researchers are leaving Meta is, as founder Mark Zuckerberg said, “Biggest risk is not taking …”

Shawn Shen, co-founder and CEO of the AI startup Memories.ai, has stated that some researchers are leaving Facebook-parent Meta due to frequent company reorganisations and a desire to take on bigger risks. Shen, who left Meta himself last year, notes that constant changes in managers and goals can be frustrating for researchers, leading them to seek opportunities at other companies and startups. Shen’s startup, which builds AI to understand visual data, recently announced a plan to offer compensation packages of up to $2 million to researchers from top tech companies. Memories.ai has already hired Chi-Hao Wu, a former Meta research scientist, as its chief AI officer. Shen also referenced a statement from Meta CEO Mark Zuckerberg, who earlier said that “the biggest risk is not taking any risks.”

What startup CEO Shen said about AI researchers leaving Meta

In an interview with Business Insider, Shen said: “Meta is constantly doing reorganizations. Your manager and your goals can change every few months. For some researchers, it can be really frustrating and feel like a waste of time. So yes, I think that’s a driver for people to leave Meta and join other companies, especially startups. There’s other reasons people might leave. I think the biggest one is what Mark (Zuckerberg) has said: ‘In an age that’s evolving so fast, the biggest risk is not taking any risks. So why not do that and potentially change the world as part of a trillion-dollar company?’ We have already hired Eddy Wu, our Chief AI Officer who was my manager’s manager at Meta. He’s making a similar amount to what we’re offering the new people. He was on their generative AI team, which is now Meta Superintelligence Labs. And we are already talking to a few other people from MSL and some others from Google DeepMind.”

What Shen said about hiring Meta AI researchers for his startup

Shen noted that he’s offering AI researchers who are leaving Meta pay packages of $2 million to work with his startup. He said: “It’s because of the talent war that was started by Mark Zuckerberg. I used to work at Meta, and I speak with my former colleagues often about this. When I heard about their compensation packages, I was shocked — it’s really in the tens of millions range. But it shows that in this age, AI researchers who make the best models and stand at the frontier of technology are really worth this amount of money. We’re building an AI model that can see and remember just like humans. The things that we are working on are very niche. So we are looking for people who are really, really good at the whole field of understanding video data.”

He even explained that his company is prioritising hires who are willing to take more equity than cash, allowing it to preserve its financial runway. These recruits will be treated as founding members rather than employees, with compensation split between cash and equity depending on the individual, Shen added.

Over the next six months, the AI startup is planning to add three to five people, followed by another five to ten within a year, alongside efforts to raise additional funding. Shen believes that investing heavily in talent will strengthen, not hinder, future fundraising.
