Artificial Intelligence Legal Holds: Preserving Prompts & Outputs

You are your company’s in-house legal counsel. It’s 3 PM on a Friday (because of course it is), and you’ve just received notice of impending litigation. Your first thought? “Time to issue a legal hold.” Your second thought, as you watch your colleague casually chatting with Claude about contract drafting? “Oh no… what about all the AI stuff?”

Welcome to 2025, where your legal hold obligations just got an AI-powered upgrade you never signed up for. This isn’t just theoretical hand-wringing. Companies are already being held accountable for incomplete AI-related preservation, and the costs are real — both in terms of litigation exposure and the scramble to retrofit compliance systems that never anticipated chatbots.

The Plot Twist Nobody Saw Coming

Remember when legal holds meant telling people not to delete their emails? The foundational duty to preserve electronically stored information (ESI) when litigation is “reasonably anticipated” remains the cornerstone of legal hold obligations. However, generative AI’s emergence has significantly complicated this well-established framework. Courts are increasingly making clear that AI-generated content, including prompts and outputs, constitutes ESI subject to traditional preservation obligations.

Those were simpler times. Now, every prompt your team types into ChatGPT, every piece of AI-generated marketing copy, and yes, even that time someone asked Perplexity for restaurant recommendations during a business trip: it's all potentially discoverable ESI.

Or so say several recent court decisions:

  • In the In re OpenAI, Inc. Copyright Infringement Litigation MDL (SDNY), Magistrate Judge Ona T. Wang ordered OpenAI to preserve and segregate all output log data that would otherwise be deleted (whether deletion would occur by user choice or to satisfy privacy laws). Judge Sidney H. Stein later denied OpenAI’s objection and left the order standing (now on appeal to the Second Circuit). This is the clearest signal yet that courts will prioritize litigation preservation over default deletion settings.
  • In Tremblay v. OpenAI (N.D. Cal.), the district court issued a sweeping order requiring OpenAI “to preserve and segregate all output log data that would otherwise be deleted on a going forward basis.” The Tremblay court dropped a truth bomb on us: AI inputs, meaning prompts, can be discoverable.
  • And although not AI-specific, recent chat-spoliation rulings (e.g., those over Google’s chat auto-delete practices) show that judges expect parties to suspend auto-delete once litigation is reasonably anticipated. These cases serve as analogs for AI chat tools; a sketch of what suspending auto-delete can actually look like follows this list.
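
To make “suspend auto-delete” concrete, here is a minimal sketch of a retention job that skips anything belonging to a custodian under an active hold. Everything in it (the `ChatRecord` shape, the `ACTIVE_HOLDS` set, the 30-day window) is a hypothetical illustration for this post, not any vendor’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention window and hold list, for illustration only.
RETENTION_PERIOD = timedelta(days=30)
ACTIVE_HOLDS = {"jdoe", "asmith"}  # custodians under a legal hold

@dataclass
class ChatRecord:
    user_id: str
    created_at: datetime  # timezone-aware creation time
    content: str

def purge_expired(records: list[ChatRecord]) -> list[ChatRecord]:
    """Drop records past retention, but never those under an active hold."""
    now = datetime.now(timezone.utc)
    kept = []
    for rec in records:
        expired = now - rec.created_at > RETENTION_PERIOD
        if expired and rec.user_id not in ACTIVE_HOLDS:
            continue  # past retention and not on hold: safe to purge
        kept.append(rec)
    return kept
```

The point isn’t the code; it’s the single `if`. Once litigation is reasonably anticipated, that hold check has to exist, and it has to win over every deletion path, including the user’s own delete button.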

Your New Reality Check: What Actually Needs Preserving?

Let’s break down what’s now on your preservation radar:

The Obvious Stuff:

  • Every prompt typed into AI tools (yes, even the embarrassing ones)
  • All AI-generated outputs used for business purposes
  • The metadata showing who, what, when, and which AI model

The Not-So-Obvious Stuff:

  • Failed queries and abandoned outputs (they still count!)
  • Conversations in AI-powered Slack bots and Teams integrations
  • That “quick question” someone asked Claude about a competitor

The “Are You Kidding Me?” Stuff:

  • Deleted conversations (spoiler alert: they’re often not really deleted)
  • Personal AI accounts used for work purposes
  • AI-assisted research that never made it into final documents

Of course, knowing what to preserve is only half the battle. The real challenge? Actually implementing AI-aware legal holds when your IT department is still figuring out how to monitor these tools, your employees are using personal accounts for work-related AI, and new AI integrations appear in your tech stack on a weekly basis. 
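
For the implementation-minded, here is one way to picture the capture side: a thin logging wrapper that records the who, what, when, and which-model metadata from the list above alongside each prompt and output. The function name, file path, and record shape are all hypothetical sketches, not a reference to any real legal hold product.

```python
import hashlib
import json
from datetime import datetime, timezone

def preserve_interaction(user: str, model: str, prompt: str, output: str) -> dict:
    """Append one AI interaction, plus its metadata, to a write-once log."""
    record = {
        "user": user,        # who
        "model": model,      # which AI model
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "prompt": prompt,    # the input, preserved verbatim
        "output": output,    # the generated content, even if later abandoned
    }
    # A content hash makes later tampering or truncation detectable.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with open("ai_preservation_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record
```

Note the append-only format: preservation means keeping everything as it was, not curating what looks relevant after the fact.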

Next week, we’ll dive into the practical playbook for AI preservation — including the compliance frameworks that actually work, the vendor questions you should be asking, and why your current legal hold software might be more helpful than you think (or more useless than you fear).

P.S. – Yes, this blog post was ideated, outlined, and brooded over with the assistance of AI. Yes, we preserved the prompts. Yes, we’re practicing what we preach. No, we’re not perfect at it yet either.



AI firm Anthropic agrees to pay authors $1.5bn for pirating work

Artificial intelligence (AI) firm Anthropic has agreed to pay $1.5bn (£1.11bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.

The deal, which requires the approval of US District Judge William Alsup, would be the largest publicly reported copyright recovery in history, according to lawyers for the authors.

It comes two months after Judge Alsup found that using books to train AI did not violate US copyright law, but ordered Anthropic to stand trial over its use of pirated material.

Anthropic said on Friday that the settlement would “resolve the plaintiffs’ remaining legacy claims.”

The settlement comes as other big tech companies including ChatGPT-maker OpenAI, Microsoft, and Instagram-parent Meta face lawsuits over similar alleged copyright violations.

Anthropic, with its Claude chatbot, has long pitched itself as the ethical alternative to its competitors.

“We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, Deputy General Counsel at Anthropic, which is backed by both Amazon and Google-parent Alphabet.

The lawsuit was filed against Anthropic last year by best-selling mystery thriller writer Andrea Bartz, whose novels include We Were Never Here, along with The Good Nurse author Charles Graeber and The Feather Thief author Kirk Wallace Johnson.

They accused the company of stealing their work to train its Claude AI chatbot in order to build a multi-billion dollar business.

The company held more than seven million pirated books in a central library, according to Judge Alsup’s June decision, and faced up to $150,000 in damages per copyrighted work.

His ruling was among the first to weigh in on how Large Language Models (LLMs) can legitimately learn from existing material.

It found that Anthropic’s use of the authors’ books was “exceedingly transformative” and therefore allowed under US law.

But he rejected Anthropic’s request to dismiss the case.

Anthropic was set to stand trial in December over its use of pirated copies to build its library of material.

Plaintiffs’ lawyers called the settlement, announced on Friday, “the first of its kind in the AI era.”

“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” said lawyer Justin Nelson representing the authors. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

The settlement could encourage more cooperation between AI developers and creators, according to Alex Yang, Professor of Management Science and Operations at London Business School.

“You need that fresh training data from human beings,” Mr Yang said. “If you want to grant more copyright to AI-created content, you must also strengthen mechanisms that compensate humans for their original contributions.”



Duke University pilot project examining pros and cons of using artificial intelligence in college

