AI Insights
Artificial Intelligence Legal Holds: Preserving Prompts & Outputs

You are your company’s in-house legal counsel. It’s 3 PM on a Friday (because of course it is), and you’ve just received notice of impending litigation. Your first thought? “Time to issue a legal hold.” Your second thought, as you watch your colleague casually chatting with Claude about contract drafting? “Oh no… what about all the AI stuff?”
Welcome to 2025, where your legal hold obligations just got an AI-powered upgrade you never signed up for. This isn’t just theoretical hand-wringing. Companies are already being held accountable for incomplete AI-related preservation, and the costs are real — both in terms of litigation exposure and the scramble to retrofit compliance systems that never anticipated chatbots.
The Plot Twist Nobody Saw Coming
Remember when legal holds meant telling people not to delete their emails? The foundational duty to preserve electronically stored information (ESI) when litigation is “reasonably anticipated” remains the cornerstone of legal hold obligations. However, generative AI’s emergence has significantly complicated this well-established framework. Courts are increasingly making clear that AI-generated content, including prompts and outputs, constitutes ESI subject to traditional preservation obligations.
Those were simpler times. Now, every prompt your team types into ChatGPT, every piece of AI-generated marketing copy, and yes, even that time someone asked Perplexity for restaurant recommendations during a business trip: it’s all potentially discoverable ESI.
Don’t just take our word for it. Several recent court decisions make the point:
- In the In re OpenAI, Inc. Copyright Infringement Litigation MDL (S.D.N.Y.), Magistrate Judge Ona T. Wang ordered OpenAI to preserve and segregate all output log data that would otherwise be deleted (whether deletion would occur by user choice or to satisfy privacy laws). Judge Sidney H. Stein later denied OpenAI’s objection and left the order standing (now on appeal to the Second Circuit). This is the clearest signal yet that courts will prioritize litigation preservation over default deletion settings.
- In Tremblay v. OpenAI (N.D. Cal.), the district court issued a sweeping order requiring OpenAI “to preserve and segregate all output log data that would otherwise be deleted on a going forward basis.” The Tremblay court dropped a truth bomb on us: AI inputs — prompts — can be discoverable.
- And although not AI-specific, recent chat-spoliation rulings (e.g., Google’s chat auto-delete practices) show that judges expect parties to suspend auto-delete once litigation is reasonably anticipated. These cases serve as analogs for AI chat tools.
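What does “suspend auto-delete” actually look like? Here’s a minimal sketch, assuming a purely hypothetical internal admin client for your AI chat tool (the `set_retention` call and the `HoldRecord` structure are illustrative, not any vendor’s real API): once litigation is reasonably anticipated, switch off auto-deletion for each custodian and keep an auditable record that you did.

```python
# Hypothetical sketch: suspend auto-delete for custodians under a legal hold.
# `admin_client` stands in for whatever admin interface your AI chat tool
# exposes (if any); the method name `set_retention` is illustrative only.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class HoldRecord:
    custodian: str   # employee whose AI chat history must be preserved
    matter_id: str   # litigation or investigation identifier
    issued_at: str   # when the hold took effect (UTC, ISO 8601)

def apply_ai_chat_hold(admin_client, custodians, matter_id):
    """Disable auto-deletion and record the hold for each custodian."""
    holds = []
    for user in custodians:
        # 1. Turn off conversation auto-deletion for this user's AI chats.
        admin_client.set_retention(user=user, auto_delete=False)
        # 2. Keep an auditable record that the hold was applied, and when.
        holds.append(HoldRecord(
            custodian=user,
            matter_id=matter_id,
            issued_at=datetime.now(timezone.utc).isoformat(),
        ))
    return holds
```

The point isn’t the code; it’s that “reasonably anticipated” is the trigger, and the auto-delete toggle (plus a record of when you flipped it) is what a court will ask about.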
Your New Reality Check: What Actually Needs Preserving?
Let’s break down what’s now on your preservation radar:
The Obvious Stuff:
- Every prompt typed into AI tools (yes, even the embarrassing ones)
- All AI-generated outputs used for business purposes
- The metadata showing who, what, when, and which AI model
The Not-So-Obvious Stuff:
- Failed queries and abandoned outputs (they still count!)
- Conversations in AI-powered Slack bots and Teams integrations
- That “quick question” someone asked Claude about a competitor
The “Are You Kidding Me?” Stuff:
- Deleted conversations (spoiler alert: they’re often not really deleted)
- Personal AI accounts used for work purposes
- AI-assisted research that never made it into final documents
Of course, knowing what to preserve is only half the battle. The real challenge? Actually implementing AI-aware legal holds when your IT department is still figuring out how to monitor these tools, your employees are using personal accounts for work-related AI, and new AI integrations appear in your tech stack on a weekly basis.
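To make that concrete, here’s a minimal sketch of what “preserve prompts, outputs, and metadata” can look like in practice, assuming a hypothetical in-house wrapper around whatever vendor API your teams actually use (the `client.generate` call and the JSONL file below are stand-ins, not any specific product’s interface):

```python
# Minimal sketch: log every prompt/output pair, plus metadata, to an
# append-only archive. In production this would go to immutable (WORM)
# storage; the vendor call below is a hypothetical stand-in.

import json
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE = Path("ai_preservation_log.jsonl")  # placeholder path

def preserve(user, model, prompt, output):
    """Append one prompt/output pair and its metadata to the archive."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "user": user,      # who
        "model": model,    # which AI model
        "prompt": prompt,  # the input
        "output": output,  # the output, even if abandoned
    }
    with ARCHIVE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def call_model(client, user, model, prompt):
    """Hypothetical wrapper: send the prompt, then preserve both sides of the exchange."""
    output = client.generate(model=model, prompt=prompt)  # stand-in for your vendor's API
    preserve(user=user, model=model, prompt=prompt, output=output)
    return output
```

A wrapper like this won’t catch personal accounts or that rogue browser tab, but it gets the who, what, when, and which-model metadata into a place a legal hold can actually reach.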
Next week, we’ll dive into the practical playbook for AI preservation — including the compliance frameworks that actually work, the vendor questions you should be asking, and why your current legal hold software might be more helpful than you think (or more useless than you fear).
P.S. – Yes, this blog post was ideated, outlined, and brooded over with the assistance of AI. Yes, we preserved the prompts. Yes, we’re practicing what we preach. No, we’re not perfect at it yet either.
AI Insights
Albania appoints AI bot ‘minister’ to fight corruption in world first

Sceptics wonder whether ‘Diella’, depicted as a woman in traditional folk costume, will herself be ‘corrupted’.
Published On 12 Sep 2025
Albanian Prime Minister Edi Rama has put an artificial intelligence-generated “minister” in charge of tackling corruption in his new cabinet.
Diella, which means “sun” in Albanian, was appointed on Thursday, with the leader introducing her as a “member of the cabinet who is not present physically” who will ensure that “public tenders will be 100 percent free of corruption”.
The awarding of tenders has long been a source of corruption in the Balkan country of 2.8 million people, which aspires to join the European Union.
Corruption is a key factor in Albania’s bid to join the bloc.
Rama’s Socialist Party, which recently secured a fourth term in office, has said it can deliver EU membership for Albania in five years, with negotiations concluding by 2027.
Lawmakers will soon vote on Rama’s new cabinet, but it is unclear whether he will ask for a vote on Diella’s virtual post.
Legal experts say more work may be needed to establish the official status of Diella, who is depicted on screen as a woman in a traditional Albanian folk costume.
Gazmend Bardhi, parliamentary group leader of the Democrats, said he considered Diella’s ministerial status unconstitutional.
“[The] Prime Minister’s buffoonery cannot be turned into legal acts of the Albanian state,” Bardhi posted on Facebook.
The prime minister did not provide details of what human oversight there might be for Diella, or address risks that someone could manipulate the artificial intelligence bot.
Launched earlier this year as a virtual assistant on the e-Albania public service platform, Diella helped users navigate the site and get access to about one million digital documents.
So far, she has helped issue 36,600 digital documents and provided nearly 1,000 services through the platform, according to official figures.
Not everyone is convinced.
One Facebook user said, “Even Diella will be corrupted in Albania.”
Another said, “Stealing will continue and Diella will be blamed.”
AI Insights
Artificial Intelligence Stocks To Watch Now – September 11th – MarketBeat
AI Insights
OpenAI reaches new agreement with Microsoft to change its corporate structure

OpenAI has reached a new tentative agreement with Microsoft and said its nonprofit, which technically controls its business, will now be given a $100 billion equity stake in its for-profit corporation.
The maker of ChatGPT said it had reached a new nonbinding agreement with Microsoft, its longtime partner, “for the next phase of our partnership.”
Thursday’s announcements included few details about the new arrangements. OpenAI’s proposed changes to its corporate structure have drawn the scrutiny of regulators, competitors and advocates concerned about the impacts of artificial intelligence.
OpenAI was founded as a nonprofit in 2015 and its nonprofit board has continued to control the for-profit subsidiary that now develops and sells its AI products. It’s not clear whether the $100 billion equity stake the nonprofit will get as part of this announcement represents a controlling stake in the business.
California Attorney General Rob Bonta said last week that his office was investigating OpenAI’s proposed restructuring of its finances and governance.
He and Delaware Attorney General Kathy Jennings also sent the company a letter expressing concerns about the safety of ChatGPT after meeting with OpenAI’s legal team earlier last week in Delaware, where OpenAI is incorporated.
“Together, we are particularly concerned with ensuring that the stated safety mission of OpenAI as a non-profit remains front and center,” Bonta said in a statement last week.
Microsoft invested its first $1 billion in OpenAI in 2019 and the two companies later formed an agreement that made Microsoft the exclusive provider of the computing power needed to build OpenAI’s technology. In turn, Microsoft heavily used the technology behind ChatGPT to enhance its own AI products.
The two companies announced on Jan. 21 that they were altering that agreement, enabling the smaller company to build its own computing capacity, “primarily for research and training of models.” That coincided with OpenAI’s announcement of a partnership with Oracle to build a massive new data center in Abilene, Texas.
But other parts of its agreements with Microsoft remained up in the air as the two companies appeared to veer further apart. Their Thursday joint statement said they were still “actively working to finalize contractual terms in a definitive agreement.” Both companies declined further comment.
OpenAI had given its nonprofit board of directors — whose members now include a former U.S. Treasury secretary — the responsibility of deciding when its AI systems have reached the point at which they “outperform humans at most economically valuable work,” a concept known as artificial general intelligence, or AGI.
Such an achievement, per its earlier agreements, would cut off Microsoft from the rights to commercialize such a system, since the terms “only apply to pre-AGI technology.”
OpenAI’s corporate structure and nonprofit mission are also the subject of a lawsuit brought by Elon Musk, who helped found the nonprofit research lab and provided initial funding. Musk’s suit seeks to stop OpenAI from taking control of the company away from its nonprofit and alleges it has betrayed its promise to develop AI for the benefit of humanity.
___
The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.