Artificial Intelligence Legal Holds: Preserving Prompts & Outputs

You are your company’s in-house legal counsel. It’s 3 PM on a Friday (because of course it is), and you’ve just received notice of impending litigation. Your first thought? “Time to issue a legal hold.” Your second thought, as you watch your colleague casually chatting with Claude about contract drafting? “Oh no… what about all the AI stuff?”

Welcome to 2025, where your legal hold obligations just got an AI-powered upgrade you never signed up for. This isn’t just theoretical hand-wringing. Companies are already being held accountable for incomplete AI-related preservation, and the costs are real — both in terms of litigation exposure and the scramble to retrofit compliance systems that never anticipated chatbots.

The Plot Twist Nobody Saw Coming

Remember when legal holds meant telling people not to delete their emails? The foundational duty to preserve electronically stored information (ESI) when litigation is “reasonably anticipated” remains the cornerstone of legal hold obligations. However, generative AI’s emergence has significantly complicated this well-established framework. Courts are increasingly making clear that AI-generated content, including prompts and outputs, constitutes ESI subject to traditional preservation obligations.

Those were simpler times. Now, every prompt your team types into ChatGPT, every piece of AI-generated marketing copy, and yes, even that time someone asked Perplexity for restaurant recommendations during a business trip — it’s all potentially discoverable ESI.

Or so say several recent court decisions:

  • In the In re OpenAI, Inc. Copyright Infringement Litigation MDL (SDNY), Magistrate Judge Ona T. Wang ordered OpenAI to preserve and segregate all output log data that would otherwise be deleted (whether deletion would occur by user choice or to satisfy privacy laws). Judge Sidney H. Stein later denied OpenAI’s objection and left the order standing (now on appeal to the Second Circuit). This is the clearest signal yet that courts will prioritize litigation preservation over default deletion settings.
  • In Tremblay v. OpenAI (N.D. Cal.), the district court issued a sweeping order requiring OpenAI “to preserve and segregate all output log data that would otherwise be deleted on a going forward basis.” The Tremblay court dropped a truth bomb on us: AI inputs — prompts — can be discoverable. 
  • And although not AI-specific, recent chat-spoliation rulings (e.g., Google’s chat auto-delete practices) show that judges expect parties to suspend auto-delete once litigation is reasonably anticipated. These cases serve as analogs for AI chat tools.

Your New Reality Check: What Actually Needs Preserving?

Let’s break down what’s now on your preservation radar:

The Obvious Stuff:

  • Every prompt typed into AI tools (yes, even the embarrassing ones)
  • All AI-generated outputs used for business purposes
  • The metadata showing who, what, when, and which AI model

The Not-So-Obvious Stuff:

  • Failed queries and abandoned outputs (they still count!)
  • Conversations in AI-powered Slack bots and Teams integrations
  • That “quick question” someone asked Claude about a competitor

The “Are You Kidding Me?” Stuff:

  • Deleted conversations (spoiler alert: they’re often not really deleted)
  • Personal AI accounts used for work purposes
  • AI-assisted research that never made it into final documents
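
If you want a mental model for what capturing all of this looks like in practice, here is a minimal sketch of an AI-aware capture layer, written in Python with entirely hypothetical names. The idea: whenever a hold is active, a thin wrapper around your AI calls appends each prompt, each output, and the who/when/which-model metadata to an append-only log. It illustrates the concept only; it is not any vendor’s API, and it is no substitute for a defensible, tamper-evident preservation system.

```python
# Minimal sketch of an AI-aware capture layer (hypothetical names throughout).
# A real preservation system also needs tamper-evident storage, access controls,
# and retention scoped to specific matters and custodians.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

HOLD_ACTIVE = True                           # flip on when litigation is reasonably anticipated
HOLD_LOG = Path("legal_hold_ai_log.jsonl")   # append-only log of prompts and outputs


def preserve_interaction(user: str, model: str, prompt: str, output: str) -> None:
    """Append one prompt/output pair plus who/when/which-model metadata."""
    if not HOLD_ACTIVE:
        return
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt": prompt,
        "output": output,
    }
    # Hash the record so later tampering or silent edits are detectable.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with HOLD_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def ask_model(user: str, model: str, prompt: str) -> str:
    """Wrap whatever function actually calls the AI tool."""
    output = f"[model response to: {prompt}]"  # placeholder for the real API call
    preserve_interaction(user, model, prompt, output)
    return output
```

The same idea extends to the failed queries and abandoned outputs in the list above: log the attempt even when nobody keeps the answer.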

Of course, knowing what to preserve is only half the battle. The real challenge? Actually implementing AI-aware legal holds when your IT department is still figuring out how to monitor these tools, your employees are using personal accounts for work-related AI, and new AI integrations appear in your tech stack on a weekly basis. 

Next week, we’ll dive into the practical playbook for AI preservation — including the compliance frameworks that actually work, the vendor questions you should be asking, and why your current legal hold software might be more helpful than you think (or more useless than you fear).

P.S. – Yes, this blog post was ideated, outlined, and brooded over with the assistance of AI. Yes, we preserved the prompts. Yes, we’re practicing what we preach. No, we’re not perfect at it yet either.



Mapping the power of AI across the patient journey

Artificial intelligence (AI) is rapidly transforming clinical care, offering healthcare leaders new tools to improve workflows through automation and enhance patient outcomes with more accurate diagnoses and personalized treatments. This resource provides a framework for understanding how AI is applied across the patient journey, from pre-visit interactions to post‑visit monitoring and ongoing care. It focuses on actionable use cases to help healthcare organizations evaluate AI technologies holistically, balance innovation with feasibility, and navigate the evolving landscape of AI in healthcare.

For a deeper exploration of any specific use case featured in this infographic, check out our comprehensive compendium. It offers detailed insights into these technologies, including their benefits, implementation considerations, and evolving role in healthcare.



Artificial intelligence (AI) for trusted computing and cyber security

Summary points:

  • New software security system enhances protection for NVIDIA Jetson-powered embedded AI systems.
  • Secures AI models and sensitive data through encrypted APIs, process isolation, and secure OS features.
  • Includes anti-rollback protection, automatic OS recovery, and centralized web-based monitoring tools for edge devices.

TAIPEI, Taiwan – AAEON Technology is introducing a software security system for the company’s embedded artificial intelligence (AI) systems powered by NVIDIA Jetson system-on-modules.

The cyber security system is built on a three-tiered architecture with components that protect data at the edge and in the cloud. It is available as part of the board support package for SKUs of AAEON’s BOXER-8621AI, BOXER-8641AI-Plus, and the BOXER-8651AI-Plus Edge AI systems.

The most notable component of this trusted computing product is a trusted execution environment (TEE) named MAZU, which protects AI models and application data by separating files, processes, and algorithms into protected execution zones.

Sensitive assets

MAZU lets users isolate machine learning algorithms from the standard applications running alongside them. Access to sensitive assets is granted only through certified APIs, with encrypted communications, a secure OS, and certificate validation.

Other mechanisms include anti-rollback protection, which prevents attackers from reverting system software to previous versions; A/B redundancy partitioning, which restores a stable OS image if a device fails; and disk lock, which encrypts data storage on edge devices.
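
As a generic illustration of the anti-rollback idea (not AAEON’s MAZU implementation, whose internals are not detailed here), a device keeps a minimum-version counter in protected storage, refuses any image older than that counter, and ratchets the counter forward after each successful update. A rough Python sketch:

```python
# Generic anti-rollback sketch (illustrative only, not AAEON's implementation).
# A minimum security version is kept in protected, tamper-resistant storage;
# any candidate image older than it is rejected, and successful updates
# ratchet the counter forward so older images can never be reinstalled.

class AntiRollback:
    def __init__(self, min_allowed_version: int) -> None:
        self.min_allowed_version = min_allowed_version  # read from protected storage

    def allow_update(self, candidate_version: int) -> bool:
        """Reject any image whose security version is below the stored minimum."""
        return candidate_version >= self.min_allowed_version

    def apply_update(self, candidate_version: int) -> str:
        if not self.allow_update(candidate_version):
            return "rejected: rollback to an older image is not permitted"
        # Ratchet the counter so this version becomes the new floor.
        self.min_allowed_version = max(self.min_allowed_version, candidate_version)
        return "installed"


if __name__ == "__main__":
    guard = AntiRollback(min_allowed_version=7)
    print(guard.apply_update(9))   # installed; floor is now 9
    print(guard.apply_update(8))   # rejected as a rollback attempt
```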

The package also contains server-side management tools, including a web-based UI that can monitor several edge systems from a single server.

For more information contact AAEON online at https://www.aaeon.com/en/article/detail/software_security_framework.



AI rollout in NHS hospitals faces major challenges

Implementing artificial intelligence (AI) in NHS hospitals is far harder than initially anticipated, with complications around governance, contracts, data collection, harmonisation with old IT systems, finding the right AI tools, and staff training, finds a major new UK study led by UCL researchers.

Authors of the study, published in The Lancet eClinicalMedicine, say the findings should provide timely and useful learning for the UK Government, whose recent 10-year NHS plan identifies digital transformation, including AI, as a key platform for improving the service and patient experience.

In 2023, NHS England launched a programme to introduce AI to help diagnose chest conditions, including lung cancer, across 66 NHS hospital trusts in England, backed by £21 million in funding. The trusts are grouped into 12 imaging diagnostic networks: these hospital networks mean more patients have access to specialist opinions. Key functions of these AI tools included prioritising critical cases for specialist review and supporting specialists’ decisions by highlighting abnormalities on scans.

Funded by the National Institute for Health and Care Research (NIHR), this research was conducted by a team from UCL, the Nuffield Trust, and the University of Cambridge, analysing how procurement and early deployment of the AI tools went. It is one of the first studies to analyse real-world implementation of AI in healthcare.

Evidence from previous studies¹, mostly laboratory-based, suggested that AI might benefit diagnostic services by supporting decisions, improving detection accuracy, reducing errors and easing workforce burdens.

In this UCL-led study, the researchers reviewed how the new diagnostic tools were procured and set up through interviews with hospital staff and AI suppliers, identifying any pitfalls but also any factors that helped smooth the process.

They found that setting up the AI tools took longer than anticipated by the programme’s leadership. Contracting took between four and ten months longer than anticipated, and by June 2025, 18 months after contracting was meant to be completed, a third (23 out of 66) of the hospital trusts were not yet using the tools in clinical practice.

Key challenges included engaging clinical staff who already had high workloads, embedding the new technology in ageing and varied NHS IT systems across dozens of hospitals, and a general lack of understanding of, and scepticism about, using AI in healthcare among staff.

The study also identified important factors that helped embed AI, including national programme leadership, local imaging networks sharing resources and expertise, high levels of commitment from hospital staff leading implementation, and dedicated project management.

The researchers concluded that while “AI tools may offer valuable support for diagnostic services, they may not address current healthcare service pressures as straightforwardly as policymakers may hope”. They recommend that NHS staff be trained in how AI can be used effectively and safely, and that dedicated project management be used to implement schemes like this in the future.

First author Dr Angus Ramsay (UCL Department of Behavioural Science and Health) said: “In July ministers unveiled the Government’s 10-year plan for the NHS, of which a digital transformation is a key platform.

“Our study provides important lessons that should help strengthen future approaches to implementing AI in the NHS.

“We found it took longer to introduce the new AI tools in this programme than those leading the programme had expected.

“A key problem was that clinical staff were already very busy – finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals. Services that used dedicated project managers found their support very helpful in implementing changes, but only some services were able to do this.

“Also, a common issue was the novelty of AI, suggesting a need for more guidance and education on AI and its implementation.

“AI tools can offer valuable support for diagnostic services, but they may not address current healthcare service pressures as simply as policymakers may hope.”

The researchers conducted their evaluation between March and September 2024, studying 10 of the participating networks and focusing in depth on six NHS trusts. They interviewed network teams, trust staff and AI suppliers, observed planning, governance and training, and analysed relevant documents.

Some of the imaging networks and many of the hospital trusts within them were new to procuring and working with AI.

The problems involved in setting up the new tools varied – for example, in some cases those procuring the tools were overwhelmed by a huge amount of very technical information, increasing the likelihood of key details being missed. Consideration should be given to creating a national approved shortlist of potential suppliers to facilitate procurement at local level, the researchers said.

Another problem was an initial lack of enthusiasm among some NHS staff for the new technology in this early phase, with some more senior clinical staff raising concerns about the potential for AI to make decisions without clinical input and about where accountability would lie if a condition was missed. The researchers found the training offered to staff did not address these issues sufficiently across the wider workforce – hence their call for early and ongoing training on future projects.

In contrast, however, the study team found the process of procurement was supported by advice from the national team and imaging networks learning from each other. The researchers also observed high levels of commitment and collaboration between local hospital teams (including clinicians and IT) working with AI supplier teams to progress implementation within hospitals.

In this project, each hospital selected AI tools for different reasons, such as focusing on X-ray or CT scanning, and purposes, such as to prioritise urgent cases for review or to identify potential symptoms.


“The NHS is made up of hundreds of organisations with different clinical requirements and different IT systems, and introducing any diagnostic tools that suit multiple hospitals is highly complex. These findings indicate AI might not be the silver bullet some have hoped for, but the lessons from this study will help the NHS implement AI tools more effectively.”

Naomi Fulop, Senior Author, Professor, UCL Department of Behavioural Science and Health

Limitations

While the study has added to the very limited body of evidence on the implementation and use of AI in real-world settings, it focused on procurement and early deployment. The researchers are now studying the use of AI tools following early deployment when they have had a chance to become more embedded. Further, the researchers did not interview patients and carers and are therefore now conducting such interviews to address important gaps in knowledge about patient experiences and perspectives, as well as considerations of equity.

Journal reference:

Ramsay, A. I. G., et al. (2025). Procurement and early deployment of artificial intelligence tools for chest diagnostics in NHS services in England: a rapid, mixed method evaluation. eClinicalMedicine. https://doi.org/10.1016/j.eclinm.2025.103481


