
AI Research

What legal teams need to know

A comprehensive overview of AI and law in this one-stop guide.

Highlights

  • AI usage among legal professionals has nearly doubled in the past year, with 26% now using GenAI at work
  • Professional-grade AI, trained on verified legal content, is essential for accuracy and security, unlike consumer-grade tools that can provide incorrect information
  • Ethical obligations, including human supervision to manage bias and factual inaccuracies, are paramount, with the ABA and state bars issuing guidance for lawyers using AI

As legal professionals increasingly use artificial intelligence (AI) in their work, legal practice is evolving. The Thomson Reuters Institute 2025 Generative AI in Professional Services Report found that 26% of professionals use generative AI (GenAI) at work, almost twice the 14% who did in 2024. Law firm attorneys led the way with the highest usage, followed by in-house counsel and government lawyers.

Given this, legal experts expect the increasing use of AI to significantly change legal professionals’ roles. The 2025 Thomson Reuters Future of Professionals Report found that 80% of professionals think AI will have a high or transformational impact on their jobs over the next five years. These professionals are most excited about AI’s ability to free up time, help them work more efficiently and productively, and produce higher-quality work.

To remain competitive and take advantage of opportunities to grow, legal teams must adapt to these changes while maintaining their ethical obligations. This guide is a one-stop resource for attorneys in law firms and in-house counsel who are navigating this profound AI shift. It covers what AI is, how legal professionals are using it, and what legal teams should do next to ensure they’re benefiting from it as soon as possible.

Jump to ↓

  • What is AI in the legal context?
  • Timeline of AI in law
  • Core use cases of AI in legal practice
  • AI in the courtroom
  • Legal ethics and regulatory considerations
  • Data privacy and security
  • Perspectives from legal leaders
  • How can your legal team prepare?


What is AI in the legal context?

Let’s start by defining terms commonly used when discussing AI and law:

  • AI is an umbrella term for technology that can simulate human abilities such as learning, reasoning, problem-solving, decision-making, and understanding language.
  • Machine learning (ML) is a type of AI that learns from patterns in data to make decisions and recommendations.
  • Generative AI (GenAI) is a type of AI that creates new text, images, audio, and video content in response to users’ prompts. Popular GenAI-based solutions include ChatGPT, Copilot, and Gemini.
  • Natural language processing (NLP) uses ML to understand and generate human language. It’s key to GenAI, translation tools, and voice recognition.
  • Agentic AI plans and executes multi-step processes, often by drawing on other forms of AI, in pursuit of predefined objectives under human oversight and control.
  • Consumer-grade, public tools, such as ChatGPT, train on vast amounts of information covering almost every subject found online. Some of that data is unverified or simply wrong, and those errors can carry through into the tools’ results.
  • Professional-grade AI, such as CoCounsel Legal, is built on curated, verified legal content containing up-to-date, reliable information. These solutions are also designed to handle tasks specific to legal practice, including legal research, document review, and contract analysis.

In all cases, it’s vital to remember that AI assists with legal reasoning but doesn’t replace it. It automates certain routine tasks and serves as a jumpstart for more complex work. Lawyers must still apply their own professional skills and judgment when deciding how to use the information and insights AI provides.

Timeline of AI in law

Though AI in legal work has increased dramatically in recent years, the history of artificial intelligence and law goes back more than two decades. Beginning in the early 2000s, e-discovery tools started using AI to search through documents, improving results with capabilities such as concept-based matching rather than simple keyword search. This is when companies such as Clio got their start. From about 2010 to 2018, AI began to be used for legal analytics: Thomson Reuters Westlaw incorporated AI and ML into legal research in 2010, and LexisNexis followed seven years later.

The next milestone was GenAI, which brought dramatic gains in efficiency. New companies in the AI and law space, such as Harvey AI, emerged, and existing legal AI vendors created increasingly powerful tools.

Core use cases of AI in legal practice

Lawyers use AI in multiple ways, and these are among the most common use cases cited in the 2025 Generative AI Report:

Document review and analysis

Document review and analysis is the most-used capability, with AI completing in seconds tasks that usually take hours. AI can find the needle in the haystack among millions of pages and review a wide variety of documents, from case files to contracts.

Legal research

Professional-grade AI can conduct deep legal research using proprietary, trusted content.

Document summarization

Document summarization saves lawyers and staff substantial time. GenAI can find the information that’s most relevant to a specific case or project.

Brief or memo writing

Brief or memo writing is fast and thorough with legal AI tools. These help lawyers jumpstart the process, find citations and references, make the document consistent, and answer questions.

Contract drafting

Contract drafting AI tools find relevant documents to use as starting points, locate clauses from trusted sources, and incorporate preferred language.

Correspondence drafting

Correspondence drafting is a time-consuming part of any lawyer’s day. AI speeds up email and letter writing by suggesting phrasing options, summarizing documents, checking grammar, and automating parts of the process.

AI in the courtroom

In June 2023, an attorney filed a brief written with the help of ChatGPT, citing legal cases that supported his client’s position. But as the judge discovered, six of those cases didn’t exist; ChatGPT had made them up.

In 2024, the Thomson Reuters Generative AI in Professional Services Report found that 31% of those working in courts expressed concern about using AI, making it the most common reaction among them, compared to 26% who felt hesitant. Also, only 15% of court respondents were excited about these technologies—the lowest percentage reported in any job segment in the survey.

Sentiment on the future of GenAI:

  • Hesitant: 35%
  • Hopeful: 23%
  • Excited: 21%
  • Concerned: 16%
  • Fearful: 2%
  • None of these: 2%

Source: Thomson Reuters 2024 Generative AI in Professional Services Report

“Courts will likely face the issue of whether to admit evidence generated in whole or in part from GenAI or LLMs, and new standards for reliability and admissibility may develop for this type of evidence.”

Rawia Ashraf

Head of Product, CoCounsel Transactional & GCOs

Despite the “ChatGPT lawyer” headlines, courts’ hesitation could simply signal that they are taking time to figure out where and how the technology fits into the modern court system.

“I have used ChatGPT and other AI programs, and it saves time and levels the playing field for certain tasks and professions, but [it’s] also a bit dangerous in a sense that if it gets censored, it will inevitably be biased,” said one US judge. “But for computing and translating data, I think it is amazing.”

Legal ethics and regulatory considerations

AI provides many benefits for legal professionals, but it also can pose ethical challenges.

The data used to train AI can be biased because of historical attitudes now recognized as unfair, a limited geographical range, or faulty algorithms. Human supervision is important to manage this risk, especially when working with general-purpose AI such as ChatGPT. As noted above, AI can also return factually incorrect information and even make things up.

In response to GenAI’s rapid emergence following the launch of ChatGPT in late 2022, the American Bar Association (ABA) issued a formal opinion in 2024 on the ethical obligations involved in lawyers’ use of GenAI. Many state and local bar associations have since published their own recommendations or plan to do so soon.

AI regulations are still evolving. The European Union was ahead of the pack, adopting the world’s first comprehensive set of rules on AI in June 2024. There’s nothing comparable at the federal level in the U.S., though some laws affect certain aspects of AI usage, such as the California Privacy Rights Act and the federal Fair Credit Reporting Act.


Data privacy and security

Legal professionals must prioritize data privacy and security to protect sensitive client information from increasing cyber threats. Professional AI solutions need strong security controls and strict standards to maintain clients’ trust and protect confidential data.

Thomson Reuters prioritizes data security, privacy, and compliance, maintaining a comprehensive information security management framework. For more information on its security commitments and compliance practices, visit the Thomson Reuters Trust Center.

Perspectives from legal leaders

Legal leaders have experienced both challenges and successes in incorporating AI into their practice. Many are especially enthusiastic about how much time AI saves them.

Jarret Coleman, general counsel, Century Communities, says, “A task that would previously have taken an hour was completed in five minutes or less. Something that would’ve taken us a couple of weeks to do, now gets back to the business-side in a day or two. That’s huge.”

John Polson, Chairman and Managing Partner at Fisher Phillips, LLP, has had a similar experience, saying, “CoCounsel is truly revolutionary legal tech. Its power to increase our attorneys’ efficiency has already benefited our clients. And we have only scratched the surface of this incredible technology.”

Productivity gains improve work-life balance, but security and reliability are among the top concerns.

Scott Bailey, director of research and knowledge services at Eversheds Sutherland, echoes their enthusiasm: “The AI landscape is transformed with CoCounsel. The power of this technology, deployed in a product that is secure and reliable, is a huge leap forward.”

How can your legal team prepare?

Legal teams that approach AI tool adoption systematically are likely to do so more successfully:

AI readiness checklist

A thoughtful checklist can help you:

  • Identify use cases
  • Understand responsible AI use
  • Get your peers and leadership on board
  • Research and select tools

AI education and training resources

Thomson Reuters publishes scholarly articles on topics such as:

  • Segmenting handwritten and printed text in marked-up legal documents
  • The best techniques for prompting GenAI
  • Uncertainty quantification effectiveness in text classification
  • Making a computational attorney

AI evaluation criteria

When looking for a legal AI tool or vendor, consider:

  • Was the tool trained on a reliable legal database and not the open web?
    • If so, what are its legal sources?
  • Does it assist with the specific legal tasks you need to complete?
  • Will it connect with platforms you are already using?
  • How long has the vendor been in business and been working with AI?
  • Most important, does the solution keep your data private and secure at all times?

Cross-functional planning

Make sure all relevant teams are involved in evaluating and rolling out AI tools.

The time to begin using AI legal solutions is now. The lawyers and organizations that come out ahead will be those that embrace AI’s ability to amplify expertise, knowing it’s not a replacement for human professional proficiency.

Teams that invest in education and thoughtful adoption of these new tools will move the profession forward and help realize AI’s full potential. Thomson Reuters is committed to partnering with legal professionals who want to become leaders in this new era.





Who is Shawn Shen? The Cambridge alumnus and ex-Meta scientist offering $2M to poach AI researchers

Shawn Shen, co-founder and Chief Executive Officer of the artificial intelligence (AI) startup Memories.ai, has made headlines for offering compensation packages worth up to $2 million to attract researchers from top technology companies. In a recent interview with Business Insider, Shen explained that many scientists are leaving Meta, the parent company of Facebook, due to constant reorganisations and shifting priorities.

“Meta is constantly doing reorganizations. Your manager and your goals can change every few months. For some researchers, it can be really frustrating and feel like a waste of time,” Shen told Business Insider, adding that this is a key reason why researchers are seeking roles at startups. He also cited Meta Chief Executive Officer Mark Zuckerberg’s philosophy that “the biggest risk is not taking any risks” as a motivation for his own move into entrepreneurship.

With Memories.ai, a company developing AI capable of understanding and remembering visual data, Shen is aiming to build a niche team of elite researchers. His company has already recruited Chi-Hao Wu, a former Meta research scientist, as Chief AI Officer, and is in talks with other researchers from Meta’s Superintelligence Lab as well as Google DeepMind.

From full scholarships to Cambridge classrooms

Shen’s academic journey is rooted in engineering, supported consistently by merit-based scholarships. He studied at Dulwich College from 2013 to 2016 on a full scholarship, completing his A-Level qualifications. He then pursued higher education at the University of Cambridge, where he was awarded full scholarships throughout. Shen earned a Bachelor of Arts (BA) in Engineering (2016–2019), followed by a Master of Engineering (MEng) at Trinity College (2019–2020). He later continued at Cambridge as a Meta PhD Fellow, completing his Doctor of Philosophy (PhD) in Engineering between 2020 and 2023.

Early career: Internships in finance and research

Alongside his academic pursuits, Shen gained early experience through internships and analyst roles in finance. He worked as a Quantitative Research Summer Analyst at Killik & Co in London (2017) and as an Investment Banking Summer Analyst at Morgan Stanley in Shanghai (2018). Shen also interned as a Research Scientist at the Computational and Biological Learning Lab at the University of Cambridge (2019), building the foundations for his transition into advanced AI research.

From Meta’s Reality Labs to academia

After completing his PhD, Shen joined Meta (Reality Labs Research) in Redmond, Washington, as a Research Scientist (2022–2024). His time at Meta exposed him to cutting-edge work in generative AI, but also to the frustrations of frequent corporate restructuring. This experience eventually drove him toward building his own company. In April 2024, Shen began his academic career as an Assistant Professor at the University of Bristol, before launching Memories.ai in October 2024.

Betting on talent with $2M offers

Explaining his company’s aggressive hiring packages, Shen told Business Insider: “It’s because of the talent war that was started by Mark Zuckerberg. I used to work at Meta, and I speak with my former colleagues often about this. When I heard about their compensation packages, I was shocked — it’s really in the tens of millions range. But it shows that in this age, AI researchers who make the best models and stand at the frontier of technology are really worth this amount of money.”

Shen noted that Memories.ai is looking to recruit three to five researchers in the next six months, followed by up to ten more within a year. The company is prioritising individuals willing to take a mix of equity and cash, with Shen emphasising that these recruits would be treated as founding members rather than employees.

By betting heavily on talent, Shen believes Memories.ai will be in a strong position to secure additional funding and establish itself in the competitive AI landscape. His bold $2 million offers may raise eyebrows, but they also underline a larger truth: in today’s technology race, the fiercest competition is not for customers or capital, it’s for talent.







The Energy Monster AI Is Creating

We don’t really know how much energy artificial intelligence is consuming. There aren’t any laws currently on the books requiring AI companies to disclose their energy usage or environmental impact, and most firms therefore opt to keep that controversial information close to the vest. Plus, large language models are evolving all the time, increasing in both complexity and efficiency, complicating outside efforts to quantify the sector’s energy footprint. But while we don’t know exactly how much electricity data centers are eating up to power ever-increasing AI integration, we do know that it’s a whole lot. 

“AI’s integration into almost everything from customer service calls to algorithmic “bosses” to warfare is fueling enormous demand,” the Washington Post recently reported. “Despite dramatic efficiency improvements, pouring those gains back into bigger, hungrier models powered by fossil fuels will create the energy monster we imagine.”

And that energy monster is weighing heavily on the minds of policymakers around the world. Global leaders are busily wringing their hands over the potentially disastrous impact AI could have on energy security, especially in countries like Ireland, Saudi Arabia, and Malaysia, where planned data center development outpaces planned energy capacity. 

In a rush to keep ahead of a critical energy shortage, public and private entities involved on both the tech and energy sides of the issue have been rushing to increase energy production capacities by any means. Countries are in a rush to build new power plants as well as to keep existing energy projects online beyond their planned closure dates. Many of these projects are fossil fuel plants, causing outcry that indiscriminate integration of artificial intelligence is undermining the decarbonization goals of nations and tech firms the world over. 

“From the deserts of the United Arab Emirates to the outskirts of Ireland’s capital, the energy demands of AI applications and training running through these centres are driving the surge of investment into fossil fuels,” reports the Financial Times. Globally, more than 85 gas-powered facilities are currently being built to meet AI’s energy demand, according to figures from Global Energy Monitor.

In the United States, the demand surge is leading to the resurrection of old coal plants. Coal has been in terminal decline for years now in the U.S., and a large number of defunct plants are scattered around the country with valuable infrastructure that could lend itself to a speedy new power plant hookup. Thanks to the AI revolution, many of these plants are now set to come back online as natural gas-fired plants. While gas is cleaner than coal, the coal-to-gas route may come at the expense of clean energy projects that could have otherwise used the infrastructure and coveted grid hookups of defunct coal-fired power plants. 

“Our grid isn’t short on opportunity — it’s short on time,” Carson Kearl, Enverus senior analyst for energy and AI, recently told Fortune. “These grid interconnections are up for grabs for new power projects when these coal plants roll off. The No. 1 priority for Big Tech has changed to [speed] to energy, and this is the fastest way to go in a lot of cases,” Kearl continued.

Last year, Google stated that the company’s carbon emissions had skyrocketed by a whopping 48 percent over the last five years thanks to its AI integration. “AI-powered services involve considerably more computer power – and so electricity – than standard online activity, prompting a series of warnings about the technology’s environmental impact,” the BBC reported last summer. Google had previously pledged to reach net zero greenhouse gas emissions by 2030, but the company now concedes that “as we further integrate AI into our products, reducing emissions may be challenging.”

By Haley Zaremba for Oilprice.com 



JUPITER: Europe’s First Exascale Supercomputer Powers AI and Climate Research

The JUPITER supercomputer at the Jülich Research Centre, Germany, September 5, 2025. (Getty Images/INA FASSBENDER/AFP)

As reported by the European Commission’s press service

At the Jülich Research Centre in Germany, the supercomputer JUPITER was ceremonially inaugurated on September 5, becoming the first system in Europe to surpass the exaflop performance threshold. It is capable of performing more than one quintillion operations per second, according to the European Commission’s press service.

According to the EU, JUPITER runs entirely on renewable energy sources and features advanced cooling and heat disposal systems. It also topped the Green500 global energy-efficiency ranking.

The supercomputer is located on a site covering more than 2,300 square meters and comprises about 50 modular containers. It is currently the fourth-fastest supercomputer in the world.

JUPITER is capable of running high-resolution climate and meteorological models with kilometer-scale resolution, which allows more accurate forecasts of extreme events – from heat waves to floods.

Role in the European AI ecosystem and industrial developments

In addition, the system will form the backbone of the future European AI factory JAIF, which will train large language models and other generative technologies.

The investment in JUPITER amounts to about 500 million euros – a joint project of the EU and Germany under the EuroHPC programme. This is part of a broader strategy to build a network of AI gigafactories that will provide industry and science with the capabilities to develop new models and technologies.

It is expected that the deployment of JUPITER will strengthen European research-industrial initiatives and enhance the EU’s competitiveness on the global stage in the field of artificial intelligence and scientific developments.
