
AI Research

Thomson Reuters Unveils First Agentic AI Legal Assistant CoCounsel


CoCounsel Legal includes Deep Research, an industry-first AI solution grounded in Thomson Reuters expert legal content, starting with Westlaw

TORONTO, Aug. 5, 2025 /PRNewswire/ — Today, Thomson Reuters (TSX/Nasdaq: TRI), a global content and technology company, announced the launch of CoCounsel Legal, featuring Deep Research and agentic guided workflows. This milestone product release showcases Thomson Reuters' most advanced AI offering to date, designed to help professionals move beyond prompting and start delegating.

Thomson Reuters is uniquely positioned to deliver this agentic AI breakthrough because it already has access to an AI assistant with advanced reasoning models, comprehensive legal content and tools, and expertise from thousands of domain experts—the three essential components required for reliable, professional-grade agentic AI.

CoCounsel Legal: Built for Execution, Not Just Ideas

CoCounsel Legal is a next-generation AI product that brings together legal research, essential workflow automation, intelligent document search and AI-powered legal assistance within one unified solution. It’s the first time agentic AI has been deployed this broadly across professional legal workflows—with the infrastructure and scale to support enterprise-wide transformation.

Unlike AI copilots that sit beside the work, CoCounsel Legal is embedded in it—built to drive real outcomes in litigation, transactional work, and regulatory analysis. It represents the foundation for how professional AI will be delivered across law firms and legal departments into the future.

Deep Research on Westlaw: AI That Thinks Like a Lawyer

CoCounsel Legal includes Deep Research, grounded in Thomson Reuters content and tools. Deep Research in CoCounsel is the legal industry’s first professional-grade agentic AI research capability—built to reason, plan, and deliver comprehensive legal research results grounded in Westlaw and Practical Law content. It enables legal professionals to hand off full research questions to an AI that not only understands the assignment — it explains its process, sources its answers, and builds the argument foundations, with human oversight.

Deep Research on the new Westlaw Advantage is integrated into CoCounsel Legal and can:

  • Generate multi-step research plans
  • Trace its logic with transparent reasoning
  • Deliver structured, Westlaw and Practical Law citation-backed reports

“Thomson Reuters' latest integration of advanced AI models into its core platforms marks an encouraging step forward in legal technology,” said Colleen Nihill, Chief AI & KM Officer at Morgan Lewis. “Deep Research stands out for its ability to reason through legal questions rather than simply return search results. When faced with a complex issue, it can generate a research plan, explain its logic, and deliver a structured report. This level of transparency is essential to maintaining the oversight and trust lawyers need to confidently adopt AI in practice.”

Guided Workflows: Multi-Step AI That Actually Works

Also included in CoCounsel Legal is a growing library of guided workflows—multi-step task flows that apply agentic AI to high-friction legal work, leveraging Westlaw and Practical Law expertise. In addition to the CoCounsel guided workflows launched in July, there are new workflows launching soon:

  • Draft a privacy policy
  • Draft an employee policy
  • Draft a complaint
  • Draft a discovery request
  • Draft a discovery response
  • Review a deposition transcript

These workflows embed legal content, apply structured reasoning, and provide human oversight by design, helping lawyers and corporate legal teams move faster while staying in control.

“This is where AI starts to feel less like a tool and more like a teammate,” said David Wong, Chief Product Officer at Thomson Reuters. “Guided workflows transform how professionals approach complex legal work, moving beyond simple prompting to sophisticated, multi-step task execution—and that’s a huge leap forward in what legal AI can deliver.”

Trusted by the Legal Market

Over 20,000 law firms and corporate legal departments, along with the majority of the top US courts and Am Law 100 firms, already trust CoCounsel. With CoCounsel Legal, the company is setting a new benchmark for trusted, explainable, and production-ready AI in legal practice.

Additional Information

Thomson Reuters 

Thomson Reuters (TSX/Nasdaq: TRI) informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. The company serves professionals across legal, tax, accounting, compliance, government, and media. Its products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. For more information, visit tr.com.

Contact: 
Ali Hughes
+1.763.326.4421
ali.hughes@thomsonreuters.com 

 

View original content to download multimedia: https://www.prnewswire.com/news-releases/thomson-reuters-launches-cocounsel-legal-transforming-legal-work-with-agentic-ai-and-deep-research-302521761.html

SOURCE Thomson Reuters





100x Faster Brain-Inspired AI Model


In the rapidly evolving field of artificial intelligence, a new contender has emerged from China’s research labs, promising to reshape how we think about energy-efficient computing. The SpikingBrain-7B model, developed by the Brain-Inspired Computing Lab (BICLab) at the Chinese Academy of Sciences, represents a bold departure from traditional large language models. Drawing inspiration from the human brain’s neural firing patterns, this system employs spiking neural networks to achieve remarkable efficiency gains. Unlike conventional transformers that guzzle power, SpikingBrain-7B mimics biological neurons, firing only when necessary, which slashes energy consumption dramatically.
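To illustrate the firing behavior described above (a generic textbook model, not BICLab's actual implementation), a minimal leaky integrate-and-fire neuron in Python shows how a spiking unit accumulates input over time and emits a binary spike only when its membrane potential crosses a threshold, staying silent (and cheap) otherwise:

```python
def lif_neuron(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire: leak the membrane potential, integrate
    incoming values, and emit a binary spike (1) only when the potential
    crosses the threshold; reset after firing."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = decay * potential + x   # leak, then integrate input
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                 # reset after firing
        else:
            spikes.append(0)
    return spikes

# Sparse input yields sparse output: the neuron is silent on most steps.
print(lif_neuron([0.2, 0.2, 0.9, 0.1, 0.0, 1.2]))  # [0, 0, 1, 0, 0, 1]
```

Because downstream computation is only triggered by the rare 1s, an event-driven implementation skips work entirely on silent steps, which is the source of the energy savings the article describes.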

At its core, the model integrates hybrid-linear attention mechanisms and conversion-based training techniques, allowing it to run on domestic MetaX chips without relying on NVIDIA hardware. This innovation addresses a critical bottleneck in AI deployment: the high energy demands of training and inference. According to a technical report published on arXiv, the SpikingBrain series, including the 7B and 76B variants, demonstrates over 100 times faster first-token generation at long sequence lengths, making it ideal for edge devices in industrial control and mobile applications.
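The faster first-token generation at long sequence lengths follows from replacing quadratic softmax attention with linear-complexity variants. As an illustrative sketch (a generic linear-attention formulation, not the SpikingBrain code), reassociating (QKᵀ)V into Q(KᵀV) with a positive feature map turns an O(N²·d) computation into O(N·d²), linear in sequence length N:

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: materializes an N x N score matrix, O(N^2 * d)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, eps=1e-6):
    """Linear attention: apply a positive feature map phi and reassociate
    (Q K^T) V into Q (K^T V), so the cost is O(N * d^2), linear in N."""
    phi = lambda x: np.maximum(x, 0) + eps   # simple positive feature map
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                  # d x d summary, independent of N
    z = Qf @ Kf.sum(axis=0)        # per-query normalizer
    return (Qf @ kv) / z[:, None]

rng = np.random.default_rng(0)
N, d = 8, 4
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

The d×d summary `kv` can also be updated incrementally during generation, so each new token costs O(d²) regardless of how long the context is, which is why the speedup grows with sequence length.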

Breaking Away from Transformer Dominance

The genesis of SpikingBrain-7B can be traced to BICLab’s GitHub repository, where the open-source code reveals a sophisticated architecture blending spiking neurons with large-scale model training. Researchers at the lab, led by figures like Guoqi Li and Bo Xu, have optimized for non-NVIDIA clusters, overcoming challenges in parallel training and communication overhead. This approach not only enhances stability but also paves the way for neuromorphic hardware that prioritizes energy optimization over raw compute power.

Recent coverage in Xinhua News highlights how SpikingBrain-1.0, the foundational system, breaks from mainstream models like ChatGPT by using spiking networks instead of dense computations. This brain-inspired paradigm allows the model to train on just a fraction of the data typically required—reports suggest as little as 2%—while matching or exceeding transformer performance in benchmarks.

Efficiency Gains and Real-World Applications

Delving deeper, the model’s spiking mechanism enables asynchronous processing, akin to how the brain handles information dynamically. This is detailed in the arXiv report, which outlines a roadmap for next-generation hardware that could integrate seamlessly into sectors like healthcare and transportation. For instance, in robotics, SpikingBrain’s low-power profile supports real-time decision-making without the need for massive data centers.

Posts on X (formerly Twitter) from AI enthusiasts, such as those praising its 100x speedups, reflect growing excitement. Users have noted how the model’s hierarchical processing mirrors neuroscience findings, with emergent brain-like patterns in its structure. This sentiment aligns with broader neuromorphic computing trends, as seen in a Nature Communications Engineering article on advances in robotic vision, where spiking networks enable efficient AI in constrained environments.

Challenges and Future Prospects

Despite its promise, deploying SpikingBrain-7B isn’t without hurdles. The arXiv paper candidly discusses adaptations needed for CUDA and Triton operators in hybrid attention setups, underscoring the technical feats involved. Moreover, training on MetaX clusters required custom optimizations to handle long-sequence topologies, a feat that positions China at the forefront of independent AI innovation amid global chip restrictions.

In industry circles, this development is seen as a catalyst for shifting AI paradigms. A NotebookCheck report emphasizes its potential for up to 100x performance boosts over conventional systems, fueling discussions on sustainable AI. As neuromorphic computing gains traction, SpikingBrain-7B could inspire a wave of brain-mimicking models, reducing the environmental footprint of AI while expanding its reach to everyday devices.

Implications for Global AI Research

Beyond technical specs, the open-sourcing of SpikingBrain-7B via GitHub invites global collaboration, with the repository already garnering attention for its spike-driven transformer implementations. This mirrors earlier BICLab projects like Spike-Driven-Transformer-V2, building a continuum of research toward energy-efficient intelligence.

Looking ahead, experts anticipate integrations with emerging hardware, as outlined in PMC’s coverage of spike-based dynamic computing. With SpikingBrain’s bilingual capabilities and industry validations, it stands as a testament to how bio-inspired designs can democratize AI, challenging Western dominance and fostering a more inclusive technological future.






Exclusive | Cyberport may use Chinese GPUs at Hong Kong supercomputing hub to cut reliance on Nvidia


Cyberport may add some graphics processing units (GPUs) made in China to its Artificial Intelligence Supercomputing Centre in Hong Kong, as the government-run incubator seeks to reduce its reliance on Nvidia chips amid worsening China-US relations, its chief executive said.

Cyberport has bought four GPUs made by four different mainland Chinese chipmakers and has been testing them at its AI lab to gauge which ones to adopt in the expanding facilities, Rocky Cheng Chung-ngam said in an interview with the Post on Friday. The park has been weighing the use of Chinese GPUs since it first began installing Nvidia chips last year, he said.

“At that time, China-US relations were already quite strained, so relying solely on [Nvidia] was no longer an option,” Cheng said. “That is why we felt that for any new procurement, we should in any case include some from the mainland.”

Cyberport’s AI supercomputing centre, established in December with its first phase offering 1,300 petaflops of computing power, will deliver another 1,700 petaflops by the end of this year, with all 3,000 petaflops currently relying on Nvidia’s H800 chips, he added.


As all four Chinese solutions offer similar performance, Cyberport would take cost into account when determining which ones to order, according to Cheng, who declined to name the suppliers.






Why do AI chatbots use so much energy?


In recent years, ChatGPT has exploded in popularity, with nearly 200 million users submitting a total of over a billion prompts to the app every day. Those prompts may seem to be answered out of thin air.

But behind the scenes, artificial intelligence (AI) chatbots are using a massive amount of energy. In 2023, data centers, which are used to train and process AI, were responsible for 4.4% of electricity use in the United States. Across the world, these centers make up around 1.5% of global energy consumption. These numbers are expected to skyrocket, at least doubling by 2030 as the demand for AI grows.



