AI Research

Lessons Learned From CDW’s AI Research Report


1. Solve Problems Instead of Deploying New Tools for Their Own Sake

Organizations may feel pressured to try something simply because of the hype; resist that urge. Instead, identify a clear problem that needs to be solved and determine how AI would fit in.

Some easy early deployments may come from capabilities or features already built into solutions your organization uses, such as a productivity software suite or an electronic health records system.

Another problem area could be any repetitive administrative tasks that would benefit from automation. One reason ambient listening tools have held consistent interest is that organizations want to reduce clinician burden and mitigate burnout. How can health systems reduce “pajama time” for clinicians so that they can repair patient relationships?

READ MORE: Take advantage of data and artificial intelligence for better healthcare outcomes.

2. Amid Regulatory Uncertainty, Have a Solid AI Governance Structure

As algorithms improve and regulatory responses remain in flux, healthcare organizations need AI governance structures that offer both stability and agility. And with requirements that can vary state by state, a multidisciplinary approach is crucial to keep up with changes.

Create the proper work groups, with the right representation of stakeholders, to ask the right questions about potential use cases, the end-user experience, risk recognition and mitigation, ethical concerns, algorithmic bias, compliance, and data quality.

Infrastructure considerations also need to be factored in. How ready is your organization to adopt more AI solutions? Do your teams have the right skill sets? Have you secured your environment? Are there any on-premises considerations versus workloads that should move to the cloud? Organizations will need to build out landing zones and may have different strategies when it comes to how they are using their compute and storage.

EXPLORE: How should healthcare organizations navigate artificial intelligence evaluation and implementation?

3. Keep Data Security and Privacy at the Forefront

Data governance goes hand in hand with AI governance: most AI-powered solutions require high-quality data, which is table stakes at this point, along with strategies for protecting that data.

Solution vendors also need to offer more transparency so organizations can adequately assess whether a solution will meet regulatory requirements. That transparency matters because real harm can result when an AI solution gets a prediction wrong or is built on poor data. A one-size-fits-all approach to AI in healthcare is simply not possible, and there will likely still be a need for human discernment, a human in the loop, to ensure outcomes are not causing harm.
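The human-in-the-loop pattern described above is often implemented as a confidence gate: predictions the model is sure of proceed automatically, while uncertain ones are queued for clinician review. The sketch below illustrates that routing logic only; the labels, confidence values, and 0.9 threshold are invented for illustration, not taken from any particular product.

```python
def route_prediction(label, confidence, threshold=0.9):
    """Route one AI prediction: auto-accept only when confidence clears
    the threshold; otherwise flag it for human review."""
    route = "auto" if confidence >= threshold else "human_review"
    return {"label": label, "route": route}

def triage(predictions, threshold=0.9):
    # Partition a batch of (label, confidence) pairs into auto-accepted
    # results and a human-review queue.
    auto, review = [], []
    for label, conf in predictions:
        routed = route_prediction(label, conf, threshold)
        (auto if routed["route"] == "auto" else review).append(routed)
    return auto, review
```

In practice the threshold would be tuned per use case, and anything sent to review would carry enough context (inputs, model version, rationale) for the reviewer to make a sound call.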

This article is part of HealthTech’s MonITor blog series.





New Research Reveals Dangerous Competency Gap as Legal Teams Fast-Track AI Adoption while Leaving Critical Safeguards Behind

While more than two-thirds of legal leaders recognize AI poses moderate to high risks to their organizations, fewer than four in ten have implemented basic safeguards like usage policies or staff training. Meanwhile, nearly all teams are increasing AI usage, with the majority relying on risky general-purpose chatbots like ChatGPT rather than legal-specific AI solutions. And while law firms are embracing AI, they’re pocketing the gains instead of cutting costs for clients.

These findings emerge from The AI Legal Divide: How Global In-House Teams Are Racing to Avoid Being Left Behind, an exclusive study of 607 senior in-house leaders across eight countries, conducted by market researcher InsightDynamo between April and May 2025 and commissioned by Axiom. The study also reveals that U.S. legal teams are finding themselves outpaced by international competitors: Singapore leads the world with one-third of teams achieving full AI maturity, while the U.S. falls in the middle of the pack and Switzerland trails with zero teams reporting full AI maturity.

Among the most striking findings:

  • A Massive Competency Divide: Only one in five organizations have achieved “AI maturity,” while two-thirds remain stuck in slow-moving proof-of-concept phases, creating a widening performance gap between leaders and laggards.
  • Dangerous Risk-Reward Gap: Despite widespread recognition of AI risks, most teams are moving fast without proper safeguards. Fewer than half have implemented basic protections like usage policies or staff training.
  • Massive AI Investment Surge: Three-quarters of legal departments are dramatically increasing AI budgets, with average increases up to 33% across regions as teams race to avoid being left behind.
  • Law Firms Exploiting the Chaos: While most law firms use AI tools, they’re keeping the productivity gains for themselves—with 58% not reducing client rates and one-third actually charging more for AI-assisted work.
  • Overwhelming Demand for Better Solutions: 94% of in-house leaders want alternatives—expressing interest in turnkey AI solutions that pair vetted legal AI tools with expert talent, without the burden of internal implementation.

“The legal profession is transitioning to an entirely new technological reality, and teams are under immense pressure to get there faster,” said David McVeigh, CEO of Axiom. “What’s troubling is that most in-house teams are going it alone—they’re not AI experts, they’re mostly using risky general-purpose chatbots, and their law firms are capitalizing on AI without sharing the benefits. This creates both opportunity and urgency for legal departments to find better alternatives.”

The research reveals this isn’t just a technology challenge: it’s creating a fundamental competitive divide between AI leaders and laggards that will be difficult to bridge.

“Legal leaders face a catch-22,” said C.J. Saretto, Chief Technology Officer at Axiom. “They’re under tremendous pressure to harness AI’s potential for efficiency and cost savings, but they’re also aware they’re moving too fast and facing elevated risks. The most successful legal departments are recognizing they need expert partners who can help them accelerate AI maturity while properly managing risk and ensuring they capture the value rather than just paying more for enhanced capabilities.”

Axiom’s full AI maturity study is available at https://www.axiomlaw.com/resources/articles/2025-legal-ai-report. For more information or to talk to an Axiom representative, visit https://www.axiomlaw.com. For more information about Axiom, please visit our website, hear from our experts on the Inside Axiom blog, network with us on LinkedIn, and subscribe to our YouTube channel.

About InsightDynamo

InsightDynamo is a high-touch, full-service, flexible market research and business consulting firm that delivers custom intelligence programs tailored to your industry, culture, and one-of-a-kind challenges. Learn more (literally) at https://insightdynamo.com.

About Axiom

Axiom invented the alternative legal services industry 25 years ago and now serves more than 3,500 legal departments globally, including 75% of the Fortune 100, with 95% client satisfaction. Axiom gives small, mid-market, and enterprise clients a single trusted provider who can deliver a full spectrum of legal solutions and services across more than a dozen practice areas and all major industries at rates up to 50% less than national law firms. To learn how Axiom can help your legal department do more for less, visit axiomlaw.com.

SOURCE Axiom Global Inc.




Santos Dumont, LNCC’s supercomputer, receives fourfold upgrade as the first step in the Brazilian Artificial Intelligence Plan


The upgraded supercomputer, built by Eviden and based on leading technologies from NVIDIA, Intel and AMD, is the first step toward transforming it into one of the largest supercomputers in the world.

Brazil – July 9, 2025

Built by Eviden (Atos Group), a technology leader in sustainable advanced computing and AI infrastructures, and integrating enterprise technology from NVIDIA, a pioneer in accelerated computing and artificial intelligence, this upgrade of the supercomputer is part of the Federal Government’s first investment step toward the Brazilian Artificial Intelligence Plan. The Brazilian Artificial Intelligence Plan (PBIA) 2024-2028, launched during the 5th National Conference on Science, Technology and Innovation, has a planned investment of R$23 billion over four years to transform Brazil into a world reference in innovation and efficiency in the use of AI.

For more information, please click here.




Our most capable open models for health AI development


Healthcare is increasingly embracing AI to improve workflow management, patient communication, and diagnostic and treatment support. It’s critical that these AI-based systems are not only high-performing, but also efficient and privacy-preserving. It’s with these considerations in mind that we built and recently released Health AI Developer Foundations (HAI-DEF). HAI-DEF is a collection of lightweight open models designed to offer developers robust starting points for their own health research and application development. Because HAI-DEF models are open, developers retain full control over privacy, infrastructure and modifications to the models. In May of this year, we expanded the HAI-DEF collection with MedGemma, a collection of generative models based on Gemma 3 that are designed to accelerate healthcare and life sciences AI development.

Today, we’re proud to announce two new models in this collection. The first is MedGemma 27B Multimodal, which complements the previously released 4B Multimodal and 27B text-only models by adding support for complex multimodal and longitudinal electronic health record interpretation. The second new model is MedSigLIP, a lightweight image and text encoder for classification, search, and related tasks. MedSigLIP is based on the same image encoder that powers the 4B and 27B MedGemma models.

MedGemma and MedSigLIP are strong starting points for medical research and product development. MedGemma is useful for medical text or imaging tasks that require generating free text, like report generation or visual question answering. MedSigLIP is recommended for imaging tasks that involve structured outputs like classification or retrieval. All of the above models can be run on a single GPU, and MedGemma 4B and MedSigLIP can even be adapted to run on mobile hardware.
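Encoders in the SigLIP family support zero-shot classification and retrieval by embedding images and candidate text labels into a shared space and scoring their similarity. The sketch below shows only those generic similarity-and-softmax mechanics with tiny invented vectors; a real workflow would obtain the embeddings from the MedSigLIP image and text encoders, and the labels here are made up for illustration.

```python
import math

def softmax(scores):
    # Subtract the max before exponentiating for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def zero_shot_classify(image_emb, label_embs, labels):
    """Score an image embedding against text-label embeddings by cosine
    similarity, then normalize the scores into a distribution."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    sims = [cos(image_emb, t) for t in label_embs]
    return dict(zip(labels, softmax(sims)))
```

The same similarity scores, ranked instead of normalized, drive retrieval: embed a text query once and return the nearest image embeddings from an index.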

Full details of MedGemma and MedSigLIP development and evaluation can be found in the MedGemma technical report.


