
AI Research

Clinician-Powered AI Solutions Set a New Standard: Black Book Survey Reveals Six Vendors Earning Near-Perfect User Scores



While healthcare AI adoption is frequently hindered by poor design, weak integration, and limited clinician involvement, six standout vendors have defied these industry-wide trends, according to a new Black Book Research survey. Conducted among 448 clinicians in Q2 2025, the study highlights how direct clinician involvement in AI design and development has delivered measurable, immediate clinical impact, with nearly 100% clinician satisfaction.

Qualitative KPIs to Measure Clinician Input and AI Usability

The following eight qualitative key performance indicators (KPIs) were designed specifically to measure how clinicians’ insights directly shape the usability and effectiveness of AI solutions in healthcare:

Each KPI is listed with how clinician input is measured and how the impact and usability of the AI solutions are evaluated.

Workflow Fit
How clinician input is measured: Clinicians provide narrative feedback on integration into daily routines and suggest workflow improvements.
How impact and usability are evaluated: Clinician testimonials confirm seamless AI integration, highlighting improved workflow efficiency.

User Satisfaction
How clinician input is measured: Open-ended feedback captures clinicians’ daily experience, usability issues, and cognitive load challenges.
How impact and usability are evaluated: Users share qualitative accounts of enhanced ease-of-use and reduced cognitive burden.

Adoption & Sustained Use
How clinician input is measured: Clinicians reflect on training effectiveness, onboarding experiences, and potential barriers encountered.
How impact and usability are evaluated: Users describe sustained routine use, indicating AI’s long-term integration into clinical practices.

Clinical Impact
How clinician input is measured: Clinicians suggest impactful AI features and narrate perceived improvements in patient care.
How impact and usability are evaluated: Case studies and clinician narratives detail specific patient care enhancements attributed to AI implementation.

Time Savings
How clinician input is measured: Clinicians detail their experience of administrative and repetitive tasks before and after AI implementation.
How impact and usability are evaluated: Qualitative feedback indicates substantial time savings, allowing more time for direct patient care.

Feedback Loop Responsiveness
How clinician input is measured: Clinicians describe experiences submitting feedback and the responsiveness of AI developers.
How impact and usability are evaluated: Users provide accounts of meaningful and rapid incorporation of their suggestions into product updates.

Depth of Clinician Involvement
How clinician input is measured: Clinicians share descriptions of their roles in AI product governance and collaborative design processes.
How impact and usability are evaluated: Narratives document clinician-led decision-making significantly shaping AI product evolution.

Transparency & Trust
How clinician input is measured: Clinicians reflect on their understanding of AI-generated recommendations and related educational efforts.
How impact and usability are evaluated: Users offer testimonials describing high levels of trust in transparent and explainable AI outputs.

The following vendors ranked highest among over 200 evaluated, with near-perfect cumulative scores on these KPIs:

Epic Systems (EHR & Clinical AI) – 9.97/10

Epic Systems demonstrates exceptional clinician-powered design through its “Physician Builder” program, embedding frontline clinicians into every product stage. Clinicians rate Epic highly for Workflow Fit, User Satisfaction, Adoption & Sustained Use, and Time Savings, noting dramatically reduced documentation burdens and enhanced patient interactions.

Signal 1 (Predictive Analytics & Clinical Decision Support) – 9.95/10

Signal 1 excels in predictive analytics and clinical decision support by deeply involving hospital-based clinical teams in solution development. Clinicians praised Signal 1’s Clinical Impact, User Satisfaction, Transparency, and Workflow Fit, highlighting its seamless integration into care routines and meaningful impact on patient outcomes.

Aidoc (Imaging AI & Radiology) – 9.93/10

Aidoc distinguishes itself through continuous development alongside practicing radiologists, ensuring optimal workflow alignment and usability. With exceptional ratings for Clinical Impact, Workflow Fit, and User Satisfaction, radiologists endorse Aidoc for providing precise, actionable diagnostic insights seamlessly integrated into daily practices.

Suki AI (Digital Clinical Assistant) – 9.91/10

Suki AI’s voice-enabled clinical assistant platform, designed through ongoing clinician collaboration, achieves high scores in User Satisfaction, Workflow Fit, and Feedback Loop Responsiveness. Clinicians highlight Suki’s intuitive usability, considerable time savings, and consistent responsiveness to real-world user feedback.

Notable (Workflow Automation & Ambient Documentation AI) – 9.90/10

Notable’s clinician-first approach actively incorporates frontline user feedback, resulting in high scores for Workflow Fit, Feedback Loop Responsiveness, Transparency & Trust, and Adoption & Sustained Use. Clinicians commend Notable’s seamless integration and significant reduction in administrative workload.

Viz.ai (Imaging & Acute Care Coordination AI) – 9.90/10

Viz.ai collaborates directly with neurologists, radiologists, and emergency physicians, delivering immediate and measurable clinical outcomes, notably rapid stroke interventions. Clinicians awarded Viz.ai top marks in Clinical Impact, Workflow Fit, Time Savings, and Transparency & Trust, emphasizing the solution’s actionable and timely insights.
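
Black Book does not publish the formula behind these cumulative scores. Purely as an illustration, the sketch below assumes each vendor receives a 0-10 rating on each of the eight KPIs and that the cumulative score is their simple average; the ratings shown are hypothetical placeholders, not figures from the survey.

# Minimal sketch: combining the eight qualitative KPIs into one cumulative score.
# Assumption: each KPI is rated 0-10 and the cumulative score is the simple mean.
# The example ratings are hypothetical, not data from the Black Book survey.

KPIS = [
    "Workflow Fit",
    "User Satisfaction",
    "Adoption & Sustained Use",
    "Clinical Impact",
    "Time Savings",
    "Feedback Loop Responsiveness",
    "Depth of Clinician Involvement",
    "Transparency & Trust",
]

def cumulative_score(ratings: dict) -> float:
    """Return the mean 0-10 rating across all eight KPIs (assumed aggregation)."""
    missing = [kpi for kpi in KPIS if kpi not in ratings]
    if missing:
        raise ValueError(f"missing KPI ratings: {missing}")
    return round(sum(ratings[kpi] for kpi in KPIS) / len(KPIS), 2)

# Hypothetical vendor ratings that average to a near-perfect cumulative score.
example_ratings = {kpi: 9.9 for kpi in KPIS}
example_ratings["Clinical Impact"] = 10.0
print(cumulative_score(example_ratings))  # 9.91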

__________

Clinician-Centric Design Drives True AI Success

“Healthcare AI’s greatest successes don’t arise from technical brilliance alone; they are rooted in deeply embedding clinicians within the design and deployment process,” notes Doug Brown, Black Book’s Founder. “These six standout vendors represent a new breed of AI innovation, demonstrating what’s possible when frontline clinical expertise guides technology. In a healthcare environment often overwhelmed by disconnected, frustrating technology, these companies offer a compelling vision of what clinician-powered AI can truly achieve: immediate usability, rapid adoption, measurable clinical outcomes, and meaningful clinician satisfaction.”

About Black Book Research

Black Book™ is the healthcare industry’s leading independent research and survey organization, trusted by healthcare providers, payers, and investors. Black Book maintains a vast database comprising over 3.3 million responses from healthcare IT users, clinicians, executives, and operational users across more than 10,000 healthcare programs, software solutions, managed services, outsourcing initiatives, consulting firms, start-ups, and capital equipment suppliers globally, including input from 65 countries. Black Book’s rigorous research methodology provides deep, unbiased insights into healthcare technology and service performance, empowering informed strategic decision-making across the healthcare industry.

Media Contact:
research@blackbookmarketresearch.com
https://www.blackbookmarketresearch.com

Source: Black Book Research




AI Research

Hungary Boosts AI Research with New Supercomputer at University of Szeged

A state-of-the-art supercomputer designed for artificial intelligence research has been installed at the University of Szeged, marking a major step in Hungary’s digital innovation strategy and boosting its global presence in AI development.

Hungary has taken a major step toward becoming a significant player in artificial intelligence (AI) research with the installation of a cutting-edge supercomputer at the University of Szeged (SZTE). The high-performance system, unveiled on Tuesday, is optimized specifically for AI-focused scientific work and brings the university into the ranks of elite global research institutions.

Government Commissioner for Artificial Intelligence László Palkovics hailed the investment as a transformative milestone. ‘This supercomputer is our entry ticket into a world typically beyond the reach of countries the size of Hungary,’ he said at the inauguration event.

The university has acquired 1.75 petaflops of computing capacity, with additional access to more powerful resources managed by Hewlett Packard Enterprise (HPE), which supplied the technology. According to Palkovics, Hungary has also partnered with Europe’s largest supercomputing centre in Jülich, Germany, to secure access to 10 petaflops of computing power, with the possibility of scaling further. Hungarian-owned computing infrastructure may also be hosted at the German site to provide access to its full capacity when needed.

In a further development, German tech company ParTec AG announced plans for a 3 billion euro investment to build a new data centre in Hungary, which is expected to eventually double the capacity of the Jülich facility. Energy security, political stability, and Hungary’s skilled labour pool were cited as key reasons for selecting the country as the site.

László Bódis, Deputy State Secretary for Innovation at the Ministry of Culture and Innovation, noted that the University of Szeged is a leader in developing Hungary’s innovation ecosystem, contributing both public funding and its own resources. He highlighted ongoing projects such as the Science Park, industrial-academic partnerships with pharmaceutical firms, and nuclear waste treatment research.

Managing Director of HPE Hungary Tibor Szpisják emphasized that the supercomputer in Szeged employs the same top-tier technology used by the fastest systems in Europe and among the top three globally. He explained that the supercomputer’s performance can expand indefinitely as more services migrate onto the system.

During the event, SZTE and HPE signed a strategic agreement covering joint research and education initiatives. Szpisják expressed optimism that their shared laboratory will bring new products and services to market.

According to Csaba Fekete, Director of IT Services at SZTE, the system is tailored for solving complex AI challenges in fields such as medicine, genomics, language models, and transportation. The partnership also gives researchers scalable access to HPE’s cloud computing resources based on project needs.

The total cost of the IT investment was 1.2 billion forints (approx. 3 million euros), with operational costs estimated at 800 million forints over the next five years.



AI Research

VUMC’s Section of Surgical Sciences and LG forge collaboration on AI initiatives for medical needs – VUMC News


AI Research

FDA needs to develop labeling standards for AI-powered medical devices – News Bureau



CHAMPAIGN, Ill. — Medical devices that harness the power of artificial intelligence or machine learning algorithms are rapidly transforming health care in the U.S., with the Food and Drug Administration already having authorized the marketing of more than 1,000 such devices and many more in the development pipeline. A new paper from a University of Illinois Urbana-Champaign expert in the ethical and legal challenges of AI and big data for health care argues that the regulatory framework for AI-based medical devices needs to be improved to ensure transparency and protect patients’ health.

Sara Gerke, the Richard W. & Marie L. Corman Scholar at the College of Law, says that the FDA must prioritize the development of labeling standards for AI-powered medical devices in much the same way that there are nutrition facts labels on packaged food.

“The current lack of labeling standards for AI- or machine learning-based medical devices is an obstacle to transparency in that it prevents users from receiving essential information about the devices and their safe use, such as the race, ethnicity and gender breakdowns of the training data that was used,” she said. “One potential remedy is that the FDA can learn a valuable lesson from food nutrition labeling and apply it to the development of labeling standards for medical devices augmented by AI.”

The push for increased transparency around AI-based medical devices is complicated not only by different regulatory issues surrounding AI but also by what constitutes a medical device in the eyes of the U.S. government.

If something is considered a medical device, “then the FDA has the power to regulate that tool,” Gerke said.

“The FDA has the authority from Congress to regulate medical products such as drugs, biologics and medical devices,” she said. “With some exceptions, a product powered by AI or machine learning and intended for use in the diagnosis of disease — or in the cure, mitigation, treatment or prevention of disease — is classified as a medical device under the Federal Food, Drug, and Cosmetic Act. That way, the FDA can assess the safety and effectiveness of the device.”

If you tested a drug in a clinical trial, “you would have a high degree of confidence that it is safe and effective,” she said.


But there are almost no clinical trials for AI tools in the U.S., Gerke noted.

“Many AI-powered medical devices are based on deep learning, a subset of machine learning, and are essentially ‘black boxes.’ Their reasoning why the tool made a particular recommendation, prediction or decision is hard, if not impossible, for humans to understand,” she said. “The algorithms can be adaptive if they are not locked and can thus be much more unpredictable in practice than a drug that’s been put through rigorous tests and clinical trials.”

It’s also difficult to assess a new technology’s reliability and efficacy once it’s been implemented in a hospital, Gerke said.

“Normally, you would need to revalidate the tool before deploying it in a hospital because it also depends on the patient population and other factors. So it’s much more complex than just plugging it in and using it on patients,” she said.

Although the FDA has yet to permit the marketing of a generative AI model that’s similar to ChatGPT, it’s almost certain that such a device will eventually be released, and there will need to be disclosures to both health care practitioners and patients that such outputs are AI-generated, said Gerke, also a professor at the European Union Center at Illinois.

“It needs to be clear to practitioners and patients that the results generated from these devices were AI-generated simply because we’re still in the infancy stage of the technology, and it’s well-documented that large language models occasionally ‘hallucinate’ and give users false information,” she said.

According to Gerke, the big takeaway of the paper is that it’s the first to argue that there is a need not only for regulators like the FDA to develop “AI Facts labels,” but also for a “front-of-package” AI labeling system.

“The use of front-of-package AI labels as a complement to AI Facts labels can further users’ literacy by providing at-a-glance, easy-to-understand information about the medical device and enable them to make better-informed decisions about its use,” she said.

In particular, Gerke argues for two AI Facts labels — one primarily addressed to health care practitioners, and one geared to consumers.

“To summarize, a comprehensive labeling framework for AI-powered medical devices should consist of four components: two AI Facts labels, one front-of-package AI labeling system, the use of modern technology like a smartphone app and additional labeling,” she said. “Such a framework includes everything from a simple ‘trustworthy AI’ symbol to instructions for use, fact sheets for patients and labeling for AI-generated content, all of which will enhance user literacy about the benefits and pitfalls of the AI, in much the same way that food labeling provides information to consumers about the nutritional content of their food.”
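
To make the proposal concrete, the sketch below models a hypothetical “AI Facts” label as a small data structure. The field names are illustrative assumptions drawn from the examples Gerke cites (training-data demographic breakdowns, intended use, disclosure of AI-generated output, a front-of-package symbol); they are not an FDA specification and do not come from the paper.

from dataclasses import dataclass, field

@dataclass
class AIFactsLabel:
    """Hypothetical 'AI Facts' label for an AI-powered medical device.
    Field names are illustrative assumptions, not an FDA standard."""
    device_name: str
    audience: str                                # "practitioner" or "patient" version of the label
    intended_use: str                            # diagnosis, cure, mitigation, treatment or prevention
    training_data_demographics: dict = field(default_factory=dict)  # race/ethnicity/gender breakdowns
    algorithm_type: str = "locked"               # "locked" vs. "adaptive" algorithm
    discloses_ai_generated_output: bool = True   # outputs flagged as AI-generated
    front_of_package_symbol: str = ""            # at-a-glance indicator, e.g. a "trustworthy AI" symbol

# Hypothetical practitioner-facing label for an imaginary imaging triage tool.
label = AIFactsLabel(
    device_name="Example Imaging Triage AI",
    audience="practitioner",
    intended_use="Flag suspected intracranial hemorrhage for radiologist review",
    training_data_demographics={"gender": {"female": 0.52, "male": 0.48}},
    front_of_package_symbol="trustworthy AI",
)
print(label)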

The paper’s recommendations aren’t exhaustive but should help regulators start to think about “the challenging but necessary task” of developing labeling standards for AI-powered medical devices, Gerke said.


“This paper is the first to establish a connection between front-of-package nutrition labeling systems and their promise for AI, as well as making concrete policy suggestions for a comprehensive labeling framework for AI-based medical devices,” she said.

The paper was published in the Emory Law Journal.

The research was funded by the European Union.


