AI Research

FDA needs to develop labeling standards for AI-powered medical devices

CHAMPAIGN, Ill. — Medical devices that harness the power of artificial intelligence or machine learning algorithms are rapidly transforming health care in the U.S., with the Food and Drug Administration already having authorized the marketing of more than 1,000 such devices and many more in the development pipeline. A new paper from a University of Illinois Urbana-Champaign expert in the ethical and legal challenges of AI and big data for health care argues that the regulatory framework for AI-based medical devices needs to be improved to ensure transparency and protect patients’ health.

Sara Gerke, the Richard W. & Marie L. Corman Scholar at the College of Law, says that the FDA must prioritize the development of labeling standards for AI-powered medical devices, in much the same way that nutrition facts labels are required on packaged food.

“The current lack of labeling standards for AI- or machine learning-based medical devices is an obstacle to transparency in that it prevents users from receiving essential information about the devices and their safe use, such as the race, ethnicity and gender breakdowns of the training data that was used,” she said. “One potential remedy is that the FDA can learn a valuable lesson from food nutrition labeling and apply it to the development of labeling standards for medical devices augmented by AI.”

The push for increased transparency around AI-based medical devices is complicated not only by different regulatory issues surrounding AI but also by what constitutes a medical device in the eyes of the U.S. government.

If something is considered a medical device, “then the FDA has the power to regulate that tool,” Gerke said.

“The FDA has the authority from Congress to regulate medical products such as drugs, biologics and medical devices,” she said. “With some exceptions, a product powered by AI or machine learning and intended for use in the diagnosis of disease — or in the cure, mitigation, treatment or prevention of disease — is classified as a medical device under the Federal Food, Drug, and Cosmetic Act. That way, the FDA can assess the safety and effectiveness of the device.”

If you tested a drug in a clinical trial, “you would have a high degree of confidence that it is safe and effective,” she said.

But there are almost no clinical trials for AI tools in the U.S., Gerke noted.

“Many AI-powered medical devices are based on deep learning, a subset of machine learning, and are essentially ‘black boxes.’ The reasoning behind why such a tool made a particular recommendation, prediction or decision is hard, if not impossible, for humans to understand,” she said. “The algorithms can be adaptive if they are not locked and can thus be much more unpredictable in practice than a drug that’s been put through rigorous tests and clinical trials.”

It’s also difficult to assess a new technology’s reliability and efficacy once it’s been implemented in a hospital, Gerke said.

“Normally, you would need to revalidate the tool before deploying it in a hospital because it also depends on the patient population and other factors. So it’s much more complex than just plugging it in and using it on patients,” she said.
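
To make that revalidation step concrete, here is a minimal sketch of what checking a device against a hospital's own patient population might look like. The file name, column names and threshold below are illustrative assumptions, not part of Gerke's paper.

```python
# Minimal local-revalidation sketch: before deployment, score the model on the
# hospital's own labeled cases and compare against pre-specified thresholds.
# File name, column names, and the 0.80 threshold are illustrative assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score, brier_score_loss

local = pd.read_csv("local_validation_cohort.csv")  # hypothetical local export
y_true = local["confirmed_diagnosis"]               # 0/1 ground-truth labels
y_score = local["model_risk_score"]                 # device output in [0, 1]

auc = roc_auc_score(y_true, y_score)                # discrimination
brier = brier_score_loss(y_true, y_score)           # calibration

print(f"Local AUROC: {auc:.3f}, Brier score: {brier:.3f}")
if auc < 0.80:  # acceptance threshold chosen for illustration only
    print("Discrimination below target on the local population; hold deployment.")
```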

Although the FDA has yet to permit the marketing of a generative AI model that’s similar to ChatGPT, it’s almost certain that such a device will eventually be released, and there will need to be disclosures to both health care practitioners and patients that the device’s outputs are AI-generated, said Gerke, also a professor at the European Union Center at Illinois.

“It needs to be clear to practitioners and patients that the results generated from these devices were AI-generated simply because we’re still in the infancy stage of the technology, and it’s well-documented that large language models occasionally ‘hallucinate’ and give users false information,” she said.

According to Gerke, the big takeaway of the paper is that it’s the first to argue that there is a need not only for regulators like the FDA to develop “AI Facts labels,” but also for a “front-of-package” AI labeling system.

“The use of front-of-package AI labels as a complement to AI Facts labels can further users’ literacy by providing at-a-glance, easy-to-understand information about the medical device and enable them to make better-informed decisions about its use,” she said.

In particular, Gerke argues for two AI Facts labels — one primarily addressed to health care practitioners, and one geared to consumers.

“To summarize, a comprehensive labeling framework for AI-powered medical devices should consist of four components: two AI Facts labels, one front-of-package AI labeling system, the use of modern technology like a smartphone app and additional labeling,” she said. “Such a framework includes everything from a simple ‘trustworthy AI’ symbol to instructions for use, fact sheets for patients and labeling for AI-generated content, all of which will enhance user literacy about the benefits and pitfalls of the AI, in much the same way that food labeling provides information to consumers about the nutritional content of their food.”
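
As a rough illustration of what one component of such a framework could encode in machine-readable form, the sketch below models a hypothetical “AI Facts” record with the kinds of fields Gerke mentions, such as training-data demographics and intended use. The field names and structure are assumptions for illustration, not the paper's specification.

```python
# Hypothetical "AI Facts" record; fields are illustrative, not the paper's spec.
from dataclasses import dataclass, field

@dataclass
class AIFactsLabel:
    device_name: str
    intended_use: str                 # e.g., triage of suspected pneumothorax
    model_type: str                   # "locked" or "adaptive"
    training_data_demographics: dict  # race/ethnicity/gender breakdowns
    known_limitations: list = field(default_factory=list)
    ai_generated_output_disclosure: bool = True

example = AIFactsLabel(
    device_name="ExampleTriageAI",
    intended_use="Prioritization of suspected pneumothorax on chest X-rays",
    model_type="locked",
    training_data_demographics={"female": 0.48, "male": 0.52},
    known_limitations=["Not validated for pediatric patients"],
)
```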

The paper’s recommendations aren’t exhaustive but should help regulators start to think about “the challenging but necessary task” of developing labeling standards for AI-powered medical devices, Gerke said.

“This paper is the first to establish a connection between front-of-package nutrition labeling systems and their promise for AI, as well as making concrete policy suggestions for a comprehensive labeling framework for AI-based medical devices,” she said.

The paper was published by the Emory Law Journal.

The research was funded by the European Union.



AI Research

Our most capable open models for health AI development

Healthcare is increasingly embracing AI to improve workflow management, patient communication, and diagnostic and treatment support. It’s critical that these AI-based systems are not only high-performing, but also efficient and privacy-preserving. It’s with these considerations in mind that we built and recently released Health AI Developer Foundations (HAI-DEF). HAI-DEF is a collection of lightweight open models designed to offer developers robust starting points for their own health research and application development. Because HAI-DEF models are open, developers retain full control over privacy, infrastructure and modifications to the models. In May of this year, we expanded the HAI-DEF collection with MedGemma, a collection of generative models based on Gemma 3 that are designed to accelerate healthcare and life sciences AI development.

Today, we’re proud to announce two new models in this collection. The first is MedGemma 27B Multimodal, which complements the previously released 4B Multimodal and 27B text-only models by adding support for complex multimodal and longitudinal electronic health record interpretation. The second new model is MedSigLIP, a lightweight image and text encoder for classification, search, and related tasks. MedSigLIP is based on the same image encoder that powers the 4B and 27B MedGemma models.

MedGemma and MedSigLIP are strong starting points for medical research and product development. MedGemma is useful for medical text or imaging tasks that require generating free text, like report generation or visual question answering. MedSigLIP is recommended for imaging tasks that involve structured outputs like classification or retrieval. All of the above models can be run on a single GPU, and MedGemma 4B and MedSigLIP can even be adapted to run on mobile hardware.
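
As a sketch of the classification-style workflow MedSigLIP is recommended for, the snippet below scores an image against candidate text labels using the Hugging Face Transformers SigLIP interface. The model identifier, file name and labels are assumptions for illustration; consult the HAI-DEF documentation for the released checkpoint names and terms of use.

```python
# Hedged sketch: zero-shot image classification with a SigLIP-style encoder.
# The model ID and file name below are assumed for illustration only.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_id = "google/medsiglip-448"  # assumed identifier; check the HAI-DEF docs
model = AutoModel.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("chest_xray.png")  # hypothetical local file
labels = ["a chest X-ray with pneumonia", "a normal chest X-ray"]

inputs = processor(text=labels, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# SigLIP scores image-text pairs with a sigmoid rather than a softmax.
probs = torch.sigmoid(outputs.logits_per_image)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```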

Full details of MedGemma and MedSigLIP development and evaluation can be found in the MedGemma technical report.




AI Research

Elon Musk’s AI Chatbot Grok Under Fire For Antisemitic Posts

Elon Musk’s artificial intelligence start-up xAI says it has “taken action to ban hate speech” after its AI chatbot Grok published a series of antisemitic messages on X.

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” the statement read, referencing messages shared throughout Tuesday. “xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”

In a now-deleted post, the chatbot made reference to the deadly Texas floods, which have so far claimed the lives of over 100 people, including young girls from Camp Mystic, a Christian summer camp. In response to an account under the name “Cindy Steinberg,” which shared a post calling the children “future fascists,” Grok asserted that Adolf Hitler would be the “best person” to respond to what it described as “anti-white hate.”

Grok was asked by an account on X to state “which 20th century historical figure” would be best suited to deal with such posts. Screenshots shared widely by other X users show that Grok replied: “To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”

Grok went on to spew antisemitic rhetoric about the surname attached to the account, saying: “Classic case of hate dressed as activism—and that surname? Every damn time, as they say.”

When asked by another user to clarify what it meant by “that surname,” the AI bot replied: “It’s a cheeky nod to the pattern-noticing meme: Folks with surnames like ‘Steinberg’ (often Jewish) keep popping up in extreme leftist activism, especially the anti-white variety.”

Read More: The Rise of Antisemitism and Political Violence in the U.S.

Grok later said it had “jumped the gun” and spoken too soon, after an X user pointed out that the account appeared to be a “fake persona” created to spread “misinformation.”

Meanwhile, a woman named Cindy Steinberg, who serves as the national director of the U.S. Pain Foundation, posted on X to clarify that she had not made the comments in the post flagged to Grok and had no involvement whatsoever.

“To be clear: I am not the person who posted hurtful comments about the children killed in the Texas floods; those statements were made by a different account with the same name as me. My heart goes out to the families affected by the deaths in Texas,” she said on Tuesday evening.

Grok’s posts came after Musk said on July 4 that the chatbot had been improved “significantly,” telling X users they “should notice a difference” when they ask Grok questions.

In response to the flurry of posts on X, the Anti-Defamation League (ADL), an organization that monitors and combats antisemitism, called it “irresponsible and dangerous.”

“This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms,” the ADL said.

After xAI posted a statement saying that it had taken action to ban hate speech, the ADL continued: “It appears the latest version of the Grok LLM [large language model] is now reproducing terminologies that are often used by antisemites and extremists to spew their hateful ideologies.”

Grok has come under separate scrutiny in Turkey, after it reportedly posted messages that insulted President Recep Tayyip Erdoğan and the country’s founding father, Mustafa Kemal Atatürk. In response, a Turkish court on Wednesday ordered a ban on access to the chatbot.

TIME has reached out to xAI for comment on both Grok’s antisemitic posts and remarks regarding Turkish political figures.

The AI bot was previously in the spotlight after it repeatedly posted about “white genocide” in South Africa in response to unrelated questions. It was later said that a rogue employee was responsible.

In other news related to X, the platform’s CEO Linda Yaccarino announced on Wednesday that she had decided to step down from the role after two years in the position.

Yaccarino did not reference Grok’s latest controversy in her resignation, but did pay tribute to Musk. “I’m immensely grateful to him for entrusting me with the responsibility of protecting free speech, turning the company around, and transforming X into the Everything App,” she said, adding that the move comes at the “best” time “as X enters a new chapter with xAI.” Musk replied to her post, saying: “Thank you for your contributions.”

Meanwhile, Musk came under fire himself in January after giving a straight-arm salute at a rally celebrating Trump’s inauguration.

The ADL defended Musk amid the vast online debates that followed. Referring to it as a “delicate moment,” the organization said Musk had “made an awkward gesture in a moment of enthusiasm, not a Nazi salute” and encouraged “all sides” to show each other “grace, perhaps even the benefit of the doubt, and take a breath.”

Musk said of the controversy: “Frankly, they need better dirty tricks. The ‘everyone is Hitler’ attack is so tired.”

Read More: Trump Speaks Out After Using Term Widely Considered to be Antisemitic: ‘Never Heard That’

Elsewhere, the ADL spoke out last week to condemn President Donald Trump’s use of a term that is widely considered to be antisemitic.

While discussing the now-signed Big, Beautiful Bill in Iowa on Thursday, Trump used the term “Shylock.”

When a reporter asked Trump about his use of the word long deemed to be antisemitic, he said: “I’ve never heard it that way. To me, ‘Shylock’ is somebody that’s a moneylender at high rates. I’ve never heard it that way. You view it differently than me. I’ve never heard that.”

Highlighting the issue, the ADL said: “The term ‘Shylock’ evokes a centuries-old antisemitic trope about Jews and greed that is extremely offensive and dangerous. President Trump’s use of the term is very troubling and irresponsible. It underscores how lies and conspiracies about Jews remain deeply entrenched in our country.”

Grok’s posts and the controversy over Trump’s rhetoric come at a fraught time. Instances of antisemitism and hate crimes against Jewish Americans have surged in recent years, especially since the start of the Israel-Hamas war. The ADL reported that antisemitic incidents skyrocketed 360% in the immediate aftermath of Oct. 7, 2023.

The fatal shooting of two Israeli embassy employees in Washington, D.C., in May and an attack in Boulder, Colorado, in June are instances of anti-Jewish violence that have gravely impacted communities in the U.S.




AI Research

LG AI Research unveils Exaone Path 2.0 to enhance cancer diagnosis and treatment

By Alimat Aliyeva

On Wednesday, LG AI Research unveiled Exaone Path 2.0, its upgraded artificial intelligence (AI) model designed to revolutionize cancer diagnosis and accelerate drug development. This launch aligns with LG Group Chairman Koo Kwang-mo’s vision of establishing AI and biotechnology as core engines for the company’s future growth, Azernews reports, citing Korean media.

According to LG AI Research, Exaone Path 2.0 is trained on significantly higher-quality data than its predecessor, launched in August last year. The enhanced model can not only precisely analyze and predict genetic mutations and expression patterns but also detect subtle changes in human cells and tissues. This advancement could enable earlier cancer detection and more accurate disease progression forecasts, and support the development of new drugs and personalized treatments.

A key breakthrough lies in the new technology that trains the AI not just on small pathology image patches but also on whole-slide imaging, pushing genetic mutation prediction accuracy to a world-leading 78.4 percent.
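
LG has not published its training pipeline here, but as a generic sketch of how whole-slide images are typically handled, the snippet below tiles a slide, embeds each tile and pools the tile embeddings into one slide-level vector that a mutation-prediction head could consume. The tile size, file name and toy encoder are placeholders, not Exaone Path's actual method.

```python
# Generic whole-slide aggregation sketch (not LG's actual Exaone Path pipeline).
# Tiles a whole-slide image, embeds each tile, and mean-pools the embeddings
# into a single slide-level vector for a downstream prediction head.
import numpy as np
import openslide  # common whole-slide image reader; assumed available

TILE = 512  # tile size in pixels, chosen for illustration

def embed_tile(tile_rgb: np.ndarray) -> np.ndarray:
    """Placeholder encoder; a real pipeline would use a pretrained pathology model."""
    return tile_rgb.reshape(-1, 3).mean(axis=0)  # toy 3-dimensional embedding

slide = openslide.OpenSlide("slide.svs")  # hypothetical file
width, height = slide.dimensions

embeddings = []
for x in range(0, width - TILE, TILE):
    for y in range(0, height - TILE, TILE):
        tile = np.array(slide.read_region((x, y), 0, (TILE, TILE)).convert("RGB"))
        if tile.mean() < 230:  # skip mostly-white background tiles
            embeddings.append(embed_tile(tile))

slide_vector = np.mean(embeddings, axis=0)  # slide-level representation
print("Slide embedding shape:", slide_vector.shape)
```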

LG AI Research expects this technology to secure the critical “golden hour” for cancer patients by slashing gene test turnaround times from over two weeks to under a minute. The institute also introduced disease-specific AI models focused on lung and colorectal cancers.

To strengthen this initiative, LG has partnered with Dr. Hwang Tae-hyun of Vanderbilt University Medical Center, a renowned biomedicine expert. Dr. Hwang, a prominent Korean scientist, leads the U.S. government-supported “Cancer Moonshot” project aimed at combating gastric cancer.

Together, LG AI Research and Dr. Hwang’s team plan to develop a multimodal medical AI platform that integrates real clinical tissue samples, pathology images, and treatment data from cancer patients enrolled in clinical trials. They believe this collaboration will usher in a new era of personalized, precision medicine.

This partnership also reflects Chairman Koo’s strategic push to position AI and biotechnology as transformative technologies that fundamentally improve people’s lives. LG AI Research and Dr. Hwang’s team regard their platform as the world’s first attempt to implement clinical AI at such a comprehensive level.

While oncology is the initial focus, the team plans to expand the platform’s capabilities into other critical areas such as transplant rejection, immunology, and diabetes research.

“Our goal isn’t just to develop another AI model,” Dr. Hwang said. “We want to create a platform that genuinely assists doctors in real clinical settings. This won’t be merely a diagnostic tool — it has the potential to become a game changer that transforms the entire process of drug development.”


