
AI Research

FDA needs to develop labeling standards for AI-powered medical devices


CHAMPAIGN, Ill. — Medical devices that harness the power of artificial intelligence or machine learning algorithms are rapidly transforming health care in the U.S., with the Food and Drug Administration already having authorized the marketing of more than 1,000 such devices and many more in the development pipeline. A new paper from a University of Illinois Urbana-Champaign expert in the ethical and legal challenges of AI and big data for health care argues that the regulatory framework for AI-based medical devices needs to be improved to ensure transparency and protect patients’ health.

Sara Gerke, the Richard W. & Marie L. Corman Scholar at the College of Law, says the FDA must prioritize developing labeling standards for AI-powered medical devices, much as nutrition facts labels are required on packaged food.

“The current lack of labeling standards for AI- or machine learning-based medical devices is an obstacle to transparency in that it prevents users from receiving essential information about the devices and their safe use, such as the race, ethnicity and gender breakdowns of the training data that was used,” she said. “One potential remedy is that the FDA can learn a valuable lesson from food nutrition labeling and apply it to the development of labeling standards for medical devices augmented by AI.”

The push for increased transparency around AI-based medical devices is complicated not only by different regulatory issues surrounding AI but also by what constitutes a medical device in the eyes of the U.S. government.

If something is considered a medical device, “then the FDA has the power to regulate that tool,” Gerke said.

“The FDA has the authority from Congress to regulate medical products such as drugs, biologics and medical devices,” she said. “With some exceptions, a product powered by AI or machine learning and intended for use in the diagnosis of disease — or in the cure, mitigation, treatment or prevention of disease — is classified as a medical device under the Federal Food, Drug, and Cosmetic Act. That way, the FDA can assess the safety and effectiveness of the device.”

If you tested a drug in a clinical trial, “you would have a high degree of confidence that it is safe and effective,” she said.

But there are almost no clinical trials for AI tools in the U.S., Gerke noted.

“Many AI-powered medical devices are based on deep learning, a subset of machine learning, and are essentially ‘black boxes.’ The reasoning behind why the tool made a particular recommendation, prediction or decision is hard, if not impossible, for humans to understand,” she said. “The algorithms can be adaptive if they are not locked and can thus be much more unpredictable in practice than a drug that’s been put through rigorous tests and clinical trials.”
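The distinction between locked and adaptive algorithms can be shown with a minimal sketch (a toy illustration, not drawn from any FDA-authorized device): a locked model is frozen after validation, while an adaptive model keeps updating its parameters on new patient data, so its behavior can drift in deployment.

```python
# Toy illustration of "locked" vs. "adaptive" models.
# A locked model is frozen after validation; an adaptive model
# keeps learning from new data, so its predictions can drift.

class RunningMeanClassifier:
    """Toy model: predicts positive if a value exceeds a learned mean."""

    def __init__(self):
        self.mean = 0.0
        self.count = 0
        self.locked = False

    def update(self, value):
        # A locked model ignores post-deployment data.
        if self.locked:
            return
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def predict(self, value):
        return value > self.mean


# Both models are "trained" on the same initial data.
locked = RunningMeanClassifier()
adaptive = RunningMeanClassifier()
for v in [1.0, 2.0, 3.0]:
    locked.update(v)
    adaptive.update(v)
locked.locked = True  # freeze before deployment

# In deployment, new data shifts only the adaptive model.
for v in [10.0, 12.0, 14.0]:
    locked.update(v)    # ignored: the model is locked
    adaptive.update(v)  # the learned mean drifts upward

print(locked.predict(4.0))    # True: 4.0 > 2.0 (frozen mean)
print(adaptive.predict(4.0))  # False: the mean has drifted to 7.0
```

The two models start identical, yet after deployment they disagree on the same input, which is exactly the unpredictability that complicates one-time premarket review.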

It’s also difficult to assess a new technology’s reliability and efficacy once it’s been implemented in a hospital, Gerke said.

“Normally, you would need to revalidate the tool before deploying it in a hospital because it also depends on the patient population and other factors. So it’s much more complex than just plugging it in and using it on patients,” she said.

Although the FDA has yet to permit the marketing of a generative AI model similar to ChatGPT, it’s almost certain that such a device will eventually be released. When that happens, there will need to be disclosures to both health care practitioners and patients that the outputs are AI-generated, said Gerke, who is also a professor at the European Union Center at Illinois.

“It needs to be clear to practitioners and patients that the results generated from these devices were AI-generated simply because we’re still in the infancy stage of the technology, and it’s well-documented that large language models occasionally ‘hallucinate’ and give users false information,” she said.

According to Gerke, the big takeaway of the paper is that it’s the first to argue that there is a need not only for regulators like the FDA to develop “AI Facts labels,” but also for a “front-of-package” AI labeling system.

“The use of front-of-package AI labels as a complement to AI Facts labels can further users’ literacy by providing at-a-glance, easy-to-understand information about the medical device and enable them to make better-informed decisions about its use,” she said.

In particular, Gerke argues for two AI Facts labels — one primarily addressed to health care practitioners, and one geared to consumers.

“To summarize, a comprehensive labeling framework for AI-powered medical devices should consist of four components: two AI Facts labels, one front-of-package AI labeling system, the use of modern technology like a smartphone app and additional labeling,” she said. “Such a framework includes everything from a simple ‘trustworthy AI’ symbol to instructions for use, fact sheets for patients and labeling for AI-generated content. All of this will enhance user literacy about the benefits and pitfalls of the AI, in much the same way that food labeling provides information to consumers about the nutritional content of their food.”
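As a rough illustration, such a practitioner-facing AI Facts label could be modeled as a structured record capturing the kinds of disclosures the article mentions, such as training-data demographics and whether the algorithm is locked. The field names below are hypothetical; they are not taken from Gerke’s paper or from any FDA standard.

```python
# Hypothetical sketch of an "AI Facts label" as a structured record.
# Field names are illustrative only, not from any FDA labeling standard.
from dataclasses import dataclass, field


@dataclass
class AIFactsLabel:
    device_name: str
    intended_use: str
    locked_algorithm: bool            # False = adaptive, may change after deployment
    training_data_demographics: dict  # e.g., race/ethnicity/gender breakdowns
    known_limitations: list = field(default_factory=list)

    def summary(self):
        """One-line, at-a-glance summary (front-of-package style)."""
        kind = "locked" if self.locked_algorithm else "adaptive"
        return (f"{self.device_name}: {kind} algorithm, "
                f"{len(self.known_limitations)} known limitation(s)")


# Example with entirely made-up values for a hypothetical device.
label = AIFactsLabel(
    device_name="ExampleDx",
    intended_use="Flag suspected diabetic retinopathy in retinal images",
    locked_algorithm=True,
    training_data_demographics={"female": 0.52, "male": 0.48},
    known_limitations=["Not validated for patients under 18"],
)
print(label.summary())  # ExampleDx: locked algorithm, 1 known limitation(s)
```

The `summary()` method plays the role of the front-of-package label: a single line a busy clinician can scan, backed by the fuller record underneath.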

The paper’s recommendations aren’t exhaustive but should help regulators start to think about “the challenging but necessary task” of developing labeling standards for AI-powered medical devices, Gerke said.

“This paper is the first to establish a connection between front-of-package nutrition labeling systems and their promise for AI, as well as making concrete policy suggestions for a comprehensive labeling framework for AI-based medical devices,” she said.

The paper was published in the Emory Law Journal.

The research was funded by the European Union.




E-research library with AI tools to assist lawyers


New Delhi: In an attempt to integrate legal work in courts with artificial intelligence, the Bar Council of Delhi (BCD) has opened a one-of-its-kind e-research library at the Rouse Avenue courts. Inaugurated on July 5 by law minister Kapil Mishra, the library offers various software tools to assist lawyers in their legal work. Set up with initial funding of Rs 20 lakh, the library may also be expanded so that it can be accessed from anywhere, BCD functionaries told TOI.

Named after former BCD chairman BS Sherawat, the library boasts an integrated system spanning 15 desktops, including the legal research platform SCC Online, the legal research online database Manupatra, an AI platform, Lucio, and several e-books on law.

Advocate Neeraj, president of the Central Delhi Court Bar Association, told TOI, “The vision behind this initiative is to help law practitioners in their research. Lawyers are officers of the honourable court who assist the judicial officer in reaching a verdict in cases. This library will help lawyers in their legal work. Keeping that in mind, and considering a request by our association, BCD provided us with funds and resources.”

The library, which runs from 9:30 am to 5:30 pm, aims to use evolving technology to allow access from anywhere in the country. “We are thinking along those lines too. It will be good if a lawyer who needs some research on a point of law can access the AI tools from anywhere; she will be able to upgrade herself immediately to assist the court and present her case more efficiently,” added Neeraj.

Staffed with one technical person and a superintendent, the facility will cost around Rs 1 lakh per month to remain functional. With pendency in Delhi district courts now running over 15.3 lakh cases, AI tools can help law practitioners as well as the courts.
Advocate Vikas Tripathi, vice-president of the Central Delhi Court Bar Association, said, “Imagine AI tools which can give you relevant references, cite related judgments, and even prepare a case if provided with proper inputs. The AI tools have immense potential.”

In July 2024, ‘Adalat AI’ was inaugurated in Delhi’s district courts. This AI-driven speech recognition software is designed to assist court stenographers by transcribing witness examinations and orders dictated by judges, streamlining courtroom workflow. The tool automates many processes: a judicial officer logs in, presses a few buttons, and speaks out their observations, which are automatically transcribed, including the legal language. The order is prepared automatically.

The then Delhi High Court Chief Justice, now Supreme Court Judge Manmohan, said, “The biggest problem I see judges facing is that there is a large demand for stenographers, but there’s not a large pool available. I think this app will solve that problem to a large extent. It will ensure that a large pool of stenographers will become available for other purposes.” At present, the application is being used in at least eight states, including Kerala, Karnataka, Andhra Pradesh, Delhi, Bihar, Odisha, Haryana and Punjab.






Enterprises will strengthen networks to take on AI, survey finds


Survey respondents reported running their AI workloads across the following environments:

  • Private data centers: 29.5%
  • Traditional public cloud: 35.4%
  • GPU as a service specialists: 18.5%
  • Edge compute: 16.6%

“There is little variation from training to inference, but the general pattern is workloads are concentrated a bit in traditional public cloud and then hyperscalers have significant presence in private data centers,” McGillicuddy explained. “There is emerging interest around deploying AI workloads at the corporate edge and edge compute environments as well, which allows them to have workloads residing closer to edge data in the enterprise, which helps them combat latency issues and things like that. The big key takeaway here is that the typical enterprise is going to need to make sure that its data center network is ready to support AI workloads.”

AI networking challenges

The popularity of AI doesn’t remove some of the business and technical concerns that the technology brings to enterprise leaders.

According to the EMA survey, business concerns include security risk (39%), cost/budget (33%), rapid technology evolution (33%), and networking team skills gaps (29%). Respondents also indicated several concerns around both data center networking issues and WAN issues. Concerns related to data center networking included:

  • Integration between AI network and legacy networks: 43%
  • Bandwidth demand: 41%
  • Coordinating traffic flows of synchronized AI workloads: 38%
  • Latency: 36%

WAN issues respondents shared included:

  • Complexity of workload distribution across sites: 42%
  • Latency between workloads and data at WAN edge: 39%
  • Complexity of traffic prioritization: 36%
  • Network congestion: 33%

“It’s really not cheap to make your network AI ready,” McGillicuddy stated. “You might need to invest in a lot of new switches and you might need to upgrade your WAN or switch vendors. You might need to make some changes to your underlay around what kind of connectivity your AI traffic is going over.”

Enterprise leaders intend to invest in infrastructure to support their AI workloads and strategies. According to EMA, planned infrastructure investments include high-speed Ethernet (800 GbE) for 75% of respondents, hyperconverged infrastructure for 56% of those polled, and SmartNICs/DPUs for 45% of surveyed network professionals.




Amazon Web Services builds heat exchanger to cool Nvidia GPUs for AI



Amazon said Wednesday that its cloud division has developed hardware to cool down next-generation Nvidia graphics processing units that are used for artificial intelligence workloads.

Nvidia’s GPUs, which have powered the generative AI boom, require massive amounts of energy. That means companies using the processors need additional equipment to cool them down.

Amazon considered erecting data centers that could accommodate widespread liquid cooling to make the most of these power-hungry Nvidia GPUs. But that process would have taken too long, and commercially available equipment wouldn’t have worked, Dave Brown, vice president of compute and machine learning services at Amazon Web Services, said in a video posted to YouTube.

“They would take up too much data center floor space or increase water usage substantially,” Brown said. “And while some of these solutions could work for lower volumes at other providers, there simply wouldn’t be enough liquid-cooling capacity to support our scale.”

Instead, Amazon engineers developed the In-Row Heat Exchanger, or IRHX, which can be plugged into both existing and new data centers. More traditional air cooling was sufficient for previous generations of Nvidia chips.

Customers can now access the AWS service as computing instances that go by the name P6e, Brown wrote in a blog post. The new systems accommodate Nvidia’s design for dense computing power: Nvidia’s GB200 NVL72 packs a single rack with 72 Nvidia Blackwell GPUs that are wired together to train and run large AI models.

Computing clusters based on Nvidia’s GB200 NVL72 have previously been available through Microsoft or CoreWeave. AWS is the world’s largest supplier of cloud infrastructure.

Amazon has rolled out its own infrastructure hardware in the past. The company has custom chips for general-purpose computing and for AI, and designed its own storage servers and networking routers. In running homegrown hardware, Amazon depends less on third-party suppliers, which can benefit the company’s bottom line. In the first quarter, AWS delivered the widest operating margin since at least 2014, and the unit is responsible for most of Amazon’s net income.

Microsoft, the second-largest cloud provider, has followed Amazon’s lead and made strides in chip development. In 2023, it designed its own systems, called Sidekicks, to cool the Maia AI chips it developed.
