
AI Research

Doctors Horrified After Google’s Healthcare AI Makes Up a Body Part That Does Not Exist in Humans


Image by Getty / Futurism

Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools.

The proliferation of the tech has repeatedly been hampered by rampant “hallucinations,” a euphemistic term for the bots’ made-up facts and convincingly told lies.

One glaring error proved so persuasive that it took over a year to be caught. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions.

It identified an “old left basilar ganglia infarct,” referring to a purported part of the brain — “basilar ganglia” — that simply doesn’t exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself.

The AI likely conflated the basal ganglia, an area of the brain that’s associated with motor movements and habit formation, with the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of “basal ganglia.”

It’s an embarrassing reveal that underscores the tech’s persistent and consequential shortcomings. Even the latest “reasoning” AIs by the likes of Google and OpenAI are spreading falsehoods dreamed up by large language models that are trained on vast swathes of the internet.

In Google’s search results, this can lead to headaches for users during their research and fact-checking efforts.

But in a hospital setting, those kinds of slip-ups could have devastating consequences. While Google’s faux pas more than likely didn’t result in any danger to human patients, it sets a worrying precedent, experts argue.

“What you’re talking about is super dangerous,” healthcare system Providence’s chief medical information officer Maulin Shah told The Verge. “Two letters, but it’s a big deal.”

Last year, Google touted its healthcare AI as having “substantial potential in medicine,” arguing it could be used to identify conditions in X-rays, CT scans, and more.

After Moore flagged the mistake in the company’s research paper to Google, employees told him it was a typo. In its updated blog post, Google noted that “‘basilar’ is a common mis-transcription of ‘basal’ that Med-Gemini has learned from the training data, though the meaning of the report is unchanged.”

Yet the research paper still erroneously refers to the “basilar ganglia” at the time of writing.

In a medical context, AI hallucinations could easily lead to confusion and potentially even put lives at risk.

“The problem with these typos or other hallucinations is I don’t trust our humans to review them, or certainly not at every level,” Shah told The Verge.

It’s not just Med-Gemini. Google’s more advanced healthcare model, dubbed MedGemma, also gave varying answers depending on how questions were phrased, sometimes resulting in errors.

“Their nature is that [they] tend to make up things, and it doesn’t say ‘I don’t know,’ which is a big, big problem for high-stakes domains like medicine,” Judy Gichoya, Emory University associate professor of radiology and informatics, told The Verge.

Other experts say we’re rushing to adopt AI in clinical settings — from AI therapists, radiologists, and nurses to patient transcription services — and that a far more careful approach is warranted.

In the meantime, it will be up to humans to continuously monitor the outputs of hallucinating AIs, which could, counterproductively, introduce new inefficiencies.

And Google is going full steam ahead. In March, Google revealed that its extremely error-prone AI Overviews search feature would start giving health advice. It also introduced an “AI co-scientist” that would assist human scientists in discovering new drugs, among other “superpowers.”

But if their outputs go unobserved and unverified, human lives could be at stake.

“In my mind, AI has to have a way higher bar of error than a human,” Shah told The Verge. “Maybe other people are like, ‘If we can get as high as a human, we’re good enough.’ I don’t buy that for a second.”

More on health AI: AI Therapist Goes Haywire, Urges User to Go on Killing Spree




AI Research

Microsoft and OpenAI reach agreement on… something – Computerworld



“OpenAI’s decision to recast its for-profit arm as a public benefit corporation while keeping control in the hands of its nonprofit parent is without precedent at this scale,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “It is a structure that fuses two competing logics: the relentless need to raise capital for models that cost billions to train, and the equally strong pressure from regulators, investors, and the public to demonstrate accountability.”

The proposed structure enables OpenAI to attract traditional investors who expect potential returns, while the nonprofit parent can ensure safety considerations aren’t sacrificed for profit. However, Gogia warned the hybrid model is “as much a gamble as it is an innovation,” noting concerns about “who carries liability if something goes wrong, whether fiduciary duty to investors will override social commitments when revenues are threatened.”

Charlie Dai, VP and principal analyst at Forrester, said the structure “could influence others because it balances capital access with mission-driven oversight” but warned that “regulatory scrutiny, lawsuits, and governance complexity introduce uncertainty around decision-making speed and long-term stability.”




AI Research

Are British MPs using ChatGPT to write speeches for them?



There’s constant worry about AI being overused, particularly tools like ChatGPT being used to churn out “AI slop.” And now it seems that British MPs may be increasingly guilty of this.

An investigation of the parliamentary records by Pimlico Journal (via The Telegraph) shows a rise in the use of common AI phrasing by MPs in the speeches they’re delivering in Parliament.






AI Research

How Artificial Intelligence Is Revolutionizing Workplace Ergonomics and Safety



Ergonomic assessments can help organizations create safer work environments. The use of artificial intelligence in this field is poised to create even better results, if managed properly.

Workplace injuries continue to impose significant costs on businesses, with ergonomic-related injuries representing a substantial portion of workers’ compensation claims. As companies seek more efficient ways to protect their workforce, artificial intelligence is emerging as a transformative tool that can accelerate assessments while maintaining the human expertise that makes ergonomic programs effective.

“As an ergonomist, doing job task assessments took a long time. We saw AI as a tool that could help us speed up the back end, specifically the measuring and quantifying risk of awkward postures and forceful exertions,” said Michael White, Managing Director of Ergonomics at The Hartford.

“I think any ergonomist would agree that using these technologies now makes our lives way more efficient to screen for and visualize ergonomics-related exposures, and allows us to focus on what is important: helping customers or organizations improve work environments to make people healthier and less susceptible to injury.”

White’s team has integrated various AI tools into their assessment processes, discovering both significant opportunities and important limitations along the way.

The integration of artificial intelligence into ergonomic assessments varies significantly depending on the work environment. White explained that The Hartford takes different approaches for office versus industrial settings, with each presenting unique opportunities for AI implementation.

“At The Hartford, we’re working on different approaches for office ergonomics versus industrial ergonomics. We’ve focused more on the industrial space since that’s where the higher costs and exposures typically occur. However, for office environments, AI can be particularly effective,” White said.

Office environments offer more controlled conditions that make them ideal candidates for AI-powered assessments. Most office workstations feature adjustable chairs and standardized equipment, creating consistent parameters that AI systems can more easily evaluate.

“Office settings are more controllable compared to factories, warehouses, or distribution centers because most workstations have adjustable chairs and standardized equipment. We’re currently developing an AI-based office assessment tool with one of our vendors that captures images of the workspace,” White said.

This office assessment tool guides users through a series of questions to identify the positioning of keyboards, mice, monitors, and chair height. The AI then analyzes workspace photos to provide specific ergonomic recommendations, such as lowering monitors, raising chair height, or adjusting desk elevation.

“The goal is to create a hands-off approach where AI and machine learning can handle the assessment process independently, which is currently in development for office environments,” White said.

Carriers Crave Efficiency

For insurance companies like The Hartford, AI offers particular efficiency advantages. With hundreds of customers interested in ergonomic services and three to four ergonomists on staff, reaching every client through traditional on-site visits would be impossible.


“Virtual interactions have been a great way to be efficient, allowing us to reach as many people as possible. We can send a link to a customer, who then takes a couple of videos. These are sent back to our AI platform, enabling us to work with customers remotely without being on-site,” White said.

While AI offers significant efficiency gains, White emphasized that human oversight remains essential for effective ergonomic programs. Current AI technology cannot replace the critical thinking and problem-solving capabilities that experienced ergonomists bring to workplace assessments.

“You definitely cannot put 100% trust in AI at this point, and you need a second set of eyes or even a third set of eyes,” White said.

AI performs best in controlled, predictable environments where patterns repeat consistently. White noted that shipping and receiving areas represent ideal applications for AI-powered assessments because they involve repeatable processes that AI systems can learn to evaluate effectively.

“In more controlled environments, such as a shipping receiving area, AI can be useful. Most companies have a loading dock where products are boxed and loaded onto trucks for shipping—a very repeatable process. For scenarios like this, AI could be a valuable tool because it sees the same patterns repeatedly and can generate relevant solutions or recommendations,” White said.

However, in unpredictable or highly variable work environments, human expertise becomes even more critical. Professional ergonomists must review AI outputs and provide on-site or virtual guidance for complex situations that fall outside typical patterns.

Perhaps most importantly, AI cannot replicate the human connection that drives successful ergonomic programs. White emphasized that employee feedback and buy-in remain crucial elements that technology cannot replace.

“A good consultant always considers the person doing the job. In many cases, if I’m on-site, I will physically do the job that a person is doing to understand and relate to them,” White said. “We always seek to ask those folks doing the job for solutions because if you’re doing something day in and day out, you likely have ideas about what would make this easier on your body.”

“That’s where I see a gap with AI that may never be filled—that person-to-person interaction. I think this direct human connection is very critical in what we do, both in getting stakeholder buy-in to make workplace improvements and more simply relating to the average worker’s day-to-day,” White said.

Expanding Safety Impact Beyond Ergonomics

The computer vision technology that powers ergonomic assessments can identify a much broader range of workplace safety issues, making AI a valuable tool for comprehensive safety programs. Companies with existing CCTV systems can leverage this infrastructure to monitor various hazards beyond ergonomic risks.

“The AI tools we’re using, particularly computer vision tools, aren’t just specific to ergonomics; they tie into broader safety programs. If a company has CCTV cameras monitoring their shop floor or warehouse, which most do now, the technology can identify various safety issues beyond ergonomics,” White said.

These systems can detect forklift incidents, slip-trip-falls, and other dangerous behaviors that might otherwise go unnoticed until an accident occurs. White’s team has worked with companies whose AI systems identified employees performing unsafe behaviors such as doing donuts in forklifts, climbing on energized machinery, ducking under prohibited conveyor systems, and coming into contact with hot electrical components.

“There’s a wide range of safety elements that AI can help monitor and improve, with ergonomics being just one piece of that broader safety picture,” White said.

Wearable technology adds another layer of real-time safety coaching through haptic feedback systems. These devices can provide immediate alerts when workers assume potentially harmful postures.

“An interesting benefit we’ve seen with wearables, based on our own anecdotal evidence, is postural improvement through haptic feedback. When someone wears a sensor like a belt clip and bends too far, the device vibrates to alert them they’re bending improperly and should consider lifting differently,” White said.

The coaching capability of these devices creates autonomous safety guidance without requiring constant human supervision. White’s team has documented measurable improvements in lifting behaviors after implementing wearables with haptic feedback features.

“We’ve seen pre- and post-implementation results showing that when the belt clip vibrates, users don’t bend at the waist as far. It’s coaching them without having someone there telling them all the time. This autonomous coaching capability makes the technology particularly valuable,” White said.

The personalization capabilities of AI represent perhaps its greatest long-term potential for workplace safety. Rather than applying one-size-fits-all approaches, AI can identify individual workers who may be at higher risk and provide targeted interventions.

“I think it’s really just allowing more personalization. It goes away from the one-size-fits-all approach, which is still needed,” White said. “Traditional safety elements remain essential—everyone needs their steel toes, safety glasses, height-adjustable chairs and surfaces, and safe lift training. But AI is just harnessing data to help safety professionals make better decisions.”

“This technology might allow you to hone in on a specific worker’s risk profile and provide a coaching opportunity to say, ‘I noticed your assembly station is a little too low. Did you know your desk was adjustable? Let’s raise this up so that you’re not hunching forward and extending your elbows,’” White said.

As AI continues to evolve, White expects increasing adoption across industries, with policies likely emerging to address privacy concerns while maximizing safety benefits. Some of The Hartford’s customers have already purchased AI safety systems for independent use after successful pilots, demonstrating growing confidence in the technology.

“We’ve had some customers who, after piloting these technologies with us, have independently purchased them for in-house use. This gives us better opportunities to collaborate with them and helps us manage our time more effectively. Empowering our customers with tools to better manage risk in-house not only benefits the customer and The Hartford, but society also benefits with a healthier workforce,” White said.


