AI Research
Google’s open MedGemma AI models could transform healthcare
Instead of keeping its new MedGemma AI models locked behind expensive APIs, Google is handing these powerful tools to healthcare developers.
The new arrivals are called MedGemma 27B Multimodal and MedSigLIP, and they're part of Google's growing collection of open-source healthcare AI models. What makes these special isn't just their technical prowess, but the fact that hospitals, researchers, and developers can download them, modify them, and run them however they see fit.
Google’s AI meets real healthcare
The flagship MedGemma 27B model doesn’t just read medical text like previous versions did; it can actually “look” at medical images and understand what it’s seeing. Whether it’s chest X-rays, pathology slides, or patient records potentially spanning months or years, it can process all of this information together, much like a doctor would.
The performance figures are quite impressive. When tested on MedQA, a standard medical knowledge benchmark, the 27B text model scored 87.7%. That puts it within spitting distance of much larger, more expensive models whilst costing about a tenth as much to run. For cash-strapped healthcare systems, that’s potentially transformative.
The smaller sibling, MedGemma 4B, might be more modest in size but it's no slouch. Despite being tiny by modern AI standards, it scored 64.4% on the same tests, making it one of the best performers in its weight class. More importantly, when US board-certified radiologists reviewed chest X-ray reports it had written, they deemed 81% of them accurate enough to guide actual patient care.
MedSigLIP: A featherweight powerhouse
Alongside these generative AI models, Google has released MedSigLIP. At just 400 million parameters, it’s practically featherweight compared to today’s AI giants, but it’s been specifically trained to understand medical images in ways that general-purpose models cannot.
This little powerhouse has been fed a diet of chest X-rays, tissue samples, skin condition photos, and eye scans. The result? It can spot patterns and features that matter in medical contexts whilst still handling everyday images perfectly well.
MedSigLIP creates a bridge between images and text. Show it a chest X-ray and ask it to find similar cases in a database, and it'll understand not just visual similarities but medical significance too.
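For developers wondering what that bridge looks like in code, here is a minimal sketch of zero-shot matching between a chest X-ray and candidate text descriptions using a SigLIP-style encoder through Hugging Face Transformers. The checkpoint name, file path, and prompt texts are illustrative assumptions rather than Google's documented recipe.

```python
# Minimal sketch: score a chest X-ray against candidate text descriptions with
# a SigLIP-style image-text encoder. Checkpoint ID, file path, and prompts are
# illustrative assumptions; consult the official model card before relying on it.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_id = "google/medsiglip-448"  # assumed Hugging Face checkpoint name
model = AutoModel.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("chest_xray.png").convert("RGB")  # hypothetical local file
texts = [
    "a chest X-ray showing pleural effusion",
    "a chest X-ray with no acute findings",
]

inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# SigLIP scores image-text pairs with a sigmoid; a higher score means a closer match.
scores = torch.sigmoid(outputs.logits_per_image)[0]
for text, score in zip(texts, scores):
    print(f"{score:.3f}  {text}")
```

The same image embeddings can be stored and compared directly, which is how a "find similar cases in a database" workflow would typically be built.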
Healthcare professionals are putting Google’s AI models to work
The proof of any AI tool lies in whether real professionals actually want to use it. Early reports suggest doctors and healthcare companies are excited about what these models can do.
DeepHealth in Massachusetts has been testing MedSigLIP for chest X-ray analysis. They’re finding it helps spot potential problems that might otherwise be missed, acting as a safety net for overworked radiologists. Meanwhile, at Chang Gung Memorial Hospital in Taiwan, researchers have discovered that MedGemma works with traditional Chinese medical texts and answers staff questions with high accuracy.
Tap Health in India has highlighted something crucial about MedGemma’s reliability. Unlike general-purpose AI that might hallucinate medical facts, MedGemma seems to understand when clinical context matters. It’s the difference between a chatbot that sounds medical and one that actually thinks medically.
Why open-sourcing the AI models is critical in healthcare
Beyond generosity, Google's decision to open up these models is also strategic. Healthcare has unique requirements that standard AI services can't always meet. Hospitals need to know their patient data isn't leaving their premises. Research institutions need models that won't suddenly change behaviour without warning. Developers need the freedom to fine-tune for very specific medical tasks.
By open-sourcing the AI models, Google has addressed the concerns that come with healthcare deployments. A hospital can run MedGemma on its own servers, modify it for its specific needs, and trust that it'll behave consistently over time. For medical applications where reproducibility is crucial, that stability is invaluable.
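As a rough illustration of what an on-premises deployment might look like, here is a hedged sketch that loads a MedGemma instruction-tuned checkpoint locally with the Hugging Face Transformers pipeline. The model ID, prompt, and generation settings are assumptions for illustration, not Google's prescribed setup, and any real use would still need the validation and oversight described below.

```python
# Rough sketch: running a MedGemma instruction-tuned checkpoint on local hardware
# with the Hugging Face Transformers text-generation pipeline. Model ID, prompt,
# and settings are illustrative assumptions; outputs still require clinical review.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",  # assumed checkpoint name
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread the model across locally available GPUs
)

messages = [
    {"role": "user", "content": "Summarise the key findings in this radiology report: ..."},
]

result = pipe(messages, max_new_tokens=256)
# The pipeline appends the model's reply to the conversation; print just that turn.
print(result[0]["generated_text"][-1]["content"])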
However, Google has been careful to emphasise that these models aren’t ready to replace doctors. They’re tools that require human oversight, clinical correlation, and proper validation before any real-world deployment. The outputs need checking, the recommendations need verifying, and the decisions still rest with qualified medical professionals.
This cautious approach makes sense. Even with impressive benchmark scores, medical AI can still make mistakes, particularly when dealing with unusual cases or edge scenarios. The models excel at processing information and spotting patterns, but they can’t replace the judgment, experience, and ethical responsibility that human doctors bring.
What’s exciting about this release isn’t just the immediate capabilities, but what it enables. Smaller hospitals that couldn’t afford expensive AI services can now access cutting-edge technology. Researchers in developing countries can build specialised tools for local health challenges. Medical schools can teach students using AI that actually understands medicine.
The models are designed to run on single graphics cards, with the smaller versions even adaptable for mobile devices. This accessibility opens doors for point-of-care AI applications in places where high-end computing infrastructure simply doesn’t exist.
As healthcare continues grappling with staff shortages, increasing patient loads, and the need for more efficient workflows, AI tools like Google’s MedGemma could provide some much-needed relief. Not by replacing human expertise, but by amplifying it and making it more accessible where it’s needed most.
(Photo by Owen Beard)
AI Research
Digital Agency Fuel Online Launches AI SEO Research Division, Cementing Its Position at the Forefront of SGE Optimization
Boston, MA – As Google continues to reshape the digital landscape with its Search Generative Experience (SGE) and AI-powered search results, Fuel Online [https://fuelonline.com/] is blazing a trail as the nation’s leading agency in AI SEO [https://fuelonline.com/] and SGE optimization [https://fuelonline.com/].
Recognizing the urgent need for businesses to adapt to AI-first search engines, Fuel Online has launched a dedicated AI SEO Research & Development Division focused exclusively on decoding how AI models like Google SGE read, rank, and render web content. The division’s mission: to test, reverse-engineer, and deploy cutting-edge strategies that future-proof clients’ visibility in an era of AI-generated search answers.
“AI is not the future of SEO – it’s the present. If your content doesn’t rank in SGE, it may never be seen. That’s why we’re investing heavily in understanding and optimizing for how large language models surface content,” said Scott Levy, CEO of Fuel Online Digital Marketing Agency [https://fuelonline.com/].
Fuel Online’s Digital Marketing team is already helping Fortune 500 brands, high-growth startups, and ecommerce leaders gain traction in AI-powered results using proprietary tactics including:
* NLP entity linking & semantic schema
* SGE-optimized content blocks & voice search targeting
* AI-readiness audits tailored for Google’s evolving ranking models
As detailed in their comprehensive Google SGE & AI Optimization Guide [https://fuelonline.com/insights/google-sge-and-ai-optimization-guide-how-to-optimize/], Fuel Online offers strategic insight into aligning websites with Google’s new generative layer. The agency also provides live testing environments, allowing clients to see firsthand how AI engines interpret their content.
Why This Matters: According to industry data, click-through rates have dropped by up to 60% on some keywords since the rollout of SGE, as users get direct AI-generated answers instead of traditional blue links. Fuel Online’s AI SEO division helps clients reclaim that lost visibility and win placement inside AI search results.
With over two decades of award-winning digital strategy under its belt and a reputation as one of the top digital marketing agencies in the U.S., Fuel Online is once again setting the standard – this time for the AI optimization era.
Media Contact
Company Name: Fuel Online
Contact Person: Media Relation Management
Email: Send Email [https://www.abnewswire.com/email_contact_us.php?pr=digital-agency-fuel-online-launches-ai-seo-research-division-cementing-its-position-at-the-forefront-of-sge-optimization]
Phone: (888)-475-2552
City: Boston
State: MA
Country: United States
Website: https://fuelonline.com
Legal Disclaimer: Information contained on this page is provided by an independent third-party content provider. ABNewswire makes no warranties or responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you are affiliated with this article or have any complaints or copyright issues related to this article and would like it to be removed, please contact retract@swscontact.com
AI Research
Investigating the use and dangers of artificial intelligence in Jacksonville policing
A Lee County man was wrongfully arrested last year after AI facial recognition technology used by the Jacksonville Sheriff’s Office got it wrong. Experts are now warning about the potential dangers of the technology.
The Jacksonville Beach Police Department said 51-year-old Robert Dillon allegedly tried luring a 12-year-old child in Jacksonville Beach back in November of 2023. According to a police report, Dillon was linked to a suspect caught on surveillance video in a Jacksonville Beach McDonald’s through the use of the Jacksonville Sheriff’s Office’s AI facial recognition technology.
Jacksonville Beach PD conferred with JSO, according to the report, and the facial recognition technology returned a 93% match between Dillon and the suspect. The report says police then provided a photo spread of Dillon and other similar-looking individuals to two witnesses. Both identified Dillon as the suspect.
However, the case would later be completely dropped. The state attorney’s office told Action News Jax the arrest will be wiped from Mr. Dillon’s record.
“Police are not allowed under the Constitution to arrest somebody without probable cause,” Nate Freed-Wessler with the American Civil Liberties Union would later tell Action News Jax. “And this technology expressly cannot provide probable cause, it is so glitchy, it’s so unreliable. At best, it has to be viewed as an extremely unreliable lead because it often, often gets it wrong.”
Freed-Wessler is the deputy director for the ACLU’s Speech, Privacy, and Technology Project. He was also part of the legal team that helped sue on behalf of Robert Williams, a Detroit man wrongfully arrested because of facial recognition technology similar to that used to identify Dillon. The Detroit Police Department settled that case for $300,000 in damages and implemented safeguards for using AI facial recognition in its investigations.
Freed-Wessler told Action News Jax that wrongful arrests using AI facial recognition are more common than many think, especially among people of color.
“It’s partly because of photo quality problems in low-light situations, when the cameras are trying to identify darker-skinned people,” Freed-Wessler explained. “In fact, in almost all of the wrongful arrest cases around the country that we know of, it’s been Black people who have been incorrectly, wrongfully picked up by police.”
Action News Jax sat down with Jacksonville Sheriff T.K. Waters to discuss the use of AI facial recognition technology in Jacksonville Sheriff’s Office investigations. Sheriff Waters maintained that the technology is simply a small piece of the investigative puzzle.
“If you came to me with a facial recognition hit and that was your probable cause, I would probably kick you out of my office because that’s not how it works,” Sheriff Waters explained. “And I can’t speak to [the Jacksonville Beach Police Department’s] investigation. I can tell you this, there better be a lot more that goes along with that to help make sure that we have the proper individual too.”
However, Freed-Wessler believes this procedure wasn’t properly followed by Jacksonville Beach police in their investigation, adding that photo spreads based on a facial recognition match aren’t sufficient evidence to make an arrest.
“When this technology gets it wrong, it’s going to get it wrong with a face of somebody who looks similar to the suspect,” Freed-Wessler explained. “It’s no surprise that when police juice a lineup procedure with a doppelganger, with a lookalike, a witness is going to choose an innocent person.”
Now, the Jacksonville Beach Police Department tells Action News Jax the investigation is still open after Dillon was cleared of any wrongdoing, adding in part:
“We will not be commenting on this matter beyond stating that all warrant requests are submitted to the state attorney’s office. It is solely their decision whether or not to move forward with issuing a warrant.”
Action News Jax reached out to the state attorney’s office as well. A spokesman only confirmed Dillon was cleared of any wrongdoing.
Dillon’s lawyer tells Action News Jax that his client is seeking compensation, although both he and Dillon declined interview requests.
Meanwhile, Courtney Barclay, an AI policy expert at Jacksonville University, said law enforcement agencies across the nation will continue to use AI and facial recognition. Barclay stressed the need to always second-guess the technology’s results.
“Every industry is just now starting to scratch the surface of the potential of AI, how it can impact our society. Law enforcement is no exception,” Barclay said. “And so, again, we just want to be cognizant of the risks.”