Google’s open MedGemma AI models could transform healthcare

Instead of keeping its new MedGemma AI models locked behind expensive APIs, Google is handing these powerful tools directly to healthcare developers.

The new arrivals are called MedGemma 27B Multimodal and MedSigLIP, and they're part of Google's growing collection of open-source healthcare AI models. What makes these special isn't just their technical prowess, but the fact that hospitals, researchers, and developers can download them, modify them, and run them however they see fit.

Google’s AI meets real healthcare

The flagship MedGemma 27B model doesn’t just read medical text like previous versions did; it can actually “look” at medical images and understand what it’s seeing. Whether it’s chest X-rays, pathology slides, or patient records potentially spanning months or years, it can process all of this information together, much like a doctor would.

The performance figures are quite impressive. When tested on MedQA, a standard medical knowledge benchmark, the 27B text model scored 87.7%. That puts it within spitting distance of much larger, more expensive models whilst costing about a tenth as much to run. For cash-strapped healthcare systems, that’s potentially transformative.

The smaller sibling, MedGemma 4B, might be more modest in size but it's no slouch. Despite being tiny by modern AI standards, it scored 64.4% on the same tests, making it one of the best performers in its weight class. More importantly, when US board-certified radiologists reviewed chest X-ray reports it had written, they deemed 81% of them accurate enough to guide actual patient care.

MedSigLIP: A featherweight powerhouse

Alongside these generative AI models, Google has released MedSigLIP. At just 400 million parameters, it’s practically featherweight compared to today’s AI giants, but it’s been specifically trained to understand medical images in ways that general-purpose models cannot.

This little powerhouse has been fed a diet of chest X-rays, tissue samples, skin condition photos, and eye scans. The result? It can spot patterns and features that matter in medical contexts whilst still handling everyday images perfectly well.

MedSigLIP creates a bridge between images and text. Show it a chest X-ray and ask it to find similar cases in a database, and it'll match not just visual similarities but medical significance too.
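In practice, this kind of image-to-text matching is typically done by embedding both modalities into a shared vector space and ranking stored cases by cosine similarity. Below is a minimal, illustrative sketch of that pattern using Hugging Face Transformers; the checkpoint name, file path, and example report texts are assumptions for illustration, not details confirmed in the article.

```python
# Minimal sketch: image-text retrieval with a SigLIP-style embedding model.
# The checkpoint name, image path, and candidate reports are assumptions.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_id = "google/medsiglip-448"  # assumed checkpoint name
model = AutoModel.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("chest_xray.png").convert("RGB")
candidate_reports = [
    "Chest X-ray showing left lower lobe consolidation",
    "Normal chest X-ray with no acute findings",
]

# Encode the query image and the candidate report texts into the same space.
image_inputs = processor(images=image, return_tensors="pt")
text_inputs = processor(text=candidate_reports, padding="max_length", return_tensors="pt")

with torch.no_grad():
    image_emb = model.get_image_features(**image_inputs)
    text_emb = model.get_text_features(**text_inputs)

# Cosine similarity ranks which stored reports best match the query image.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
scores = (image_emb @ text_emb.T).squeeze(0)
best = scores.argmax().item()
print(candidate_reports[best], scores[best].item())
```

The same embedding step can be run offline over an archive of past cases, so a new scan only needs to be compared against precomputed vectors at query time.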

Healthcare professionals are putting Google’s AI models to work

The proof of any AI tool lies in whether real professionals actually want to use it. Early reports suggest doctors and healthcare companies are excited about what these models can do.

DeepHealth in Massachusetts has been testing MedSigLIP for chest X-ray analysis. They’re finding it helps spot potential problems that might otherwise be missed, acting as a safety net for overworked radiologists. Meanwhile, at Chang Gung Memorial Hospital in Taiwan, researchers have discovered that MedGemma works with traditional Chinese medical texts and answers staff questions with high accuracy.

Tap Health in India has highlighted something crucial about MedGemma’s reliability. Unlike general-purpose AI that might hallucinate medical facts, MedGemma seems to understand when clinical context matters. It’s the difference between a chatbot that sounds medical and one that actually thinks medically.

Why open-sourcing the AI models is critical in healthcare

Beyond generosity, Google's decision to make these models open is also strategic. Healthcare has unique requirements that standard AI services can't always meet. Hospitals need to know their patient data isn't leaving their premises. Research institutions need models that won't suddenly change behaviour without warning. Developers need the freedom to fine-tune for very specific medical tasks.

By open-sourcing the AI models, Google has addressed these concerns around healthcare deployments. A hospital can run MedGemma on its own servers, modify it for its specific needs, and trust that it'll behave consistently over time. For medical applications where reproducibility is crucial, this stability is invaluable.
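As a rough illustration of what "running it on your own servers" looks like, the sketch below loads an instruction-tuned MedGemma-style checkpoint locally with Hugging Face Transformers. The model ID, prompt, and generation settings are illustrative assumptions rather than a deployment recipe from the article, and it assumes a recent Transformers version with chat-template support.

```python
# Minimal sketch: local inference with an instruction-tuned MedGemma-style model.
# Model ID, prompt, and settings are assumptions for illustration only.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",  # assumed checkpoint name
    torch_dtype=torch.bfloat16,
    device_map="auto",  # inference stays on local hardware
)

messages = [
    {"role": "system", "content": "You are a careful clinical assistant. Flag uncertainty explicitly."},
    {"role": "user", "content": "Summarise the key differentials for acute chest pain in a 60-year-old."},
]

output = pipe(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])
```

Because the weights live on the hospital's own infrastructure, the same pinned checkpoint can be reused indefinitely, which is what gives the behavioural consistency described above.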

However, Google has been careful to emphasise that these models aren’t ready to replace doctors. They’re tools that require human oversight, clinical correlation, and proper validation before any real-world deployment. The outputs need checking, the recommendations need verifying, and the decisions still rest with qualified medical professionals.

This cautious approach makes sense. Even with impressive benchmark scores, medical AI can still make mistakes, particularly when dealing with unusual cases or edge scenarios. The models excel at processing information and spotting patterns, but they can’t replace the judgment, experience, and ethical responsibility that human doctors bring.

What’s exciting about this release isn’t just the immediate capabilities, but what it enables. Smaller hospitals that couldn’t afford expensive AI services can now access cutting-edge technology. Researchers in developing countries can build specialised tools for local health challenges. Medical schools can teach students using AI that actually understands medicine.

The models are designed to run on single graphics cards, with the smaller versions even adaptable for mobile devices. This accessibility opens doors for point-of-care AI applications in places where high-end computing infrastructure simply doesn’t exist.

As healthcare continues grappling with staff shortages, increasing patient loads, and the need for more efficient workflows, AI tools like Google’s MedGemma could provide some much-needed relief. Not by replacing human expertise, but by amplifying it and making it more accessible where it’s needed most.


See also: Tencent improves testing creative AI models with new benchmark





Pope: AI development must build bridges of dialogue and promote fraternity


In a message to the United Nations' AI for Good Summit taking place in Geneva, signed by the Cardinal Secretary of State Pietro Parolin, Pope Leo XIV encourages nations to create frameworks and regulations that work for the common good.

By Isabella H. de Carvalho

Pope Leo XIV encouraged nations to establish frameworks and regulations on AI so that it can be developed and used in accordance with the common good, in a message sent on July 10 to the participants of the AI for Good Summit, taking place in Geneva, Switzerland, from July 8 to 11.

“I would like to take this opportunity to encourage you to seek ethical clarity and to establish a coordinated local and global governance of AI, based on the shared recognition of the inherent dignity and fundamental freedoms of the human person”, the message, signed by the Secretary of State, Cardinal Pietro Parolin, said.

The summit is organized by the United Nations’ International Telecommunication Union (ITU) and is co-hosted by the Swiss government. The event sees the participation of governments, tech leaders, academics and others who are interested and work with AI.

In this “era of profound innovation” where many are reflecting on “what it means to be human”, the world “is at crossroads, facing the immense potential generated by the digital revolution driven by Artificial Intelligence”, the Pope highlighted in his message. 

AI requires ethical management and regulatory frameworks 

“As AI becomes capable of adapting autonomously to many situations by making purely technical algorithmic choices, it is crucial to consider its anthropological and ethical implications, the values at stake and the duties and regulatory frameworks required to uphold those values”, the Pope underlined in his message. 

He emphasized that the “responsibility for the ethical use of AI systems begins with those who develop, manage and oversee them” but users also need to share this mission. AI “requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency,” the Pope insisted. 

Building peaceful societies 

Citing St. Augustine's concept of the "tranquility of order", Pope Leo highlighted that this should be the common goal, and thus AI should foster a "more human order of social relations" and "peaceful and just societies in the service of integral human development and the good of the human family".

While AI can simulate human reasoning and perform tasks quickly and efficiently or transform areas such as “education, work, art, healthcare, governance, the military, and communication”, “it cannot replicate moral discernment or the ability to form genuine relationships”, Pope Leo warned. 

For him the development of this technology “must go hand in hand with respect for human and social values, the capacity to judge with a clear conscience, and growth in human responsibility”. It requires “discernment to ensure that AI is developed and utilized for the common good, building bridges of dialogue and fostering fraternity”, the Pope urged. AI needs to serve “the interests of humanity as a whole”.




AI slows down some experienced software developers, study finds


By Anna Tong

SAN FRANCISCO (Reuters) - Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found.

AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.

The study’s lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected “a 2x speed up, somewhat obviously.”

The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development.

AI is also expected to replace entry-level coding positions. Dario Amodei, CEO of Anthropic, recently told Axios that AI could wipe out half of all entry-level white collar jobs in the next one to five years.

Prior literature on productivity improvements has found significant gains: one study found that using AI sped up coders by 56%, while another found developers were able to complete 26% more tasks in a given time.

But the new METR study shows that those gains don’t apply to all software development scenarios. In particular, this study showed that experienced developers intimately familiar with the quirks and requirements of large, established open source codebases experienced a slowdown.

Other studies often rely on software development benchmarks for AI, which sometimes misrepresent real-world tasks, the study’s authors said.

The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested.

“When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what’s needed,” Becker said.

The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren’t familiar with.

Still, the majority of the study's participants, as well as the study's authors, continue to use Cursor today. The authors believe this is because AI makes the development experience easier and, in turn, more pleasant, akin to editing an essay instead of staring at a blank page.

“Developers have goals other than completing the task as soon as possible,” Becker said. “So they’re going with this less effortful route.”

(Reporting by Anna Tong in San Francisco; Editing by Sonali Paul)


