
AI Insights

Artificial intelligence is revolutionising medical image analysis

By Naomi Stekelenburg

6 August 2025
4 min read





Key points

  • AI is now a prominent feature of the healthcare landscape.
  • One type of AI, the visual language model, is being used to “read” X-rays and generate reports.
  • The technology will not replace human analysis but provide a tool to support radiologists.



One in two Australians regularly use artificial intelligence (AI), with that number expected to grow. AI is showing up in our lives more prominently than ever, with the arrival of ChatGPT and other chatbots.

Researchers at CSIRO’s Australian e-Health Research Centre (AEHRC) are exploring how AI – including the systems that underpin chatbots – can be leveraged for a more altruistic endeavour: to revolutionise healthcare.

Earlier versions of ChatGPT were built on an AI system called a large language model (LLM) and were entirely text-based. You would ‘talk’ to it by entering text.

The latest version of ChatGPT, for instance, incorporates visual-language models (VLMs), which add visual understanding on top of the LLM’s language skills. This allows it to ‘see’, describe what it ‘sees’ and connect it to language.

AEHRC researchers are now using VLMs to help interpret medical images such as X-rays.

It’s complicated technology, but the aim is straightforward: to support radiologists and reduce the burden on them.






This work enables automated reporting of X-rays

Visual language models are transforming X-ray analysis

Dr Aaron Nicolson, Research Scientist at AEHRC, is one of the researchers working on the project.

He said any kind of image can be used with VLMs, but his team is focusing on chest X-rays.

Chest X-rays are used for many important reasons, including diagnosing heart and respiratory conditions, screening for lung cancers and checking the positioning of medical devices such as pacemakers.

Typically, trained specialists – radiologists – are required to interpret the complex images and produce a diagnostic report.

But in Australia, radiologists are overburdened.

“There are too few radiologists for the mountain of work that needs to be completed,” Aaron said.

The problem will likely get worse with the number of patients and chest X-rays taken set to keep increasing, especially as the population ages.

That’s why Aaron is developing a model that uses a VLM to produce radiology reports from chest X-rays.

“The goal is to create technology that can integrate into radiologists’ workflow and provide assistance,” he said.






Aaron Nicolson working on his model for automated X-ray reporting

Practice makes (almost) perfect

Training the VLM involves lots of data. The more information a model has, the better it can make predictions.

The VLM is given the same information that a radiologist would receive – X-ray images and the patient’s referral, Aaron explained.

“Then we give the model the matching radiology report written by a radiologist. The model learns to produce a report based on the images and information it is given,” he said.

Like humans, AI models improve by practising.

“We train the model using hundreds of thousands of X-rays. As the model trains on more data, it can produce more accurate reports,” said Aaron.
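As a rough illustration of the supervised setup Aaron describes, each training example pairs the X-ray image(s) and the referral with the radiologist-written report as the target. The sketch below is a minimal, hypothetical Python rendering of that pairing; the field names and prompt format are assumptions for illustration, not CSIRO's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ChestXrayExample:
    """One supervised training pair for report generation.

    All field names here are illustrative, not the project's actual schema.
    """
    image_paths: list    # one or more X-ray views (e.g. frontal, lateral)
    referral_text: str   # the clinician's referral, as a radiologist would see it
    target_report: str   # the radiologist-written report the model learns to reproduce

def build_prompt(example: ChestXrayExample) -> str:
    """Assemble the text side of the input; the images themselves would be
    passed to the VLM's vision encoder separately."""
    return (
        "Referral: " + example.referral_text + "\n"
        "Task: write the radiology report for the attached chest X-ray(s)."
    )

example = ChestXrayExample(
    image_paths=["frontal.png", "lateral.png"],
    referral_text="65yo, shortness of breath, query pneumonia.",
    target_report="Patchy consolidation in the right lower lobe...",
)
print(build_prompt(example))
```

During training, the model's generated report would be compared against `target_report`; at inference time only the images and referral are supplied.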

At this stage of his research, Aaron was looking to improve the accuracy of the reports even further – so he decided to try something new.

“We gave the model the patient’s records from the emergency department as well,” he said.

“That means information like the patient’s chief complaint when triaged, their vital signs over the course of the stay, the medications they usually take and the medications administered during the patient’s stay.”

Just as he had hoped, giving the model this extra information improved the accuracy of the radiology reports.
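The extra emergency-department context Aaron lists (chief complaint, vital signs, medications) has to be turned into something the model can read alongside the images. A minimal sketch, with entirely assumed field names and formatting, of how such a record might be serialised into prompt text:

```python
def serialize_ed_record(record: dict) -> str:
    """Flatten illustrative emergency-department fields into prompt text.

    The keys used here are assumptions for illustration, not the actual
    dataset schema.
    """
    lines = []
    if record.get("chief_complaint"):
        lines.append("Chief complaint: " + record["chief_complaint"])
    for time, vitals in record.get("vital_signs", []):
        lines.append(f"Vitals at {time}: " +
                     ", ".join(f"{k}={v}" for k, v in vitals.items()))
    if record.get("home_medications"):
        lines.append("Usual medications: " + ", ".join(record["home_medications"]))
    if record.get("administered_medications"):
        lines.append("Administered: " + ", ".join(record["administered_medications"]))
    return "\n".join(lines)

record = {
    "chief_complaint": "chest pain on exertion",
    "vital_signs": [("10:05", {"HR": 92, "SpO2": "96%"})],
    "home_medications": ["metformin"],
    "administered_medications": ["aspirin"],
}
print(serialize_ed_record(record))
```

The serialised text would simply be appended to the referral in the model's input, which is one plausible way the extra context could be supplied.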

“We are trying to get the technology to a point where it can be considered for prospective trials. This is a big step in that direction,” he said.






Workflow of the large language model.

Ethical and applicable AI

As well as generating diagnostic reports from chest X-ray images, AEHRC is exploring other applications of VLMs.

Dr Arvin Zhuang, a post-doc at AEHRC, is using VLMs to retrieve information from images of medical documents. Processing the documents as images rather than text enables the information to be retrieved more efficiently.

It’s an exciting time for Aaron and Arvin, but ethical and safety considerations are always at the front of their minds.

“We want to make sure that the model is effective for all populations. To do that, we have to consider and manage issues like demographic biases in the data we train our models on,” Aaron said.

He also notes that the technology is not designed to replace medical specialists.

“The technology will not be making clinical decisions by itself. There will always be a radiologist in the loop,” Aaron said.

Aaron and his team are currently conducting a trial of the technology in collaboration with the Princess Alexandra Hospital in Brisbane, assessing how the AI-generated reports compare with those produced by human radiologists.

They are also actively seeking additional clinical sites to participate in further trials, to evaluate the technology’s effectiveness across a broader range of settings.

















What the Tech? Browser with built-in artificial intelligence may change how you search | What The Tech?

If you use Google Chrome as your primary browser, you’re not alone. It’s the world’s most popular browser, but there are other choices.

And now, one with built-in artificial intelligence may change how you search.

Jamey Tucker shows us, in “What the Tech.”




Artificial intelligence offering political practices advice about robocalls in Montana GOP internal spat

A version of this story first appeared in Capitolized, a weekly newsletter featuring expert reporting, analysis and insight from the editors and reporters of Montana Free Press. Want to see Capitolized in your inbox every Thursday? Sign up here.


The robocalls to John Sivlan’s phone this summer just wouldn’t let up. Recorded messages were coming in several times a day from multiple phone numbers, all trashing state Republican Rep. Llew Jones, a shrewd, 11-term lawmaker with an earned reputation for skirting party hardliners to pass the Legislature’s biggest financial bills, including the state budget. 

Sivlan, 80, a lifelong Republican who lives in Jones’ northcentral Montana hometown of Conrad, wasn’t amused by the general election-style attacks hitting his phone nearly a year before the next legislative primary. Jones, in turn, wasn’t impressed with the Commissioner of Political Practices’ advice that nothing could be done about the calls. The COPP polices campaigns and lobbying in Montana, and the opinion the office issued in response to a request from Jones to review the robocalls was written not by an office employee but instead authored by ChatGPT. 

“They were coming in hot and heavy in July,” Sivlan said on Aug. 26 while scrolling through his messages. “There must be dozens of these.”

“Did you know that Llew Jones sides with Democrats more than any other Republican in the Montana Legislature? If he wants to vote with Democrats, Jones should at least switch parties,” the robocalls said.

“And then they list his number and tell you to call him and tell him,” Sivlan continued.

In addition to the robocalls, a string of ads running on streaming services targeted Jones. On social media, placement ads depicted Jones as the portly, white-suited county commissioner Boss Hogg from “The Dukes of Hazzard” TV comedy of the early 1980s. None of the ads or calls disclosed who was paying for them.

Jones told Capitolized that voters were annoyed by the messaging, but said most people he’s talked to weren’t buying into it. He assumes the barrage was timed to reach voters before his own campaign outreach for the June 2026 primary.

The COPP’s new AI helper concluded that only ads appearing within 60 days of an election could be regulated by the office. The ads would also have to expressly advise the public on how to vote to fall under campaign finance reporting requirements.
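The 60-day rule the AI opinion cited is simple date arithmetic. A small sketch of that test as reported; the function name and the 2026 primary date used below are illustrative assumptions, not official figures:

```python
from datetime import date, timedelta

def within_regulated_window(ad_date: date, election_date: date,
                            window_days: int = 60) -> bool:
    """True if the ad ran on or within `window_days` days before the
    election -- the window the COPP's AI opinion cited, as reported."""
    return timedelta(0) <= election_date - ad_date <= timedelta(days=window_days)

primary = date(2026, 6, 2)  # assumed 2026 primary date, for illustration only

# The July 2025 robocalls fall far outside the window; a May 2026 ad would not.
print(within_regulated_window(date(2025, 7, 15), primary))
print(within_regulated_window(date(2026, 5, 1), primary))
```

By this test, the summer 2025 calls that annoyed voters in Conrad would sit well outside anything the office said it could regulate.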

In the response emailed to Jones, the AI program followed its opinion with a very chipper “Would you like guidance on how to monitor or respond to such ads effectively?”

“I felt that it was OK,” Commissioner Chris Gallus said of the AI opinion provided to Jones. “There were some things that I probably would have been more thorough about. Really at this point I wanted Llew to see where we were at that time with the (AI) build-out, more than explicit instructions.”

The plan is to prepare the COPP’s AI system for the coming 2026 primary elections, at which point members of the COPP staff will review the bot’s responses and supplement when necessary. But the system is already on the commissioner’s website, offering advice based solely on Montana laws and COPP’s own data, and not on what it might scrounge from the internet, according to Gallus.

Earlier this year, the Legislature put limits on AI use by government agencies, including a requirement for government disclosure and oversight of decisions and recommendations made by AI systems. The bill, by Rep. Braxton Mitchell, R-Columbia Falls, was opposed by only a handful of lawmakers.

Gallus said the artificial intelligence system at COPP is being built by 3M Data, a vendor with previous experience with machine learning for the Red Cross and the oil companies Shell and Exxon, where systems gathered and analyzed copious amounts of operational data. COPP has about $38,000 to work with, Gallus said.

The pre-primary battles within the Montana Republican Party are giving the COPP’s machine learning an early test, while also exposing loopholes in campaign reporting laws. 

There is no disclosure law for the ads placed on streaming services, unlike ad details for traditional radio and TV stations, cable and satellite, which must be available for public inspection under Federal Communications Commission law. The state would have to fill that gap, which the FCC and Federal Election Commission have struggled to do since 2011. 

Streaming now accounts for 45% of all TV viewing, according to Nielsen, more than broadcast and cable combined. Cable viewership has declined 39% since 2021.

“When we asked KSEN (a popular local radio station) who was paying for the ads, they didn’t know,” Jones said. “People were listening on Alexa.”

Nonetheless, Jones said the robocalls are coming from within the Republican house. An effort by hardliners to purge more centrist legislators from the party has been underway since April, when the MTGOP executive board began “rescinding recognition” of the state Republican senators who collaborated with a bipartisan group of Democrats and House Republicans to pass a budget, increase teacher pay and lower taxes on primary homes.

Being Republican doesn’t require recognition by the MTGOP “e-board,” as it’s known. In June, when the party chose new leadership, newly elected Chair Art Wittich said the party would no longer stay neutral in primary elections and would look for conservative candidates to support.

Republicans who have registered campaigns for the Legislature were issued questionnaires Aug. 17 by the Conservative Governance Committee, a group chaired by Keith Regier, a former state legislator and father of a Flathead County family that’s sent three members to the Montana Legislature; in 2023 Keith Regier and two of his children served in the Legislature simultaneously.

Membership of the Conservative Governance Committee, and of a new Red Policy Committee that will set legislative priorities, is still a work in progress, new party spokesman Ethan Holmes said this week.

The 14 questions, which Regier informed candidates could be used to determine party support of campaigns, hit on standard Republican fare: guns, “thoughts on transgenderism,” and at what point human life starts. There was no question about a willingness to follow caucus leadership. Regier’s son, Matt, was elected Senate president in late 2024, but lost control of his caucus on the first day of the legislative session in January.




“AI Is Not Intelligent at All” – Expert Warns of Worldwide Threat to Human Dignity


Opaque AI systems risk undermining human rights and dignity. Global cooperation is needed to ensure protection.

The rise of artificial intelligence (AI) has changed how people interact, but it also poses a global risk to human dignity, according to new research from Charles Darwin University (CDU).

Lead author Dr. Maria Randazzo, from CDU’s School of Law, explained that AI is rapidly reshaping Western legal and ethical systems, yet this transformation is eroding democratic principles and reinforcing existing social inequalities.

She noted that current regulatory frameworks often overlook basic human rights and freedoms, including privacy, protection from discrimination, individual autonomy, and intellectual property. This shortfall is largely due to the opaque nature of many algorithmic models, which makes their operations difficult to trace.

The black box problem

Dr. Randazzo described this lack of transparency as the “black box problem,” noting that the decisions produced by deep-learning and machine-learning systems cannot be traced by humans. This opacity makes it challenging for individuals to understand whether and how an AI model has infringed on their rights or dignity, and it prevents them from effectively pursuing justice when such violations occur.

Dr. Maria Randazzo has found AI has reshaped Western legal and ethical landscapes at unprecedented speed. Credit: Charles Darwin University

“This is a very significant issue that is only going to get worse without adequate regulation,” Dr. Randazzo said.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour.

“It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

Global approaches to AI governance

Currently, the world’s three dominant digital powers – the United States, China, and the European Union – are taking markedly different approaches to AI, leaning on market-centric, state-centric, and human-centric models, respectively.

Dr. Randazzo said the EU’s human-centric approach is the preferred path to protect human dignity, but without a global commitment to this goal, even that approach falls short.

“Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, to show empathy and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition,” she said.

“Humankind must not be treated as a means to an end.”

Reference: “Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes” by Maria Salvatrice Randazzo and Guzyal Hill, 23 April 2025, Australian Journal of Human Rights.
DOI: 10.1080/1323238X.2025.2483822

The paper is the first in a trilogy Dr. Randazzo will produce on the topic.



