
AI Research

Artificial intelligence is revolutionizing medical image analysis



Aaron Nicolson working on his model for automated X-ray reporting. Credit: CSIRO

One in two Australians regularly use artificial intelligence (AI), with that number expected to grow. AI is showing up in our lives more prominently than ever, with the arrival of ChatGPT and other chatbots.

Researchers at CSIRO’s Australian e-Health Research Centre (AEHRC) are exploring how AI—including the systems that underpin chatbots—can be leveraged for a more altruistic endeavor: to revolutionize health care.

Earlier versions of ChatGPT were built on an AI system called a large language model (LLM) and were entirely text-based. You would “talk” to it by entering text.

The latest version of ChatGPT, for instance, incorporates visual-language models (VLMs), which add visual understanding on top of the LLM’s language skills. This allows it to “see,” describe what it “sees” and connect it to language.

AEHRC researchers are now using VLMs to help interpret medical images such as X-rays.

It’s complicated technology, but the aim is straightforward: to support radiologists and reduce the burden on them.

Visual language models are transforming X-ray analysis

Dr. Aaron Nicolson, Research Scientist at AEHRC, is one of the researchers working on the project.

He said any kind of image can be used with VLMs, but his team is focusing on chest X-rays.

Chest X-rays are used for many important reasons, including to diagnose heart and respiratory conditions, screen for lung cancers and to check the positioning of medical devices such as pacemakers.

Typically, trained specialists—radiologists—are required to interpret the complex images and produce a diagnostic report.

But in Australia, radiologists are overburdened.

“There are too few radiologists for the mountain of work that needs to be completed,” Nicolson said.

The problem will likely get worse with the number of patients and chest X-rays taken set to keep increasing, especially as the population ages.

That’s why Nicolson is developing a model that uses a VLM to produce radiology reports from chest X-rays.

“The goal is to create technology that can integrate into radiologists’ workflow and provide assistance,” he said.

Practice makes (almost) perfect

Training the VLM involves lots of data. The more information a model has, the better it can make predictions.

The VLM is given the same information that a radiologist would receive—X-ray images and the patient’s referral, Nicolson explained.

“Then we give the model the matching radiology report written by a radiologist. The model learns to produce a report based on the images and information it is given,” he said.

Like humans, AI models improve by practicing.

“We train the model using hundreds of thousands of X-rays. As the model trains on more data, it can produce more accurate reports,” said Nicolson.

At this stage of his research, Nicolson was looking to improve the accuracy of the reports even further—so he decided to try something new.

“We gave the model the patient’s records from the emergency department as well,” he said.

“That means information like the patient’s chief complaint when triaged, their vital signs over the course of the stay, the medications they usually take and the medications administered during the patient’s stay.”

Just as he had hoped, giving the model this extra information improved the accuracy of the radiology reports.

“We are trying to get the technology to a point where it can be considered for prospective trials. This is a big step in that direction,” he said.

Ethical and applicable AI

As well as generating diagnostic reports from chest X-ray images, AEHRC is exploring other applications of VLMs.

Dr. Arvin Zhuang, a postdoctoral researcher at AEHRC, is using VLMs to retrieve information from images of medical documents. Processing the documents as images rather than text enables the information to be retrieved more efficiently.

It’s an exciting time for Nicolson and Zhuang, but ethical and safety considerations are always at the front of their minds.

“We want to make sure that the model is effective for all populations. To do that, we have to consider and manage issues like demographic biases in the data we train our models on,” Nicolson said.

He also notes that the technology is not designed to replace medical specialists.

“The technology will not be making clinical decisions by itself. There will always be a radiologist in the loop,” Nicolson said.

He and his team are currently conducting a trial of the technology in collaboration with the Princess Alexandra Hospital in Brisbane, assessing how the AI-generated reports compare with those produced by human radiologists.

They are also actively seeking additional clinical sites to participate in further trials, to evaluate the technology’s effectiveness across a broader range of settings.

Citation:
Artificial intelligence is revolutionizing medical image analysis (2025, August 10)
retrieved 10 August 2025
from https://medicalxpress.com/news/2025-08-artificial-intelligence-revolutionizing-medical-image.html


Nursa Launches Artificial Intelligence for Nurse Scheduling



Nursa Intelligence Assistant enables rapid posting of single or bulk shifts

SALT LAKE CITY, September 04, 2025–(BUSINESS WIRE)–Nursa, a nationwide platform that exists to put a nurse at the bedside of every patient in need, today announced the launch of an artificial intelligence assistant that enables healthcare facilities to rapidly generate shift listings within the Nursa platform. The first-of-its-kind smart scheduling tool helps organizations post single or bulk shifts within seconds so they can reach qualified, available clinicians immediately.

Active now within the Nursa platform, the Nursa Intelligence Assistant, or “NIA,” allows posts to be created in three ways: users can speak directly to NIA, describing their shift needs; they can take a photo of relevant shift information, even if it’s a handwritten scribble; or they can upload any spreadsheet or file used to track scheduling. From there, NIA fills in the details, letting users review, edit, and confirm pricing before posting.

Carlee Scholl, staffing coordinator at Sullivan Park Care Center in Spokane, Wash., manages up to 150 shifts per month and recently began using NIA to schedule individual and bulk shifts. She described the experience as quick and accurate, with the AI assistant capturing all the details perfectly. “I just looked it over to make sure it was everything that I needed,” she said. “It was spot on.”

“Artificial Intelligence is opening up new opportunities to streamline cumbersome workflows so healthcare facilities can focus on the important business of delivering quality patient care,” said Curtis Anderson, CEO and founder of Nursa. “With NIA, facilities eliminate the repetitive typing and data entry of shift posting by generating one or thousands of shifts in just seconds. We’re redefining what fast and easy staffing feels like, and this is just the beginning.”

For more information on how Nursa helps healthcare facilities, hospitals and health systems solve staffing needs with qualified clinicians, visit nursa.com.

About Nursa

Nursa is a nationwide platform that exists to put a nurse at the bedside of every patient in need, removing the financial strain and operational gaps of traditional staffing agencies. Nursa’s technology enables hospitals, health systems, skilled nursing facilities and community organizations to easily secure reliable, qualified nursing talent for per diem shifts and contract work. Founded in 2019 and headquartered in Salt Lake City, Nursa is trusted by a growing community of more than 3,400 facilities and 400,000 nurses nationwide and is accredited by The Joint Commission. For more information, visit nursa.com.




Researchers Empower AI Companions With Spatiotemporal Reasoning For Dynamic Real-world Understanding



The ability to understand and respond to specific references within a video, relating to both where and when events occur, represents a crucial next step for artificial intelligence. Honglu Zhou, Xiangyu Peng, Shrikant Kendre, and colleagues at Salesforce AI Research address this challenge with Strefer, a novel framework that empowers Video LLMs with advanced spatiotemporal reasoning capabilities. Strefer generates synthetic instruction data, effectively teaching these models to interpret fine-grained spatial and temporal references within dynamic video footage, without relying on expensive or time-consuming human annotation. This approach significantly improves a Video LLM’s ability to understand complex instructions involving specific objects, locations, and moments in time, paving the way for more versatile and perceptually grounded AI companions capable of interacting with the real world. The results demonstrate that models trained with Strefer-generated data outperform existing methods on tasks requiring precise spatial and temporal understanding, establishing a new benchmark for instruction-tuned video analysis.

Data Synthesis and VLM Evaluation Strategies

This research details a project focused on building more robust and accurate Video Language Models (VLMs) to improve their ability to understand and reason about video content, particularly in complex scenarios involving temporal reasoning, object localization, and nuanced descriptions. The core goal is to address the limitations of existing VLMs, which often struggle with tasks requiring precise temporal understanding or grounding in specific video segments. The project relies heavily on generating synthetic data that targets those weaknesses, deliberately challenging the model where it struggles. This is achieved through a process called Strefer, and the data covers a wide range of tasks categorized as open-ended question answering, multiple-choice question answering, temporal reasoning, object localization, and reasoning about actions and behaviors.

The data format varies, specifying how much of the video is used as input, and whether frames are extracted from a segment or the full video. Many tasks have mask-refer versions, where the question focuses on a specific region of interest in the video, forcing the model to ground its answers in the visual content. To improve the model’s ability to understand time, the research uses a technique that discretizes continuous time into segments, representing each segment with a temporal token added to the language model’s vocabulary. This allows it to process time-related information more effectively. Existing models struggle with understanding complex video content when queries rely on precise spatial locations or specific moments in time. Strefer addresses this limitation by systematically creating detailed, object-centric metadata from videos, including the location of subjects and objects as tracked over time, and their associated actions. This innovative approach leverages a modular system of pre-trained models, including Large Language Models and multimodal vision foundation models, to pseudo-annotate videos with temporally dense information.

By building upon this structured metadata, Strefer guides language models in generating high-quality instruction data specifically designed to train Video LLMs in understanding and responding to complex spatiotemporal references. Unlike existing datasets, Strefer automatically produces instruction-response pairs at scale, grounded in the dynamic, object-centric structures within videos. Current models struggle with detailed spatial and temporal reasoning, particularly when interpreting gestures or time-based cues in user queries. Strefer addresses this limitation by automatically generating synthetic training data that includes rich, detailed information about objects, their locations, and actions occurring at specific moments in time. By using a combination of existing AI models to annotate videos with this detailed metadata, Strefer creates a large dataset without the need for costly human annotation.

Experiments demonstrate that video models trained with this synthetically generated data outperform existing models on tasks requiring spatial and temporal disambiguation, showing enhanced reasoning abilities. The authors acknowledge that the framework relies on the accuracy of the underlying AI models used for annotation. Future work may focus on refining the annotation process and exploring the application of Strefer to more complex real-world scenarios.

👉 More information
🗞 Strefer: Empowering Video LLMs with Space-Time Referring and Reasoning via Synthetic Instruction Data
🧠 ArXiv: https://arxiv.org/abs/2509.03501




Medical Library Discovery Service 2.0



AI-powered discovery services are reshaping medical and academic research, helping institutions lead in innovation and evidence-based practice.

Combining AI-driven precision, an integrated knowledge hub, and actionable insights, a new AI discovery service by Ovid® is revolutionizing how healthcare and academic institutions manage research challenges. With tools designed to streamline workflows, enhance retrieval accuracy, and synthesize information, it ensures that institutions stay at the forefront of medical and academic advancements.

Understanding the challenges in research management

Healthcare and academic institutions operate as constantly evolving ecosystems powered by ongoing research and innovation. However, outdated tools and fragmented systems often hinder progress, leading to inefficiencies in critical workflows. Before implementing solutions, it is essential to fully comprehend the key challenges institutions face:

1. Siloed resources restrict innovation

Institutions often house thousands of vital resources — articles, guidelines, clinical tools — but these remain scattered across disconnected systems. Navigating this complex landscape is not only time-consuming but also limits the full potential of groundbreaking research.

2. Time constraints hamper impactful decision-making

Healthcare professionals and academic researchers alike face incredible pressure to deliver fast, precise outcomes. With traditional search systems requiring manual efforts to filter relevant data, precious time is wasted sifting through irrelevant or outdated resources.

3. Inefficient processes lead to missed opportunities

Fragmented research workflows create operational bottlenecks, delaying critical discoveries and increasing the risk of oversight in clinical and academic settings. Building more unified and efficient systems is essential to maximizing outcomes.

The Ovid Discovery AI solution

Ovid Discovery AI addresses these issues through a powerful combination of cutting-edge technology and user-centric design. By aligning directly with institutional needs, it transforms workflows into seamless processes, accelerates research, and empowers decision-makers with actionable insights.

1. Find what you need, fast

AI-powered search and contextual matching get to the true meaning behind search queries, delivering relevant results and concise summaries while reducing irrelevant noise. Your AI Results Analysis Assistant also extracts study details and publication quality metrics so you can quickly assess evidence strength and make informed decisions with confidence.

AI biomedical semantic search & facets

Artificial intelligence transforms queries and content into semantic vectors — allowing the platform to return highly relevant results, even when different terminology is used. Users can refine their search with biomedical facets mapped to MeSH categories like diseases, drugs, and more.

AI contextual matching

Whether users search “impact of alcohol on depression” or “mental health effects of drinking,” Ovid Discovery AI surfaces the most meaningful research — not irrelevant noise.

AI-generated summaries

Each result is accompanied by a reliable, AI-generated summary that synthesizes the most important takeaways and cites supporting sources — saving users time and guiding their next steps.

AI-powered search suggestions

Based on query context and user intent, the platform dynamically generates related search suggestions, guiding users toward deeper discovery.

AI Results Analysis Assistant

Get support assessing impact, methods, and outcomes at a glance. Suggested queries and main concepts guide deeper exploration, delivering the most relevant evidence with exceptional speed and accuracy.

2. Centralize all your resources

At the heart of Ovid Discovery AI is its centralized repository, which transforms fragmented institutional assets into an accessible, unified knowledge hub. Rather than navigating disparate systems, users can instantly access all licensed library resources alongside organizational best practices and proprietary documents.

This customizable repository removes traditional barriers to information access, enabling students, researchers, and clinicians to focus on leveraging knowledge rather than hunting for resources.

Plus, for organizations that have an Ovid® Synthesis subscription, users can seamlessly send search results into new or existing projects in Ovid Synthesis directly from the Discovery interface. With a single click, users can begin analyzing and synthesizing the literature they just found — no downloads or separate systems required.

Then, finalized project summaries from Ovid Synthesis can be exported as PDFs and uploaded back into Ovid Discovery — making institutional evidence searchable, citable, and accessible to the broader organization or the public. Finally, administrators can view project activity, progress, and output in a standardized format, enabling more effective oversight across departments and initiatives.

3. Gain actionable library insights

Medical libraries can gather intelligence on resource usage and user behavior, enabling informed decision-making on content acquisition and library resource allocation, maximizing ROI. With the 360º Insights Dashboard and personalized reporting available, institutions can track resource usage, optimize content acquisition strategies, and identify emerging needs through data analytics embedded within the platform.

A new standard for research excellence

Whether you are driving clinical excellence or conducting groundbreaking academic studies, Ovid Discovery AI is the ultimate tool for transforming your processes. From addressing outdated infrastructure to introducing streamlined workflows powered by AI, it sets a new benchmark for innovation and reliability.

The advanced search technologies, seamless centralized repository, integration across research platforms, and data-driven insights make it the definitive platform for institutions aiming to optimize outputs while minimizing inefficiencies. With this level of precision and efficiency, Ovid Discovery AI ensures users can access, analyze, and apply high-quality evidence to achieve results that matter.

Unwavering support

At Wolters Kluwer Health, Customer Support is committed to your success. You’ll have a dedicated consultant and implementation team to ensure a quick and customizable setup process, which takes about two weeks. Additionally, Ovid Support is there for you throughout the entire service lifecycle, available 24/7/365.


