How artificial intelligence is transforming medical imaging

A decade ago, deep learning prototypes wowed conferences but rarely touched patients. By June 2025, 777 artificial intelligence-enabled devices had received Food and Drug Administration (FDA) clearance, and two-thirds of U.S. radiology departments used AI in some capacity. This rapid shift pairs radiologists’ pattern-recognition skills with machines that never tire, promising faster scans, sharper pictures, and earlier answers, Vivian Health reports.

FDA Approvals Mark AI’s Clinical Coming-of-Age

The FDA continuously updates its list of devices that use AI and machine learning (ML) technologies, and that list has grown exponentially since 2018. Algorithms for stroke, breast cancer, and lung nodule detection dominate it. AI/ML has become a tool that radiology departments and other healthcare areas nationwide use to improve patient care.

Because these products are regulated as software-as-a-medical-device (SaMD), vendors must demonstrate safety and effectiveness and, often, submit a detailed plan for routine updates. The agency’s 2024 cross-center framework further streamlines the review process, encouraging AI innovators while protecting patients.

How AI Supports Patient Care

Slashes Scan Times and Dose

AI isn’t just for interpreting images. It’s also remaking how they’re acquired. Deep-learning reconstruction algorithms clarify low-dose CT or limited-echo MRI data so sharply that technologists can cut radiation or magnet time without losing detail. These reductions make scans safer for patients and providers.
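To make that concrete, here is a minimal sketch in Python (PyTorch) of the residual-denoising idea behind deep-learning reconstruction: a small network predicts the noise in a low-dose slice and subtracts it. The architecture, layer widths, and the dummy 512x512 slice are illustrative assumptions, not any vendor’s actual pipeline.

```python
# Minimal sketch only: a tiny residual CNN denoiser for a single CT slice, standing
# in for the far larger deep-learning reconstruction networks described above.
# Layer sizes and the random input are hypothetical.
import torch
import torch.nn as nn

class SliceDenoiser(nn.Module):
    """Predicts the noise in a single-channel slice and removes it."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, low_dose_slice: torch.Tensor) -> torch.Tensor:
        # Residual learning: output = noisy input minus the predicted noise.
        return low_dose_slice - self.net(low_dose_slice)

model = SliceDenoiser()
noisy = torch.randn(1, 1, 512, 512)   # (batch, channel, height, width)
restored = model(noisy)
print(restored.shape)                 # torch.Size([1, 1, 512, 512])
```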

The National Institute of Biomedical Imaging and Bioengineering’s (NIBIB) informatics program funds teams refining reconstruction networks to preserve quantitative accuracy. Researchers at the Massachusetts Institute of Technology (MIT) took it a step further with FeatUp, a model-agnostic method that restores spatial resolution to the features inside any vision network, making it easier to obtain submillimeter detail from standard scanners.

Ultrasound also benefits. The University of Wisconsin’s medical physics group pairs AI beamformers with point-of-care probes, bringing cardiology-grade clarity to handheld devices. Faster scans mean shorter breath-holds, happier patients, and more appointment slots each day. Patients notice the value even if they’ve never heard of algorithms.

Flags Urgent Cases

In busy trauma centers, thousands of cross-sectional images pour in each hour. AI triage tools watch in the background, pushing suspected hemorrhages or pulmonary embolisms to the top of a worklist so radiologists read them first. At the Radiological Society of North America (RSNA) 2024 sessions, one discussion focused on AI workload relief, including measurable drops in turnaround time for critical findings and a tangible decrease in radiologist burnout.
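As a rough illustration of the worklist mechanics, the sketch below reorders studies by a triage model’s suspicion score so the most worrisome cases surface first; the accession numbers, findings, and scores are hypothetical.

```python
# Minimal sketch only: reordering a reading worklist by an AI triage score.
# Accession numbers, findings, and scores below are made up for illustration.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class WorklistItem:
    priority: float                       # negative score, so higher scores pop first
    accession: str = field(compare=False)
    finding: str = field(compare=False)

def build_worklist(triage_results):
    """triage_results: iterable of (accession, suspected_finding, model_score)."""
    heap = []
    for accession, finding, score in triage_results:
        heapq.heappush(heap, WorklistItem(-score, accession, finding))
    return heap

worklist = build_worklist([
    ("ACC-1001", "suspected intracranial hemorrhage", 0.97),
    ("ACC-1002", "no acute finding", 0.05),
    ("ACC-1003", "suspected pulmonary embolism", 0.88),
])
while worklist:
    item = heapq.heappop(worklist)
    print(f"{item.accession}: {item.finding} (score={-item.priority:.2f})")
```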

However, Harvard Medical School researchers caution that human-algorithm teamwork doesn’t work for every radiologist. While some radiologists accept helpful suggestions, others are distracted by them. Their multisite study indicated that training and interface design mattered as much as model accuracy, and that integrations must be tailored to the clinician-AI partnership to get the desired result.

Turns Raw Pixels into Precise Diagnoses

The FDA cleared the first AI imaging tool capable of predicting a woman’s breast cancer risk over the next five years using a standard 2D mammogram. Unlike current risk models that rely on a patient’s age and family history of breast cancer, the Clarity Breast platform uses advanced AI to analyze the mammogram itself, looking for subtle patterns in the breast tissue that could indicate the future development of breast cancer.

These mammograms may look perfectly normal to the human eye, but AI analysis can provide advance warning that could make a big difference. Armed with this information, patients can take a more proactive approach to their cancer screenings and follow-up care before actual signs of the disease even appear. By moving beyond detection to prevention, AI can help healthcare professionals save more lives. The Clarity Breast system is anticipated to launch in late 2025.

Extracts More Data with Fewer Biopsies

The human eye mostly sees shades of gray within each 3D pixel or voxel in a CT or MRI scan, but AI can measure dozens of properties inside every voxel. These measurements include how bright it is, whether the surface appears rough or smooth, how irregular its shape appears, and many other factors. Collectively, the thousands of measurements AI compiles are called radiomic features.
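As a small illustration of what such measurements look like, the sketch below computes a few first-order features over a segmented region with NumPy. Real radiomics pipelines extract hundreds of standardized features; the volume and mask here are synthetic.

```python
# Minimal sketch only: a few first-order "radiomic" measurements over a lesion mask.
# Real pipelines compute hundreds of standardized features; this data is synthetic.
import numpy as np

def first_order_features(volume: np.ndarray, mask: np.ndarray) -> dict:
    """volume: 3D array of voxel intensities; mask: boolean array marking the lesion."""
    roi = volume[mask]
    hist, _ = np.histogram(roi, bins=32)
    p = hist[hist > 0] / hist.sum()
    return {
        "mean_intensity": float(roi.mean()),
        "intensity_std": float(roi.std()),        # a crude texture proxy
        "entropy": float(-(p * np.log2(p)).sum()),
        "voxel_count": int(mask.sum()),            # a crude size proxy
    }

# Synthetic 64x64x64 volume with a brighter "lesion" in the center.
rng = np.random.default_rng(0)
volume = rng.normal(0.0, 1.0, (64, 64, 64))
mask = np.zeros_like(volume, dtype=bool)
mask[24:40, 24:40, 24:40] = True
volume[mask] += 3.0
print(first_order_features(volume, mask))
```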

The National Cancer Institute’s (NCI) Quantitative Imaging Network explains that radiomics uses AI to automatically quantify radiographic characteristics of the tumor phenotype, turning pictures into objective data points that clinicians can analyze much like lab values. Why does this matter?

  • Fewer needle biopsies for patients: Because radiomic patterns often mirror underlying gene mutations or treatment response, researchers funded by NCI’s Early Detection Research Network are validating image-based “virtual biopsies” that let oncologists gauge how a tumor is behaving without repeatedly sampling tissue.
  • Earlier, more personal treatment choices: By comparing a new scan’s feature set with thousands stored in the NCI’s Imaging Data Commons, algorithms can suggest whether a cancer is aggressive or likely to respond to a specific drug, helping doctors tailor therapy sooner and sparing patients ineffective regimens.
  • Objective progress reports for radiologists: Instead of eyeballing size changes, radiologists can track precise texture or shape shifts from visit to visit. Stable numbers signal a treatment that’s working, while sudden jumps warn the care team to adjust.

In short, radiomics turns medical images into quantifiable biomarkers that doctors can follow much like blood tests, providing patients with gentler care and radiologists with sharper decision-making tools.

Implementation and Concerns

Integrating AI into the Imaging Workflow

Beyond detection, new platforms draft structured reports, check follow-up guidelines and pre-populate key images. RSNA’s Radiology journal details large-language-model (LLM) assistants that convert dictation into error-free prose and auto-insert impression bullet points.

Some studies indicate that implementing AI and LLM tools can reduce errors and cut reporting time by up to 30%. Offloading mundane tasks, such as transcribing notes, to AI dictation tools has also been shown to reduce clinician burnout.
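A minimal sketch of how such an assistant might be wired together is shown below. The prompt template is invented for illustration, and call_llm is a hypothetical placeholder for whichever approved model endpoint a department actually deploys.

```python
# Minimal sketch only: drafting a structured report from free-form dictation.
# The template is illustrative and `call_llm` is a hypothetical placeholder,
# not a real vendor API.
REPORT_PROMPT = """You are drafting a radiology report. Rewrite the dictation below
into sections titled INDICATION, FINDINGS, and IMPRESSION. Do not add any finding
that is not present in the dictation.

Dictation:
{dictation}
"""

def call_llm(prompt: str) -> str:
    """Placeholder: swap in the department's approved LLM client here."""
    raise NotImplementedError("connect to an approved, HIPAA-compliant endpoint")

def draft_structured_report(dictation: str) -> str:
    # The draft still goes to a radiologist for review and sign-off.
    return call_llm(REPORT_PROMPT.format(dictation=dictation))
```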

Because so many tools are commercially available, medical professionals and departments should make comprehensive comparisons before integrating any AI tool into the imaging workflow. Compare features, accuracy, validation cohorts for each model, regulatory status, and other vital aspects to ensure you’re purchasing a reputable product that will improve your department’s performance.

Building Trust with Transparent Algorithms

Massive datasets of CT, X-ray, and MRI scans are used to train AI tools to analyze images and make predictions more proficiently, which could help doctors make earlier diagnoses and develop more effective treatment plans for better patient outcomes. However, AI can magnify inequity if trained on biased data. NIBIB stresses that models must perform equally well across demographic groups.

MIT scientists also reported that networks most accurate at predicting race or gender from X-rays also displayed the widest gaps in fairness, potentially leading to inaccurate results for women and people of color. These scientists urged caution when adding unlabeled web images to training sets. Transparent outputs encourage adoption and simplify error investigation.
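One concrete way departments can act on this is a subgroup audit: compute the model’s sensitivity separately for each demographic group and flag large gaps before deployment. The sketch below uses synthetic labels, predictions, and group names.

```python
# Minimal sketch only: per-group sensitivity (recall) as a basic fairness check.
# Labels, predictions, and group names are synthetic.
from collections import defaultdict

def sensitivity_by_group(y_true, y_pred, groups):
    """Inputs are equal-length sequences; labels are 0/1."""
    tp, fn = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            (tp if pred == 1 else fn)[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

print(sensitivity_by_group(
    y_true=[1, 1, 1, 1, 1, 1],
    y_pred=[1, 1, 1, 1, 0, 0],
    groups=["A", "A", "A", "B", "B", "B"],
))
# A large gap between groups (here A=1.0 vs B=0.33) is a red flag before deployment.
```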

Data Privacy and Cybersecurity Concerns

AI thrives on data volume, but the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) set strict boundaries. Federated learning offers a compromise, sending algorithms to the data rather than data to the cloud to preserve data privacy.
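A minimal sketch of the core aggregation step, often called federated averaging (FedAvg), is shown below; the hospital dataset sizes and parameter vectors are synthetic stand-ins, not a real deployment.

```python
# Minimal sketch only: federated averaging. Each hospital trains locally and shares
# only its model parameters; the server averages them, weighted by dataset size.
# The sites and numbers below are synthetic.
import numpy as np

def federated_average(site_params, site_sizes):
    """Weighted mean of per-site parameter vectors, weighted by local dataset size."""
    stacked = np.stack(site_params)            # (num_sites, num_params)
    weights = np.array(site_sizes, dtype=float)
    weights /= weights.sum()                   # each site's relative contribution
    return (stacked * weights[:, None]).sum(axis=0)

local_models = [np.array([0.1, 0.2, 0.3, 0.4]),   # hospital A's update
                np.array([0.2, 0.1, 0.4, 0.3]),   # hospital B's update
                np.array([0.0, 0.3, 0.2, 0.5])]   # hospital C's update
global_model = federated_average(local_models, site_sizes=[1200, 800, 2000])
print(global_model)   # only parameter updates, never patient images, leave a site
```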

The FDA’s 2024 guidance, particularly its finalized guidance on Predetermined Change Control Plans (PCCP) for medical devices, promotes a framework for managing AI-enabled medical devices that aligns with the principles of privacy-preserving pipelines. This framework emphasizes data management, documentation, and the need to demonstrate continued safety and effectiveness throughout the product lifecycle.

Hospitals harden their networks because an AI algorithm can only be trusted if its inputs are authentic, meaning they’re uncorrupted and not tampered with internally or externally. Zero-trust architectures and real-time Digital Imaging and Communications in Medicine (DICOM) hashing are now appearing in many Requests for Proposals (RFPs) for AI-enabled Picture Archiving and Communication Systems (PACS) to ensure diagnostic accuracy, protect patient data, and build a secure healthcare ecosystem.
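As a small illustration of the integrity piece, the sketch below compares a stored DICOM file’s SHA-256 digest against the value recorded when the study was archived; the path and digest are hypothetical, and real deployments layer this with transport encryption and digital signatures.

```python
# Minimal sketch only: detecting tampering by re-hashing a DICOM file and comparing
# it with a digest recorded at archive time. Paths and digests are hypothetical.
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dicom_integrity(path: Path, expected_digest: str) -> bool:
    """True only if the on-disk file still matches the digest stored at archive time."""
    return sha256_of_file(path) == expected_digest
```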

What’s Next in Artificial Intelligence

Foundation Models and Multimodal AI Tools

Large vision-language models pre-trained on billions of clinical images promise one network for every modality. Harvard recently unveiled the Clinical Histopathology Imaging Evaluation Foundation (CHIEF), a foundation model that reads whole-slide pathology images, detects multiple cancers with nearly 94% accuracy, and predicts patient survival. CHIEF outperforms other task-specific AI methods by up to 36%.

Similar work integrates CT volumes with radiology reports, lab data, and genetic profiles, advancing imaging toward an integrated digital twin of each patient. Generative models also open new prospects for studying rare diseases and developing treatments. They help overcome data scarcity by simulating rare conditions for research, augmenting small datasets, and creating photorealistic phantoms to test safety without exposing patients to radiation.

Education Must Keep Pace with Innovation

Training programs are evolving so that tomorrow’s radiologists write prompts as confidently as protocols. To help radiologists and other healthcare professionals stay aligned with the advances of AI in medicine, many colleges and universities offer courses specifically on this topic. Whether through graduate degrees, certification programs, or continuing education, you’ll find numerous pathways to ensure your healthcare education keeps pace with AI innovations.


Human Expertise Amplified, Not Replaced

AI already speeds scans, spots abnormalities, and drafts reports, but its most significant impact lies in freeing clinicians for nuanced decisions and patient conversations. While technical hurdles, such as bias, privacy issues, and interoperability, are legitimate concerns, collaborative regulation and open science address them head-on.

As foundation models mature and datasets grow more diverse, algorithms will shift medical imaging from pattern recognition to quantitative, predictive precision. Radiologists who embrace this partnership won’t be sidelined. Instead, they’ll lead a data-rich era where every image informs better care.

This story was produced by Vivian Health and reviewed and distributed by Stacker.





Apple's top executive in charge of artificial intelligence models, Ruoming Pang, is leaving for Meta – Bloomberg News – MarketScreener



Intro robotics students build AI-powered robot dogs from scratch


Equipped with a starter robot hardware kit and cutting-edge lessons in artificial intelligence, students in CS 123: A Hands-On Introduction to Building AI-Enabled Robots are mastering the full spectrum of robotics – from motor control to machine learning. Now in its third year, the course has students build and enhance an adorable quadruped robot, Pupper, programming it to walk, navigate, respond to human commands, and perform a specialized task that they showcase in their final presentations.

The course, which evolved from an independent study project led by Stanford’s robotics club, is now taught by Karen Liu, professor of computer science in the School of Engineering, in addition to Jie Tan from Google DeepMind and Stuart Bowers from Apple and Hands-On Robotics. Throughout the 10-week course, students delve into core robotics concepts, such as movement and motor control, while connecting them to advanced AI topics.

“We believe that the best way to help and inspire students to become robotics experts is to have them build a robot from scratch,” Liu said. “That’s why we use this specific quadruped design. It’s the perfect introductory platform for beginners to dive into robotics, yet powerful enough to support the development of cutting-edge AI algorithms.”

What makes the course especially approachable is its low barrier to entry – students need only basic programming skills to get started. From there, the students build up the knowledge and confidence to tackle complex robotics and AI challenges.

Robot creation goes mainstream

Pupper evolved from Doggo, built by the Stanford Student Robotics club to offer people a way to create and design a four-legged robot on a budget. When the team saw the cute quadruped’s potential to make robotics both approachable and fun, they pitched the idea to Bowers, hoping to turn their passion project into a hands-on course for future roboticists.

“We wanted students who were still early enough in their education to explore and experience what we felt like the future of AI robotics was going to be,” Bowers said.

This current version of Pupper is more powerful and refined than its predecessors. It’s also irresistibly adorable and easier than ever for students to build and interact with.

“We’ve come a long way in making the hardware better and more capable,” said Ankush Kundan Dhawan, one of the first students to take the Pupper course in the fall of 2021 before becoming its head teaching assistant. “What really stuck with me was the passion that instructors had to help students get hands-on with real robots. That kind of dedication is very powerful.”

Code come to life

Building a Pupper from a starter hardware kit blends different types of engineering, including electrical work, hardware construction, coding, and machine learning. Some students even produced custom parts for their final Pupper projects. The course pairs weekly lectures with hands-on labs. Lab titles like Wiggle Your Big Toe and Do What I Say keep things playful while building real skills.

CS 123 students ready to show off their Pupper’s tricks. | Harry Gregory

Over the initial five weeks, students are taught the basics of robotics, including how motors work and how robots can move. In the next phase of the course, students add a layer of sophistication with AI. Using neural networks to improve how the robot walks, sees, and responds to the environment, they get a glimpse of state-of-the-art robotics in action. Many students also use AI in other ways for their final projects.

“We want them to actually train a neural network and control it,” Bowers said. “We want to see this code come to life.”
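For a sense of what “train a neural network and control it” can look like at this scale, here is a minimal sketch of a small policy network that maps sensor observations to joint targets; the observation size, action size, and layer widths are assumptions for illustration, not the course’s actual code.

```python
# Minimal sketch only: a small policy network for a quadruped like Pupper.
# Observation and action dimensions here are assumptions, not the course's code.
import torch
import torch.nn as nn

class QuadrupedPolicy(nn.Module):
    def __init__(self, obs_dim: int = 30, action_dim: int = 12, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),   # one target angle per joint
        )

    def forward(self, observation: torch.Tensor) -> torch.Tensor:
        return self.net(observation)

policy = QuadrupedPolicy()
obs = torch.randn(1, 30)      # e.g., body orientation, joint angles, velocity command
joint_targets = policy(obs)
print(joint_targets.shape)    # torch.Size([1, 12])
```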

By the end of the quarter this spring, students were ready for their capstone project, called the “Dog and Pony Show,” where guests from NVIDIA and Google were present. Six teams had Pupper perform creative tasks – including navigating a maze and fighting a (pretend) fire with a water pick – surrounded by the best minds in the industry.

“At this point, students know all the essential foundations – locomotion, computer vision, language – and they can start combining them and developing state-of-the-art physical intelligence on Pupper,” Liu said.

“This course gives them an overview of all the key pieces,” said Tan. “By the end of the quarter, the Pupper that each student team builds and programs from scratch mirrors the technology used by cutting-edge research labs and industry teams today.”

All ready for the robotics boom

The instructors believe the field of AI robotics is still gaining momentum, and they’ve made sure the course stays current by integrating new lessons and technology advances nearly every quarter.


This Pupper was mounted with a small water jet to put out a pretend fire. | Harry Gregory

Students have responded to the course with resounding enthusiasm, and the instructors expect that interest in robotics – at Stanford and in general – will continue to grow. They hope to expand the course, and that the community they’ve fostered through CS 123 can contribute to this engaging and important discipline.

“The hope is that many CS 123 students will be inspired to become future innovators and leaders in this exciting, ever-changing field,” said Tan.

“We strongly believe that now is the time to make the integration of AI and robotics accessible to more students,” Bowers said. “And that effort starts here at Stanford and we hope to see it grow beyond campus, too.”





Why Infuse Asset Management’s Q2 2025 Letter Signals a Shift to Artificial Intelligence and Cybersecurity Plays


The rapid evolution of artificial intelligence (AI) and the escalating complexity of cybersecurity threats have positioned these sectors as the next frontier of investment opportunity. Infuse Asset Management’s Q2 2025 letter underscores this shift, emphasizing AI’s transformative potential and the urgent need for robust cybersecurity infrastructure to mitigate risks. Below, we dissect the macroeconomic forces, sector-specific tailwinds, and portfolio reallocation strategies investors should consider in this new paradigm.

The AI Uprising: Macro Drivers of a Paradigm Shift

The AI revolution is accelerating at a pace that dwarfs historical technological booms. Take ChatGPT, which reached 800 million weekly active users by April 2025—a milestone achieved in just two years. This breakneck adoption is straining existing cybersecurity frameworks, creating a critical gap between innovation and defense.

Meanwhile, the U.S.-China AI rivalry is fueling a global arms race. China’s industrial robot installations surged from 50,000 in 2014 to 290,000 in 2023, outpacing U.S. adoption. This competition isn’t just about economic dominance—it’s a geopolitical chess match where data sovereignty, espionage, and AI-driven cyberattacks now loom large. The concept of “Mutually Assured AI Malfunction (MAIM)” highlights how even a single vulnerability could destabilize critical systems, much like nuclear deterrence but with far less predictability.

Cybersecurity: The New Infrastructure for an AI World

As AI systems expand into physical domains—think autonomous taxis or industrial robots—so do their vulnerabilities. In San Francisco, autonomous taxi providers now command 27% market share, yet their software is a prime target for cyberattacks. The decline in AI inference costs (outpacing historical declines in electricity and memory) has made it cheaper to deploy AI, but it also lowers the barrier for malicious actors to weaponize it.


Tech giants are pouring capital into AI infrastructure—NVIDIA and Microsoft alone increased CapEx from $33 billion to $212 billion between 2014 and 2024. This influx creates a vast, interconnected attack surface. Investors should prioritize cybersecurity firms that specialize in quantum-resistant encryption, AI-driven threat detection, and real-time infrastructure protection.

The Human Element: Skills Gaps and Strategic Shifts

The demand for AI expertise is soaring, but the workforce is struggling to keep pace. U.S. AI-related IT job postings have surged 448% since 2018, while non-AI IT roles have declined by 9%. This bifurcation signals two realities:
1. Cybersecurity skills are now mission-critical for safeguarding AI systems.
2. Ethical AI development and governance are emerging as compliance priorities, particularly in regulated industries.

These trends will likely continue to diverge, reinforcing the need for investors to back training platforms and cybersecurity firms bridging this skills gap.

Portfolio Reallocation: Where to Deploy Capital

Infuse’s insights suggest three actionable strategies:

  1. Core Holdings in Cybersecurity Leaders:
    Target firms like CrowdStrike (CRWD) and Palo Alto Networks (PANW), which excel in AI-powered threat detection and endpoint security.

  2. Geopolitical Plays:
    Invest in companies addressing data sovereignty and cross-border compliance, such as Palantir (PLTR) or Cloudflare (NET), which offer hybrid cloud solutions.

  3. Emerging Sectors:
    Look to quantum computing security (e.g., Rigetti Computing (RGTI)) and AI governance platforms like DataRobot, which help enterprises audit and validate AI models.

The Bottom Line: AI’s Growth Requires a Security Foundation

The “productivity paradox” of AI—where speculative valuations outstrip tangible ROI—is real. Yet, cybersecurity is one area where returns are measurable: breaches cost companies millions, and defenses reduce risk. Investors should treat cybersecurity as the bedrock of their AI investments.

As Infuse’s letter implies, the next decade will belong to those who balance AI’s promise with ironclad security. Position portfolios accordingly.

JR Research


