
AI Research

NCCN Policy Summit Examines the Potential of Artificial Intelligence to Transform Cancer Care

In a landmark gathering held in Washington, D.C., on September 9, 2025, the National Comprehensive Cancer Network® (NCCN®) convened a forward-looking Policy Summit focused exclusively on the burgeoning role of artificial intelligence (AI) in oncology. This summit, hosted by one of the world’s foremost coalitions of cancer centers committed to advancing patient care, research, and education, assembled a distinguished consortium of experts—spanning oncologists, data scientists, patient advocates, and healthcare policymakers—to dissect the current capabilities, ethical challenges, and transformative potential AI holds for cancer treatment and management.

The core of the discussion revolved around the accelerating integration of AI-driven tools within oncology practices and the critical juncture at which the medical community finds itself. Dr. Travis Osterman, an eminent figure in cancer clinical informatics at Vanderbilt-Ingram Cancer Center and a key voice in the NCCN Digital Oncology Forum, eloquently positioned this moment as an inflection point. According to him, the timely establishment of regulatory frameworks and thoughtfully crafted policy guardrails will be decisive in ensuring that AI enhances rather than disrupts the clinical workflow, patient safety, and care efficacy. Osterman underscored that the decisions made today will set the trajectory for AI’s sustainable incorporation into oncological care paradigms for years to come.

Despite the cautious optimism permeating the summit, leading authorities emphasized a pragmatic approach to adoption. William Walders, Executive Vice President and Chief Digital and Information Officer at The Joint Commission, articulated the present reality: AI technologies are neither speculative nor distant prospects but active components in contemporary oncology. Tools powered by machine learning are already instrumental in early disease detection, guiding treatment personalization, and alleviating administrative burdens on clinicians. Walders identified a critical necessity—designing safeguards and trust-building mechanisms that protect patients and reinforce the humanistic core of oncological care, ensuring that AI functions as a complementary force rather than a replacement for human judgment.

The rapid pace of AI model development was a recurring theme, with speakers drawing parallels to revolutionary milestones in medical history. The shift from paper-based to electronic medical records (EMRs) was invoked as a historical analogue, exemplifying how profound technological shifts can both disrupt and enhance clinical workflows. Summit participants conveyed palpable excitement for AI’s promise—not only in streamlining clinical operations but also in addressing the pressing crisis of workforce shortages in oncology and accelerating the pipeline of novel therapeutic discoveries.

Dr. Jorge Reis-Filho, Chief AI and Data Scientist at AstraZeneca’s Oncology R&D division, emphasized the unprecedented opportunities enabled by recent advances in multimodal foundation models and agentic AI. Such models, capable of synthesizing diverse data streams—from genomic sequences to imaging and clinical records—hold the potential to revolutionize biomarker discovery and refine the biological understanding of malignancies. This integrative, multi-dimensional data analysis could markedly improve precision oncology, tailoring interventions to the unique molecular signatures of individual tumors and enhancing therapeutic outcomes.

Clinical trial innovation also emerged as a pivotal area poised for AI-driven disruption. According to Dr. Danielle Bitterman of Mass General Brigham, AI’s ability to dismantle geographical and logistical barriers could democratize clinical trial access, extending life-saving investigational therapies to patients irrespective of their physical proximity to research centers. Moreover, the automation and simplification of complex trial protocols, powered by AI decision-support systems, promise to reduce trial inefficiencies and improve data integrity, thus hastening the translational journey from bench to bedside.

The interdisciplinary nature of AI’s integration into oncology was a key focal point, with calls for strengthened collaborations between oncologists and computer scientists. This partnership is anticipated to catalyze advances by ensuring that AI tools are pragmatically aligned with clinical realities and patient-centered objectives. Such synergy is viewed as indispensable for overcoming technical hurdles and ethical concerns alike, facilitating co-design processes that marry computational innovation with frontline clinical insight.

MIT’s Regina Barzilay, a prominent AI and health engineering professor, voiced a note of urgency amid the excitement. She warned that the actual uptake of AI-driven diagnostics and therapeutics lags behind technological capabilities. Barzilay advocated explicitly for the development and implementation of clinical guidelines that would mandate or incentivize the use of validated AI tools, thus accelerating their translation into routine patient care and overcoming institutional inertia and skepticism.

While the enthusiasm for AI’s potential was palpable, participants did not shy away from less optimistic perspectives. Significant challenges remain in implementing quality control and accreditation processes for AI algorithms in a manner that is rigorous yet not prohibitively burdensome. Furthermore, consensus on appropriate governmental and regulatory oversight remains elusive, creating a landscape of uncertainty that may stifle innovation or, contrarily, risk accelerating adoption without adequate safeguards.

The summit also highlighted the importance of fostering collaboration between medical practitioners and technology developers to optimize AI deployment. There is widespread recognition that neither domain can succeed in isolation. Successful AI applications hinge on intricate, real-world datasets and clinical insight, balanced with robust algorithmic validation and transparent, explainable models that clinicians trust and understand.

Interoperability was another pressing topic, as AI’s benefits can be undermined without seamless integration across heterogeneous healthcare IT systems. Fragmented platforms, inconsistent data standards, and siloed information flow impede AI’s ability to provide comprehensive decision support, reinforcing the need for unified frameworks and data-sharing protocols.

Equity considerations received significant attention. Summit attendees expressed concern that AI deployment risks exacerbating existing disparities in cancer care, particularly among under-resourced populations. Technology gaps within healthcare systems and patient communities could widen unless deliberate strategies are undertaken to ensure universal access, culturally competent design, and bias mitigation within AI algorithms.

Moreover, the essential human element in oncology care—the nuanced, empathetic clinician-patient relationship—must be preserved. AI systems, while powerful, are vulnerable to errors, misinterpretations, and intrinsic biases inherent in training datasets. Sustaining the human touch remains paramount, and AI must be positioned as an augmentative tool that supports, rather than supplants, clinical expertise and judgment.

Allen Rush, co-founder of the Jacqueline Rush Lynch Syndrome Cancer Foundation, encapsulated the summit’s consensus by emphasizing the need to look beyond medical silos. He advocated for partnerships leveraging expertise from non-medical industries, particularly those with deep experience in AI and adaptive systems. By “teaming up” to co-develop and fine-tune AI applications, the oncology community could unlock unprecedented possibilities in early cancer detection and personalized treatment.

The Policy Summit is part of a broader NCCN effort to promote dialogue and education around AI’s role in oncology, with related sessions conducted during the NCCN 2025 Annual Conference. Upcoming events, such as the December 2025 Patient Advocacy Summit focusing on veterans and first responders, continue this momentum.

As AI continues its rapid evolution, the oncology community stands at a crossroads. The coming years will be critical in shaping a future where machine intelligence complements human compassion, improving cancer outcomes through precision, efficiency, and equity. The commitment demonstrated at this summit signals a readiness to navigate technical challenges and ethical considerations alike, ensuring that AI’s integration into cancer care is both responsible and revolutionary.

Subject of Research: Artificial Intelligence in Cancer Care and Oncology Policy

Article Title: NCCN Oncology Policy Summit Explores Cutting-Edge AI Innovations Set to Transform Cancer Care

News Publication Date: September 9, 2025

Web References:
https://www.nccn.org/business-policy/policy-and-advocacy-program/oncology-policy-summits
https://www.nccn.org/conference

Image Credits: NCCN

Keywords: Artificial intelligence, Generative AI, Machine learning, Electronic medical records, Medical technology, Cancer policy, Cancer treatments, Oncology, Cancer, Cancer screening

Tags: AI integration in clinical workflows, AI-driven cancer treatment, artificial intelligence in oncology, ethical challenges in AI healthcare, future of cancer management with AI, healthcare policymakers and AI, NCCN Policy Summit 2025, oncology practice transformation, patient advocates in oncology, patient safety in cancer care, regulatory frameworks for AI in medicine, sustainable AI incorporation in healthcare





Spotlab.ai hiring AI research scientist for multimodal diagnostics and global health

In a LinkedIn post, Miguel Luengo-Oroz, co-founder and CEO of Spotlab.ai, confirmed the company is hiring an Artificial Intelligence Research Scientist. The role is aimed at early career researchers, postdoctoral candidates, and recent PhD graduates in AI.

Luengo-Oroz writes: “Are you a young independent researcher, postdoc, just finished your PhD (or on the way there) in AI and wondering what’s next? If you’re curious, ready to tackle tough scientific and technical challenges, and want to build AI for something that matters, this might be for you.”

Spotlab.ai targets diagnostics role with new hire

The position will focus on building and deploying multimodal AI solutions for diagnostics and biopharma research. Applications include blood cancers and neglected tropical diseases.

The scientist will be expected to organize and prepare biomedical datasets, train and test AI models, and deploy algorithms in real-world conditions. The job description highlights interaction with medical specialists and product managers, as well as drafting technical documentation. Scientific publications are a priority, with the candidate expected to contribute across the research cycle from experiment planning to peer review.

Spotlab.ai is looking for candidates with experience in areas such as biomedical image processing, computer vision, NLP, video processing, and large language models. Proficiency in Python and deep learning frameworks including TensorFlow, Keras, and PyTorch is required, with GPU programming experience considered an advantage.

Company positions itself in global health AI

Spotlab.ai develops multimodal AI for diagnostics and biopharma research, with projects addressing gaps in hematology, infectious diseases, and neglected tropical diseases. The Madrid-based startup team combines developers, engineers, doctors, and business managers, with an emphasis on gender parity and collaboration across disciplines.

CEO highlights global mission

Alongside the job listing, Luengo-Oroz underscored the company’s broader mission. A former Chief Data Scientist at the United Nations, he has worked on technology strategies in areas ranging from food security to epidemics and conflict prevention. He is also the inventor of MalariaSpot.org, a collective intelligence videogame for malaria diagnosis.

Luengo-Oroz writes: “Take the driver’s seat of our train (not just a minion) at key stages of the journey, designing AI systems and doing science at Champions League level from Madrid.”





YARBROUGH: A semi-intelligent look at artificial intelligence – Rockdale Citizen



Rice University creative writing course introduces artificial intelligence (AI)

Ian Schimmel teaches the new AI fiction course. The course invites writers to incorporate or resist the influence of AI in creative writing.

Courtesy Brandi Smith

By Abigail Chiu
9/9/25 10:29pm

Rice is bringing generative artificial intelligence into the creative writing world with this fall’s new course, “ENGL 306: AI Fictions.” Ian Schimmel, an associate teaching professor in the English and creative writing department, said he teaches the course to help students think critically about technology and consider the ways that AI models could be used in the creative processes of fiction writing.

The course is structured for any level of writer and also includes space to both incorporate and resist the influence of AI, according to its description. 

“In this class, we never sit down with ChatGPT and tell it to write us a story and that’s that,” Schimmel wrote in an email to the Thresher. “We don’t use it to speed up the artistic process, either. Instead, we think about how to incorporate it in ways that might expand our thinking.”



Schimmel said he was stunned by the capabilities of ChatGPT when it was initially released in 2022, wondering if it truly possessed the ability to write. He said he found that the topic generated more questions than answers. 

The next logical step, for Schimmel, was to create a course centered on exploring the complexities of AI and fiction writing, with assigned readings ranging from New York Times opinion pieces critical of its usage to an AI-generated poetry collection.  

Schimmel said both students and faculty share concerns about how AI can help or harm academic progress and potentially cripple human creativity.

“Classes that engage students with AI might be some of the best ways to learn about what these systems can and cannot do,” Schimmel wrote. “There are so many things that AI is terrible at and incapable of. Seeing that firsthand is empowering. Whenever it hallucinates, glitches or makes you frustrated, you suddenly remember: ‘Oh right — this is a machine. This is nothing like me.’”

“Fear is intrinsic to anything that shakes industry like AI is doing,” Robert Gray, a Brown College senior, wrote in an email to the Thresher. “I am taking this class so that I can immerse myself in that fear and learn how to navigate these new industrial landscapes.”

The course approaches AI from a fluid perspective that evolves as the class reads and writes more with the technology, Schimmel said. Students’ answers to the complex ethical questions surrounding AI usage evolve along with it.

“At its core, the technology is fundamentally unethical,” Schimmel wrote. “It was developed and enhanced, without permission, on copyrighted text and personal data and without regard for the environment. So in that failed historical context, the question becomes: what do we do now? Paradoxically, the best way for us to formulate and evidence arguments against this technology might be to get to know it on a deep and personal level.”

Generative AI is often criticized on ethical grounds, such as the energy and water its data centers demand to function, and the training of models on datasets of existing copyrighted works.

Amazon and Google-backed Anthropic recently settled a class-action lawsuit with a group of U.S. authors who accused the company of using millions of pirated books to train its Claude chatbot to respond to human prompts.

With the assistance of AI, students will be able to attempt large-scale projects that typically would not be possible within a single semester, according to the course overview. AI will accelerate the writing process for drafting a book outline, and students can “collaborate” with AI to write the opening chapters of a novel for NaNoWriMo, a worldwide writing event held every November where participants would produce a 50,000-word first draft of a novel.

NaNoWriMo, short for National Novel Writing Month, announced its closing after more than 20 years in spring 2025. It received widespread press coverage for a statement released in 2024 that said condemnation of AI in writing “has classist and ableist undertones.” Many authors spoke out against the perceived endorsement of using generative AI for writing and the implication that disabled writers would require AI to produce work.

Each weekly class involves experimentation in dialogues and writing sessions with ChatGPT, with Schimmel and his students acknowledging the unknown and unexplored within AI and especially the visual and literary arts. Aspects of AI, from creative copyrights to excessive water usage to its accuracy as an editor, were discussed in one Friday session in the Wiess College classroom.

“We’re always better off when we pay attention to our attention. If there’s a topic (or tech) that creates worry, or upset, or raises difficult questions, then that’s a subject that we should pursue,” Schimmel wrote. “It’s in those undefined, sometimes uncomfortable places where we humans do our best, most important learning.”





