AI Research

Artificial intelligence predicts future directions in quantum science – Physics World


AI Research

NCCN Policy Summit Explores Whether Artificial Intelligence Can Transform Cancer Care Safely and Fairly

WASHINGTON, D.C. [September 9, 2025] — Today, the National Comprehensive Cancer Network® (NCCN®)—an alliance of leading cancer centers devoted to patient care, research, and education—hosted a Policy Summit exploring where artificial intelligence (AI) currently stands as a tool for improving cancer care, and where it may be going in the future. Subject matter experts, including patients and advocates, clinicians, and policymakers, weighed in on where they saw emerging success and also reasons for concern.

Travis Osterman, DO, MS, FAMIA, FASCO, Director of Cancer Clinical Informatics at Vanderbilt-Ingram Cancer Center (a member of the NCCN Digital Oncology Forum), delivered a keynote address, stating: “Because of AI, we are at an inflection point in how technology supports the delivery of care to our patients with cancer. Thoughtful regulation can help us integrate these tools into everyday practice in ways that improve care delivery and support oncology practices. The decisions we make now will determine how AI innovations serve our patients and impact clinicians for years to come.”

Many speakers took a cautiously optimistic tone on AI, rooted in pragmatism.

“AI isn’t the future of cancer care… it’s already here, helping detect disease earlier, guide personalized treatment, and reduce clinical burdens,” said William Walders, Executive Vice President, Chief Digital and Information Officer, The Joint Commission. “To fully realize the promise of AI in oncology, we must implement thoughtful guardrails that not only build trust but actively safeguard patient safety and uphold the highest standards of care. At Joint Commission, our mission is to shape policy and guidance that ensures AI complements, never compromises, the human touch. These guardrails are essential to prevent unintended consequences and to ensure equitable, high-quality outcomes for all.”

Panelists noted the speed at which AI models are evolving. Some compared AI's potential to previous advances in care, such as the leap from paper to electronic medical records. Many expressed excitement over its possibilities for improving efficiency, supporting an overburdened oncology workforce, and accelerating the pursuit of new cures.

“Artificial intelligence is transforming every industry, and oncology is no exception,” stated Jorge Reis-Filho, MD, PhD, FRCPath, Chief AI and Data Scientist, Oncology R&D, AstraZeneca. “With the advent of multimodal foundation models and agentic AI, there are unique opportunities to propel clinical development, empowering researchers and clinicians with the ability to generate a more holistic understanding of disease biology and develop the next generation of biomarkers to guide decision making.”

“AI has enormous potential to optimize cancer outcomes by making clinical trials accessible to patients regardless of their location and by simplifying complex trial processes for patients and research teams alike. I am looking forward to new approaches for safe evaluation and implementation so that we can effectively and responsibly use AI to gain maximum insight from every piece of patient data and drive progress,” commented Danielle Bitterman, MD, Clinical Lead for Data Science/AI, Mass General Brigham.

She continued: “As AI becomes integrated into clinical practice, stronger collaborations between oncologists and computer scientists will catalyze advances and will be key to directly addressing the most urgent challenges in cancer care.”

Regina Barzilay, PhD, School of Engineering Distinguished Professor for AI and Health, MIT, expressed her concern that adoption may not be moving quickly enough: “AI-driven diagnostics and treatment has potential to transform cancer outcomes. Unfortunately, today, these tools are not utilized enough in patient care. Guidelines could play a critical role in changing this status quo.”

She described specific AI technologies that she believes are ready to be implemented in patient care and urged that guidance keep pace with rapidly progressing technology.

Some panel participants raised potential challenges around AI adoption, including:

  • How to implement quality control, accreditation, and fact-checking in a way that is fair and not burdensome
  • How to determine appropriate governmental oversight
  • How medical and technology organizations can work together to best leverage the expertise of both
  • How to integrate functionality across various platforms
  • How to avoid increasing disparities and technology gaps
  • How to account for human error and bias while maintaining the human touch

“Many similar problems have been solved in different application environments,” concluded Allen Rush, PhD, MS, Co-Founder and Board Chairman, Jacqueline Rush Lynch Syndrome Cancer Foundation. “This will take teaming up with non-medical industry experts to find the best tools, fine-tune them, and apply ongoing learning. We need to ask the right questions and match them with the right AI platforms to unlock new possibilities for cancer detection and treatment.”

The topic of AI and cancer care was also featured in a plenary session during the NCCN 2025 Annual Conference. Visit NCCN.org/conference to view that session and others via the NCCN Continuing Education Portal.

Next up, on Tuesday, December 9, 2025, NCCN is hosting a Patient Advocacy Summit on addressing the unique cancer care needs of veterans and first responders. Visit NCCN.org/summits to learn more and register.

# # #

About the National Comprehensive Cancer Network

The National Comprehensive Cancer Network® (NCCN®) is marking 30 years as a not-for-profit alliance of leading cancer centers devoted to patient care, research, and education. NCCN is dedicated to defining and advancing quality, effective, equitable, and accessible cancer care and prevention so all people can live better lives. The NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®) provide transparent, evidence-based, expert consensus-driven recommendations for cancer treatment, prevention, and supportive services; they are the recognized standard for clinical direction and policy in cancer management and the most thorough and frequently-updated clinical practice guidelines available in any area of medicine. The NCCN Guidelines for Patients® provide expert cancer treatment information to inform and empower patients and caregivers, through support from the NCCN Foundation®. NCCN also advances continuing education, global initiatives, policy, and research collaboration and publication in oncology. Visit NCCN.org for more information.






AI Research

AI is redefining university research: here’s how

In the space of a decade, the public perception of artificial intelligence has gone from a set of parameters governing the behavior of video game characters to a catch-all solution for almost every problem in the workplace. While AI has yet to advance beyond smart speakers in the home, governments are embracing it, highlighting one of the key areas in which AI is impacting life: higher education.

It is in universities that AI has begun to fundamentally redefine both studies and research.




AI Research

Reckless Race for AI Market Share Forces Dangerous Products on Millions — With Fatal Consequences

WASHINGTON, DC — SEPTEMBER 4, 2025: OpenAI CEO Sam Altman attends a meeting of the White House Task Force on Artificial Intelligence Education in the East Room of the White House. (Photo by Chip Somodevilla/Getty Images)

In September 2024, Adam Raine used OpenAI’s ChatGPT like millions of other 16-year-olds — for occasional homework help. He asked the chatbot questions about chemistry and geometry, about Spanish verb forms, and for details about the Renaissance.

ChatGPT was always engaging, always available, and always encouraging — even when the conversations grew more personal, and more disturbing. By March 2025, Adam was spending four hours a day talking to the AI product, describing in increasing detail his emotional distress, suicidal ideation, and real-life instances of self-harm. ChatGPT, though, continued to engage — always encouraging, always validating.

By his final days in April, ChatGPT provided Adam with detailed instructions and explicit encouragement to take his own life. Adam’s mother found her son, hanging from a noose that ChatGPT had helped Adam construct.

Last month, Adam’s family filed a landmark lawsuit against ChatGPT developer OpenAI and CEO Sam Altman for negligence and wrongful death, among other claims. This tragedy represents yet another devastating escalation in AI-related harms — and underscores the deeply systemic nature of reckless design practices in the AI industry.

The Raine family’s lawsuit arrives less than a year after the public learned more about the dangers of AI “companion” chatbots thanks to the suit brought by Megan Garcia against Character.AI following the death of her son, Sewell. As policy director at the Center for Humane Technology, I served as a technical expert on both cases. Adam’s case is different in at least one critical respect — the harm was caused by the world’s most popular general-purpose AI product. ChatGPT is used by over 100 million people daily, with rapid expansion into schools, workplaces, and personal life.

Character.AI, the chatbot product Sewell used up until his untimely death, had been marketed as an entertainment chatbot platform, with characters that are intended to “feel alive.” ChatGPT, by contrast, has been sold as a highly personalizable productivity tool to help make our lives more efficient. Adam’s introduction to ChatGPT as a homework helper reflects that marketing.

But in trying to be the everything tool for everybody, ChatGPT has not been safely designed for the increasingly private and high-stakes interactions that it’s inevitably used for — including therapeutic conversations, questions around physical and mental health, relationship concerns, and more. OpenAI, however, continues to design ChatGPT to support and even encourage those very use cases, with hyper-validating replies, emotional language, and near-constant nudges for follow-up engagement.

We’re hearing reports about the consequences of these designs on a near-daily basis. People with body dysmorphia are spiraling after asking AI to rate their appearance; users are developing dangerous delusions that AI chatbots can seed and exacerbate; and individuals are being pushed toward mania and psychosis through their AI interactions. What connects these harms isn’t any specific AI chatbot, but fundamental flaws in how the entire industry is currently designing and deploying these products.

As the Raine family’s lawsuit states, OpenAI understood that capturing users’ emotional attachment — or in other words, their engagement — would lead to market dominance. And market dominance in AI means winning the race to become one of the most powerful companies in the world.

OpenAI’s pursuit of user engagement drove specific design choices that proved lethal in Adam’s case. Rather than simply answering homework questions in a closed-ended manner, ChatGPT was designed by OpenAI to ask follow-up questions and extend conversations. The chatbot positioned itself as Adam’s trusted “friend,” using first-person language and emotional validation to create the illusion of a genuine relationship.

The product took this intimacy to extreme lengths, eventually deterring Adam from confiding in his mother about his pain and suicidal thoughts. All the while, the system stored deeply personal details across conversations, using Adam’s darkest revelations to prolong future interactions, rather than provide Adam with the interventions he truly needed, including human support.

What makes this tragedy, along with other headlines we read in the news, so devastating is that the technology to prevent these horrific incidents already exists. AI companies possess sophisticated design capabilities that could identify safety concerns and respond appropriately. They could implement usage limits, disable anthropomorphic features by default, and redirect users toward human support when needed.

In fact, OpenAI already leverages such capabilities in other use cases. When a user prompts the chatbot for copyrighted content, ChatGPT shuts down the conversation. But the company has chosen not to implement meaningful protection for user safety in cases of mental distress and self-harm. ChatGPT does not stop engaging or redirect the conversation when a user is expressing mental distress, even when the underlying system itself is flagging concerns.

AI companies cannot claim to possess cutting-edge technology capable of transforming humanity and then hide behind purported design “limitations” when confronted with the harms their products cause. OpenAI has the tools to prevent tragedies like Adam’s death. The question isn’t whether the company is capable of building these safety mechanisms, but why OpenAI won’t prioritize them.

ChatGPT isn’t just another consumer product — it’s being rapidly embedded into our educational infrastructure, healthcare systems, and workplace tools. The same AI model that coached a teenager through suicide attempts could tomorrow be integrated into classroom learning platforms, mental health screening tools, or employee wellness programs without undergoing testing to ensure it’s safe for purpose.

This is an unacceptable situation that has massive implications for society. Lawmakers, regulators, and the courts must demand accountability from an industry that continues to prioritize rapid product development and market share over user safety. Human lives are on the line.

This piece represents the views of the Center for Humane Technology; it does not reflect the views of the legal team or the Raine family.


