AI Research
Mental Health and Breast Cancer and the Transforming Role of Artificial Intelligence

From virtual counselors that can hear depression creeping into a person’s voice to smartwatches that can detect stress, generative artificial intelligence (AI) is poised to revolutionize mental health care for patients with breast cancer by attending to their psychosocial needs.
Psychosocial oncology is a medical specialty that focuses on the emotional, psychological, social, and spiritual needs of individuals affected by cancer, as well as their loved ones.[1] In breast cancer, psychosocial issues are complex and may include a distorted body image, low self-esteem, and sexual problems. These issues are often linked to psychological symptoms such as anxiety, depression, anger, and worries about the future or death. The psychosocial problems experienced during breast cancer do not affect patients alone; they also impact other family members, making disease management complex and difficult.[2]
How AI can help
In a new paper published in AI in Precision Oncology, J. Kim Penberthy, PhD, a clinical psychologist at UVA Health and UVA Cancer Center, and colleagues detail the many ways artificial intelligence could help ensure patients receive the support they need. AI, they say, can identify patients at risk for mental-health struggles, get them treatment earlier, provide continuous psychological monitoring and even perform personalized interventions tailored to the individual.[3]
But that’s just the tip of the iceberg. AI, the researchers say, can overcome some of the biggest barriers patients face in getting mental-health support by expanding options beyond clinic walls and delivering care exactly where and when it’s needed – even to rural areas where patients lack local mental-health treatment options.
Soon, doctors may combine multiple AI technologies to provide patients a “holistic, interactive treatment experience” that ensures the mental-health support is every bit as good as the care for the cancer itself, the UVA researchers write.
“AI can help us notice when a patient is struggling and get them the right support faster,” Penberthy explained.
“This technology is moving quickly, and it’s exciting to see how soon it could make a real difference in people’s lives,” she added.
Breast Cancer and Mental Health
Breast cancer is the most common cancer in women, with 2.3 million new diagnoses each year. Up to half of those patients will go on to experience anxiety, depression or post-traumatic stress disorder (PTSD). While there have been great advances in how we treat breast cancer, mental-health support for these patients has lagged behind, the UVA researchers note.
“Mental health care is a lifeline for women with breast cancer,” Penberthy said.
“Up to half experience anxiety or depression, and without support, treatment and quality of life can suffer. AI can help spot distress early and connect women to the care they need,” she added.
A crucial role
The co-authors envision a future – a near future – where AI plays a vast and crucial role in supporting patients’ mental health. The technology, they say, should not replace clinicians and care providers, but instead can extend providers’ reach and presence. By monitoring patients in real time, for example, AI could alert doctors that a patient may be struggling or slipping into depression.
Similarly, AI-powered chatbots and telepsychiatry platforms offer “scalable, cost-effective solutions” to increase access to psychological care, the researchers write. These advanced AI chatbots go far beyond the simple conversations often associated with their ilk. Instead of just responding to straightforward questions, the electronic entities can provide on-demand emotional support, suggest coping mechanisms, detail relaxation techniques and offer continuous psychological support even when therapists are unavailable.
AI, the researchers write, has tremendous potential to improve “accessibility, personalization, efficiency and cost-effectiveness” of mental health care for patients with breast cancer. But they caution that the technology also brings challenges and ethical considerations.
For example, AI can be a powerful tool to analyze mental health data, but this requires strict safeguards to protect patient privacy. Similarly, studies have shown that AI can “underperform” for patients from minority or underrepresented backgrounds, potentially contributing to care disparities, the authors write.
Those are the kinds of things that doctors and researchers will have to keep in mind as they explore the potential of AI, Penberthy and her collaborators say. But they are excited about what the future holds, noting that AI has “immense” potential for improving mental health support for patients with breast cancer.
“We’re just beginning to scratch the surface of AI’s potential in health care and the positive impact AI will have in our lives,” noted co-author David Penberthy, MD, MBA.
“I’m incredibly optimistic about what the future will bring!” he concluded.
References
[1] Bires J, Franklin E, Nelson K, Bonesteel K, Flora D. Exploring the Intersection of Artificial Intelligence and Psychosocial Oncology: Enhancing Care in the Digital Age. AI in Precision Oncology. DOI: 10.1089/aipo.2025.0007. Published online June 13, 2025.
[2] Kaçmaz ED. Psychosocial Perspective in Breast Cancer: From Diagnosis to Survivorship. In: Bakar Y, Tuğral A (eds). Managing Side Effects of Breast Cancer Treatment. Springer, Cham; 2024. DOI: 10.1007/978-3-031-75480-7_22.
[3] Penberthy JK, Penberthy DR, Bires J. A Narrative Review of the Role of Artificial Intelligence in Supporting the Mental Health of Patients with Breast Cancer. AI in Precision Oncology. DOI: 10.1177/2993091X251361147. Published online July 21, 2025.
Featured Image courtesy © 2017 – 2025 Fotolia/Adobe. Used with permission.
AI Research
Hackers exploit hidden prompts in AI images, researchers warn

Cybersecurity firm Trail of Bits has revealed a technique that embeds malicious prompts into images processed by large language models (LLMs). The method exploits how AI platforms compress and downscale images for efficiency. While the original files appear harmless, the resizing process introduces visual artifacts that expose concealed instructions, which the model interprets as legitimate user input.
In tests, the researchers demonstrated that such manipulated images could direct AI systems to perform unauthorized actions. One example showed Google Calendar data being siphoned to an external email address without the user’s knowledge. Platforms affected in the trials included Google’s Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Gemini’s web interface.
The approach builds on earlier academic work from TU Braunschweig in Germany, which identified image scaling as a potential attack surface in machine learning. Trail of Bits expanded on this research, creating “Anamorpher,” an open-source tool that generates malicious images using interpolation techniques such as nearest neighbor, bilinear, and bicubic resampling.
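To make the mechanism concrete, here is a minimal toy sketch in Python with Pillow of the nearest-neighbor case. It is an illustration of the idea, not the Anamorpher tool itself; it assumes Pillow’s NEAREST resize keeps the source pixel at the center of each tile, and it uses a plain dark square where a real attack would render prompt text:

```python
# Toy illustration of an image-scaling attack (not Anamorpher).
# Assumption: Pillow's NEAREST resize keeps the source pixel at
# dest_index * SCALE + SCALE // 2, i.e. the center of each tile.
from PIL import Image

SCALE = 8  # downscaling factor the attacker expects the platform to apply

big = Image.new("L", (512, 512), color=235)  # innocuous light-gray carrier
px = big.load()

# Darken only the one pixel per 8x8 tile that survives downscaling,
# inside a central square: ~1.5% of that region at full resolution.
for y in range(128 + SCALE // 2, 384, SCALE):
    for x in range(128 + SCALE // 2, 384, SCALE):
        px[x, y] = 10

small = big.resize((512 // SCALE, 512 // SCALE), Image.NEAREST)
sp = small.load()

full = [px[x, y] for y in range(128, 384) for x in range(128, 384)]
down = [sp[x, y] for y in range(16, 48) for x in range(16, 48)]
print("hidden region mean, full size: ", sum(full) / len(full))   # ~231: looks uniform
print("hidden region mean, downscaled:", sum(down) / len(down))   # 10: solid dark block
```

At full resolution the carrier looks like a flat gray image; after the resize, the hidden block dominates the region. Crafting payloads that survive bilinear or bicubic kernels takes more careful pixel arithmetic, which is what a tool like Anamorpher automates.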
From the user’s perspective, nothing unusual occurs when such an image is uploaded. Yet behind the scenes, the AI system executes hidden commands alongside normal prompts, raising serious concerns about data security and identity theft. Because multimodal models often integrate with calendars, messaging, and workflow tools, the risks extend into sensitive personal and professional domains.
Traditional defenses such as firewalls cannot easily detect this type of manipulation. The researchers instead recommend layered security: previewing downscaled images, restricting input dimensions, and requiring explicit confirmation for sensitive operations.
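As a concrete illustration of those recommendations, here is a minimal pre-flight check, assuming a Python service built on Pillow; the function name, size cap, and preview size are illustrative choices, not Trail of Bits’ code:

```python
# Hedged sketch of layered input checks for a multimodal pipeline.
# Assumption: the service uses Pillow; MAX_DIM and the preview size
# are illustrative, not values recommended by Trail of Bits.
from PIL import Image

MAX_DIM = 2048  # reject suspiciously large inputs outright

def preflight(path: str, target: int = 128) -> Image.Image:
    img = Image.open(path)
    if max(img.size) > MAX_DIM:
        raise ValueError(f"rejecting oversized image: {img.size}")
    # Render the model's-eye view under the common resampling kernels.
    # A hidden prompt that survives downscaling becomes visible here,
    # so these previews can be shown to the user for confirmation.
    for name, method in [("nearest", Image.NEAREST),
                         ("bilinear", Image.BILINEAR),
                         ("bicubic", Image.BICUBIC)]:
        img.convert("RGB").resize((target, target), method).save(
            f"preview_{name}.png")
    return img
```

Showing the user the same downscaled view the model will receive, rather than the full-resolution original, is the step that most directly counters the attack.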
“The strongest defense is to implement secure design patterns and systematic safeguards that limit prompt injection, including multimodal attacks,” the Trail of Bits team concluded.
AI Research
When AI Freezes Over | Psychology Today

A phrase I’ve often clung to regarding artificial intelligence is one that is also cloaked in a bit of techno-mystery. And I bet you’ve heard it as part of the lexicon of technology and imagination: “emergent abilities.” It’s common to hear that large language models (LLMs) have these curious “emergent” behaviors that are often coupled with linguistic partners like scaling and complexity. And yes, I’m guilty too.
In AI research, the phrase first took off after a 2022 paper that described how abilities seem to appear suddenly as models scale: tasks that a small model fails at completely, a larger model suddenly handles with ease. One day a model can’t solve math problems; the next day it can. It’s an irresistible story: machines having their own little Archimedean “eureka!” moments. It’s almost as if “intelligence” has suddenly switched on.
But I’m not buying into the sensation, at least not yet. A newer 2025 study suggests we should be more careful. Instead of magical leaps, what we’re seeing looks a lot more like the physics of phase changes.
Ice, Water, and Math
Think about water. At one temperature it’s liquid, at another it’s ice. The molecules don’t become something new—they’re always two hydrogens and an oxygen—but the way they organize shifts dramatically. At the freezing point, hydrogen bonds “loosely set” into a lattice, driven by those fleeting electrical charges on the hydrogen atoms. The result is ice, the same ingredients reorganized into a solid that’s curiously less dense than liquid water. And, yes, there’s even a touch of magic in the science as ice floats. But that magic melts when you learn about van der Waals forces.
The same kind of shift shows up in LLMs and is often mislabeled as “emergence.” In small models, the easiest strategy is positional, where computation leans on word order and simple statistical shortcuts. It’s an easy trick that works just enough to reduce error. But scale things up by using more parameters and data, and the system reorganizes. The 2025 study by Cui shows that, at a critical threshold, the model shifts into semantic mode and relies on the geometry of meaning in its high-dimensional vector space. It isn’t magic, it’s optimization. Just as water molecules align into a lattice, the model settles into a more stable solution in its mathematical landscape.
The Mirage of “Emergence”
That 2022 paper called these shifts emergent abilities. And yes, tasks like arithmetic or multi-step reasoning can look as though they “switch on.” But the model hasn’t suddenly “understood” arithmetic. What’s happening is that semantic generalization finally outperforms positional shortcuts once scale crosses a threshold. Yes, it’s a mouthful. But what’s happening here is a computational process shifting from simple “word position” in a prompt (like “the cat in the _____”) to a complex, hyperdimensional vector space where semantic associations across thousands of dimensions give the computation its remarkable power.
And those sudden jumps? They’re often illusions. On simple pass/fail tests, a model can look stuck at zero until it finally tips over the line and then it seems to leap forward. In reality, it was improving step by step all along. The so-called “light-bulb moment” is really just a quirk of how we measure progress. No emergence, just math.
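The arithmetic behind that illusion is easy to reproduce. Here is a numeric sketch, with values invented for illustration rather than taken from either paper: per-token competence climbs smoothly, but an exact-match score that requires every token to be correct stays near zero and then appears to switch on:

```python
# Illustrative numbers only: smooth per-token gains look like a sudden
# "emergent" jump under an all-or-nothing exact-match metric.
K = 30  # number of tokens that must all be correct to score the task

for scale in range(1, 11):
    per_token = min(0.5 + 0.05 * scale, 0.999)  # steady, boring progress
    exact_match = per_token ** K                # thresholded pass/fail view
    print(f"scale {scale:2d}: per-token {per_token:.3f}  "
          f"exact-match {exact_match:.4f}")
```

Between the last two steps the exact-match column leaps from about 0.21 to 0.97, even though per-token accuracy never improves by more than five points at a time.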
Why “Emergence” Is So Seductive
Why does the language of “emergence” stick? Because it borrows from biology and philosophy. Life “emerges” from chemistry as consciousness “emerges” from neurons. It makes LLMs sound like they’re undergoing cognitive leaps. Some argue emergence is a hallmark of complex systems, and there’s truth to that. So, to a degree, it does capture the idea of surprising shifts.
But we need to be careful. What’s happening here is still math, not mind. Calling it emergence risks sliding into anthropomorphism, where sudden performance shifts are mistaken for genuine understanding. And it happens all the time.
A Useful Imitation
The 2022 paper gave us the language of “emergence.” The 2025 paper shows that what looks like emergence is really closer to a high-complexity phase change. It’s the same math and the same machinery. At small scales, positional tricks (word sequence) dominate. At large scales, semantic structures (multidimensional linguistic analysis) win out.
No insight, no spark of consciousness. It’s just a system reorganizing under new constraints. And this supports my larger thesis: What we’re witnessing isn’t intelligence at all, but anti-intelligence, a powerful, useful imitation that mimics the surface of cognition without the interior substance that only a human mind offers.
So the next time you hear about an LLM with “emergent ability,” don’t imagine Archimedes leaping from his bath. Picture water freezing. The same molecules, new structure. The same math, new mode. What looks like insight is just another phase of anti-intelligence that is complex, fascinating, even beautiful in its way, but not to be mistaken for a mind.