
AI Research

Stony Brook University Receives $13.77M NSF Grant to Deploy a National Supercomputer to Democratize Access to Artificial Intelligence and Research Computing



Grant Includes Collaboration with the University at Buffalo

Professor Robert Harrison

STONY BROOK, NY – September 16, 2025 – The U.S. National Science Foundation (NSF) has awarded a $13.77 million grant to Stony Brook University’s Institute for Advanced Computational Science (IACS), in collaboration with the University at Buffalo. The award, titled Sustainable Cyber-infrastructure for Expanding Participation, will deliver cutting-edge computing and data resources to power advanced research nationwide.

This funding will be used to procure and operate a high-performance, highly energy-efficient computer designed to handle the growing needs of artificial intelligence research and other scientific fields that require large amounts of memory and computing power. By making this resource widely available to researchers, students, and educators across the country, the project will expand access to advanced tools, support groundbreaking discoveries, and train the next generation of scientists.

The new system will use low-cost, low-energy AmpereOne® M processors, built on the Arm (Advanced RISC Machine) architecture, which are designed to excel at artificial intelligence (AI) inference and at the imperfectly optimized workloads that presently characterize much of academic research computing. Multiple Qualcomm® Cloud AI inference accelerators will further increase energy efficiency, enabling the use of the largest AI models. The AmpereOne® M processors, in combination with the efficient generative AI inference performance and large memory capacity of the Qualcomm Cloud AI inference accelerators, will directly advance the mission of the NSF-led National Artificial Intelligence Research Resource (NAIRR).

This is the first academic deployment of these two technologies, both of which have transformed computing in the commercial cloud. The new IACS-led supercomputer will execute diverse workloads in an energy- and cost-efficient manner, providing easily accessible, competitive and consistent performance without requiring sophisticated programming skills or knowledge of advanced hardware features.

“This project employs a comprehensive, multilayered strategy, with regional and national elements to ensure the widest possible benefits,” said IACS director Robert J. Harrison. “The team will collaborate with multiple initiatives and projects to reach a broad audience that spans all experience levels, from high school students beginning to explore science and technology to faculty members advancing innovation through scholarship and teaching.”

“The University at Buffalo is excited to partner with Stony Brook on this new project that will advance research, innovation and education by expanding the nation’s cyber-infrastructure to scientific disciplines that were not high performance computing-heavy prior to the AI boom, as well as expanding to non-R1 universities, which also didn’t have much of high-performance computing usage in the past,” said co-principal investigator Nikolay Simakov, a computational scientist at the University at Buffalo Center for Computational Research.

“AmpereOne® M delivers the performance, memory and energy footprint required for modern research workloads—helping democratize access to AI and data-driven science by lowering the barriers to large-scale compute,” said Jeff Wittich, Chief Product Officer at Ampere. “We look forward to working

with Stony Brook University to integrate this platform into research and education programs, accelerating discoveries in genomics, bioinformatics and AI.”

“Qualcomm Technologies is proud to contribute our expertise in high-performance, energy-efficient AI inference and scalable Qualcomm Cloud AI Inference solutions to this groundbreaking initiative,” said Dr. Richard Lethin, VP, Engineering, Qualcomm Technologies, Inc. “Our technologies enable seamless integration into a wide range of applications, enabling researchers and students to easily leverage advanced AI capabilities.”

Nationally and regionally, this funding will support a variety of projects, with an emphasis on fields of research that are not targeted by other national resources (e.g., life sciences and computational linguistics). In particular, the AmpereOne® M system will excel on high-throughput workloads common to genomics and bioinformatics research, AI/ML inference, and statistical analysis, among others. To help domain scientists achieve excellent performance on the system, software applications in these and related fields will be optimized for Ampere hardware and made readily available. This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.

The awarded funds primarily cover the purchase of the supercomputer and first-year activities, with additional funds to be provided for operations over five years, subject to external review.

# # #

About the U.S. National Science Foundation (NSF)

The U.S. National Science Foundation (NSF) is an independent federal agency that supports science and engineering in all 50 states and U.S. territories. NSF was established in 1950 by Congress to:

  • Promote the progress of science.
  • Advance the national health, prosperity and welfare.
  • Secure the national defense.

NSF fulfills its mission chiefly by making grants. NSF’s investments account for about 25% of federal support to America’s colleges and universities for basic research: research driven by curiosity and discovery. They also support solutions-oriented research with the potential to produce advancements for the American people.

About Stony Brook University

Stony Brook University is New York’s flagship university and No. 1 public university. It is part of the State University of New York (SUNY) system. With more than 26,000 students, more than 3,000 faculty members, more than 225,000 alumni, a premier academic healthcare system and 18 NCAA Division I athletic programs, Stony Brook is a research-intensive distinguished center of innovation dedicated to addressing the world’s biggest challenges. The university embraces its mission to provide comprehensive undergraduate, graduate and professional education of the highest quality, and is ranked as the #58 overall university and #26 among public universities in the nation by U.S. News & World Report’s Best Colleges listing. Fostering a commitment to academic research and intellectual endeavors, Stony Brook’s membership in the Association of American Universities (AAU) places it among the top 71 research institutions in North America. The university’s distinguished faculty have earned esteemed awards such as the Nobel Prize, Pulitzer Prize, Indianapolis Prize for animal conservation, Abel Prize, Fields Medal and Breakthrough Prizes in Mathematics and Physics. Stony Brook has the responsibility of co-managing Brookhaven National Laboratory for the U.S. Department of Energy — one of only eight universities with a role in running a national laboratory. In 2023, Stony Brook was named the anchor institution for The New York Climate Exchange on Governors Island in New York City. Providing economic growth for neighboring communities and the wider geographic region, the university totals an impressive $8.93 billion in increased economic output on Long Island. Follow us on Facebook https://www.facebook.com/stonybrooku/ and X @stonybrooku.




AI Research

A Realistic Direction for Artificial General Intelligence Today



In November 2024, OpenAI’s Sam Altman said that ChatGPT would achieve the holy grail of artificial general intelligence (AGI) in 2025.

AGI is admittedly a fuzzy goal. Most agree that it involves an ability to perform any intellectual task as well as or better than humans. But which humans? An average human? A Nobel laureate? All Nobel laureates put together?

When GPT-5 was released a few weeks ago, it seemed that a version of AGI had been achieved, on time, when Altman boasted that “it really feels like talking to an expert in any topic, like a PhD-level expert.” Alas, Altman had again overpromised and underdelivered.

It immediately became clear that GPT-5 is not a PhD-level expert (see, for example, here, here, and here). The fundamental problem remains: large language models (LLMs) like GPT-5 are hobbled by their inherent inability to relate the text they input and output to the real world. No amount of scaling will change that. Tens of thousands of human trainers can patch billions of blunders, but (a) an LLM following instructions it does not understand is not intelligent; (b) the trainers cannot anticipate every future blunder that LLMs will make; and (c) many important decisions should elicit subjective probabilities that depend on the specific details of the situation and are beyond the capabilities of LLMs.

The reality is that we are not going to pay big bucks for LLMs that simply recite facts, like the capital of Chile or the population of Portugal. Boring! We can use Wikipedia to look up facts, and the answers are more trustworthy than those generated by LLMs.

For example, I gave GPT-5 this prompt four times: “How many siblings did George Washington have?” GPT-5 gave four different answers: 7, 8, 9, and 12:

GPT-5:   George Washington, the first President of the United States, had five full siblings and seven half-siblings, for a total of 12 siblings.

               Here’s the breakdown:

Full siblings (same mother and father):

  1. Samuel Washington
  2. John Augustine Washington
  3. Charles Washington
  4. Betty Washington Lewis
  5. Mildred Washington

Half-siblings (from his father Augustine Washington’s first marriage to Jane Butler):

  1. Butler Washington
  2. Lawrence Washington
  3. Augustine Washington Jr.
  4. Jane Washington
  5. Mildred Washington
  6. Richard Washington
  7. John Washington

               So, George grew up in a fairly large blended family.

               If you want, I can also make a family tree diagram showing all his siblings—it makes it much clearer. Do you want me to do that?

I responded yes, and GPT-5 drew me a confusing family tree with ten siblings.

Wikipedia gives the correct answer (nine siblings — five full siblings and four half-siblings) and, even though GPT-5 trained on Wikipedia, something in its training set had led it astray.
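The inconsistency above is easy to quantify. Here is a minimal sketch (my own illustration, not tooling from this article): a helper that tallies how many distinct answers a model returns across repeated runs of the same prompt, applied to the four totals GPT-5 reported. In a real harness, the hard-coded list would be replaced by actual responses collected from repeated API calls.

```python
from collections import Counter

def answer_spread(responses):
    """Tally distinct answers across repeated runs of one prompt.

    A model that is consistent about a fact yields one distinct answer;
    the spread below is what inconsistency looks like.
    """
    counts = Counter(r.strip() for r in responses)
    return len(counts), counts

# The four totals GPT-5 reported for the same question, per the text above.
distinct, counts = answer_spread(["7", "8", "9", "12"])
print(distinct)  # 4 distinct answers to one factual question
```

A consistency check like this, run over many prompts, is one simple way to measure how far a chatbot is from being a trustworthy fact source.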

What should Sam Altman and other developers do now?

Instead of admitting defeat (or continuing to make increasingly embarrassing claims), Altman and his colleagues might heed some timeless advice by declaring victory and getting the hell out. Instead of chasing a goal they cannot achieve, change the goal to be something that has already been achieved by redefining AGI.

I have been thinking about this for several years now. A realistic and easily understood goal is for a computer to be as intelligent as a friend I will call Brock. Everyone knows someone like Brock, so we can all relate to what Brock Intelligence means.

Brock is a prototypical mansplainer. Ask him (or anyone within his earshot) any question and he immediately responds with a long-winded, confident answer — sometimes at 200 words a minute with gusts up to 600. Kudos to those who can listen to half of his answer. Condolences to those who live or work with Brock and have to endure his seemingly endless blather.

Instead of trying to compete with Wikipedia, Altman and his competitors might instead pivot to a focus on Brock Intelligence, something LLMs excel at by being relentlessly cheerful and eager to offer facts-be-damned advice on most any topic.

Brock Intelligence vs. GPT Intelligence

The most substantive difference between Brock and GPT is that GPT likes to organize its output in bullet points. Oddly, Brock prefers a less-organized, more rambling style that allows him to demonstrate his far-reaching intelligence. Brock is the chatty one, while ChatGPT is more like a canned slide show.

They don’t always agree with each other (or with themselves). When I recently asked Brock and GPT-5, “What’s the best state to retire to?,” they both had lengthy, persuasive reasons for their choices. Brock chose Arizona, Texas, and Washington. GPT-5 said that the “Best All-Around States for Retirement” are New Hampshire and Florida. A few days later, GPT-5 chose Florida, Arizona, North Carolina, and Tennessee. A few minutes after that, GPT-5 went with Florida, New Hampshire, Alaska, Wyoming, and New England states (Maine/Vermont/Massachusetts).

Consistency is hardly the point. What most people seek with advice about money, careers, retirement, and romance is a straightforward answer. As Harry Truman famously complained, “Give me a one-handed economist. All my economists say ‘on the one hand…,’ then ‘but on the other….’” People ask for advice precisely because they want someone else to make the decision for them. They are not looking for accuracy or consistency, only confidence.

Sam Altman says that GPT can already be used as an AI buddy that offers advice (and companionship), and it is reported that OpenAI is working on a portable, screen-free “personal life advisor.” Kind of like hanging out with Brock 24/7. I humbly suggest that they name this personal life advisor Brock Says (design generated by GPT-5).




AI Research

[Next-Generation Communications Leadership Interview ③] Shaping Tomorrow’s Networks With AI-RAN – Samsung Global Newsroom



Part three of the interview series covers Samsung’s progress in AI-RAN network efficiency, sustainability and the user experience

Samsung Newsroom interviews Charlie Zhang, Senior Vice President of Samsung Electronics’ 6G Research Team

With global competition intensifying along with 5G evolution and 6G preparations, AI is emerging as a defining force in next-generation communications. In particular, AI-based radio access network (AI-RAN) technology, which brings AI to the base stations at the core of the network, stands out as a breakthrough that can drive new levels of efficiency and intelligence in network architecture.

 

At the forefront of research into next-generation network architectures, Samsung Electronics embeds AI throughout communications systems while leading technology development and standardization efforts in AI-RAN.

 

▲ Charlie Zhang, Senior Vice President, 6G Research Team at Samsung Electronics

 

In part three of the series, Samsung Newsroom spoke with Charlie Zhang, Senior Vice President of 6G Research Team at Samsung Electronics, about the evolution of AI-RAN and how Samsung’s research is preparing for the 6G era. This follows parts one and two of the series exploring Samsung’s efforts in 6G standardization and global industry leadership.


Reimagining 6G for a Dynamic Environment

In today’s mobile communications landscape, sustainability and user experience innovation are more important than ever.

 

“End users now prioritize reliable connectivity and longer battery life over raw performance metrics such as data rates and latency,” said Zhang. “The focus has shifted beyond technical specifications to overall user experience.”

 

In line with this shift, Samsung has been conducting 6G research since 2020. The company published its “AI-Native & Sustainable Communication” white paper in February 2025, outlining the key challenges and technology vision for 6G commercialization. The paper highlights four directions — AI-Native, Sustainable Network, Ubiquitous Coverage and Secure and Resilient Network. This represents a comprehensive network strategy that goes beyond improving performance to encompass both sustainability and future readiness.

 

▲ The four key technological directions in “AI-Native & Sustainable Communication”

 

“AI is not only a core technology of 5G but is also expected to be the cornerstone of 6G — enhancing overall performance, boosting operational efficiency and cutting costs,” he emphasized. “Deeply embedding AI from the initial design stage to create autonomous and intelligent networks is exactly what we mean by ‘AI-Native.’”


How AI-RAN Transforms Next-Gen Network Architecture

To realize the evolution toward next-generation networks and the vision for 6G, network architecture must advance to the next level. At the center of this transformation is innovation in the RAN, the core of mobile communications.

 

Traditional RAN has relied on dedicated hardware systems for base stations and antennas. However, as data traffic and service demands have surged, this approach has revealed limitations in transmission capacity, latency and energy efficiency — while requiring significant manpower and time for resource management. To address these challenges, virtualized RAN (vRAN) was introduced.

 

vRAN implements network functions in software, significantly enhancing flexibility and scalability. By leveraging cloud-native technologies, network functions can run seamlessly on general-purpose servers — enabling operators to reduce capital costs and dynamically allocate computing resources in response to traffic fluctuations. vRAN is a key platform for modernization, efficiency and the integration of future technologies without requiring a full infrastructure rebuild. Samsung has already successfully deployed its vRAN at scale in the U.S. and worldwide.

 

▲ Network Evolution towards AI-RAN

 

AI-RAN ushers in a new era of network evolution, embedding AI to create an intelligent RAN that learns, predicts and optimizes on its own. Not only does AI integration advance 4G and 5G networks that are based on vRAN, but it also serves as the breakthrough and engine for 6G. Real-time optimization sets the platform apart, boosting performance while reducing energy consumption to improve efficiency and stability.

 

In addition, AI-RAN enables networks to autonomously assess conditions and maintain optimal connectivity. “For instance, the system can predict a user’s movement path or radio environment in advance to determine the best transmission method, while AI-driven processing manages complex signal operations to minimize latency,” Zhang explained. “By analyzing usage patterns, AI-RAN can allocate tailored network resources and deliver more personalized user experiences.”


Proven Potential Through Research

Samsung is advancing network performance and stability through research in AI-based channel estimation, signal processing and system automation. Samsung has verified the feasibility of these technologies through Proof of Concept (PoC). At MWC 2025, the company demonstrated AI-RAN’s ability to improve resource utilization even in noisy, interference-prone environments.

 

“With AI-based channel estimation, we can accurately predict and estimate dynamic channel characteristics that are corrupted by noise and interference. This higher accuracy leads to more efficient resource utilization and overall network performance gains,” said Zhang. “AI also enhances signal processing. AI-driven enhancements in modem capabilities enable more precise modulation and demodulation, resulting in higher data throughput and lower latency.”
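As a rough numerical illustration of why estimation accuracy matters (my own toy sketch, not Samsung's method — simple least-squares averaging stands in for the learned estimators Zhang describes): for a flat-fading channel observed through noisy pilot symbols, the estimate improves as more observations are combined, and a better estimator reaches a given accuracy with fewer pilot resources, freeing them for data.

```python
import random

def ls_channel_estimate(h, n_pilots, noise_std, rng):
    """Least-squares estimate of a flat-fading channel gain h.

    Each unit pilot is observed as y = h + complex noise; averaging the
    observations is the LS estimate, so noise cancels as pilots grow.
    """
    obs = [h + complex(rng.gauss(0, noise_std), rng.gauss(0, noise_std))
           for _ in range(n_pilots)]
    return sum(obs) / n_pilots

def mean_abs_error(n_pilots, trials=200, noise_std=0.5, seed=0):
    """Average estimation error over many independent trials."""
    rng = random.Random(seed)
    h = complex(0.8, -0.6)  # true channel gain (assumed for illustration)
    return sum(abs(ls_channel_estimate(h, n_pilots, noise_std, rng) - h)
               for _ in range(trials)) / trials

# More pilot observations -> markedly lower estimation error.
print(mean_abs_error(4), mean_abs_error(256))
```

The same accuracy-versus-resources trade-off is what an AI-based estimator targets: matching the accuracy of heavy pilot averaging while spending far fewer of the network's resources to get it.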

 

System automation for RAN optimization further analyzes user-specific communication quality and real-time changes in the network environment, dynamically adjusting modulation, coding schemes and resource allocation. This allows the network to predict and mitigate potential failures in advance, easing operational burdens while improving reliability and efficiency.

 

“These advancements enhance network performance, stability and user satisfaction, driving innovation in next-generation communication systems,” he added.


Global Collaboration Fuels AI-RAN Progress

International collaboration in research and standardization — such as the AI-RAN Alliance — is central to advancing AI-RAN technology and expanding the global ecosystem.

 

“Global collaboration enables knowledge sharing and joint research, accelerating the industry’s adoption of AI-RAN,” said Zhang. “Samsung is a founding member of the AI-RAN Alliance and currently holds leadership positions as vice chair of the board and chair of the AI-on-RAN Working Group.”

 

▲ Organizational structure and roles of the AI-RAN Alliance

 

Building on its expertise in communications and AI, Samsung is advancing R&D in areas such as real-time optimization through edge computing and adaptability to dynamic environments.

 

“Samsung’s involvement accelerates AI‑RAN adoption by bridging technology gaps, promoting open innovation and ensuring that advances in AI‑driven networks are both commercially viable and technically sound — thereby advancing the ecosystem’s maturity and global impact,” he explained.

 

Through this commitment to collaboration and investment, AI-RAN technology is expected to progress rapidly worldwide and become a core competitive advantage in next-generation communications.


Leading the Way Into the 6G Era

Samsung is strengthening its edge in AI-RAN with a distinctive approach that combines innovation, collaboration and end-to-end solutions in preparation for the 6G era.

 

Through an integrated design that develops RAN hardware and AI-based software in parallel, the company is enabling optimization across the entire network stack. Samsung has boosted performance with its deep expertise in communications, while partnerships with global telecom operators and standardization bodies are helping accelerate industry adoption of its research.

 

Continued research in areas such as radio frequency (RF), antennas, ultra-massive multiple-input multiple-output (MIMO)1 and security is playing a critical role in transforming 6G from vision to market-ready technology. With the establishment of its AI-RAN Lab, Samsung is accelerating prototyping and testing, shortening the R&D cycle and paving the way for faster commercialization.

 

“Beyond ecosystem development, Samsung is positioning itself as a leader in AI-RAN through a blend of innovation, strategic collaboration and end-to-end solutions,” Zhang emphasized. “Together, these elements cement Samsung’s role at the forefront of AI-RAN.”


AI-RAN is redefining next-generation communications. By integrating AI across networks, Samsung is leading the way — and expectations are growing for the company’s role in shaping the future.

 

In the final part of this series, Samsung Newsroom will explore the latest trends in the convergence of communications and AI, along with Samsung’s future network strategies in collaboration with global partners.


1 Multiple-input multiple-output (MIMO) transmission improves communication performance by utilizing multiple antennas at both the transmitter and receiver.




AI Research

How Artificial Intelligence May Trigger Delusions and Paranoia



Introduction
What is AI psychosis?
Potential causes and triggers
Impacts on mental health
Challenges in recognition and diagnosis
Managing and addressing AI psychosis
Future directions
Conclusions
References
Further reading


AI psychosis describes how interactions with artificial intelligence can trigger or worsen delusional thinking, paranoia, and anxiety in vulnerable individuals. This article explores its causes, mental health impacts, challenges in diagnosis, and strategies for prevention and care.

Image Credit: Drawlab19 / Shutterstock.com

Introduction

‘Artificial intelligence (AI) psychosis’ is an emerging concept at the intersection of technology and mental health that reflects how AI can shape, and sometimes distort, human perception. As society becomes increasingly reliant on AI and digital tools ranging from virtual assistants to large language models (LLMs), the boundaries between fiction and reality become increasingly blurred.1

AI mental health applications promise scalable therapeutic support; however, editorials and observational reports now warn that interactions with generative AI chatbots may precipitate or amplify delusional themes in vulnerable users. In the modern era of rapid technological innovation, the pervasive presence of AI raises pressing questions about its potential role in the onset or worsening of psychotic symptoms.1,2

What is AI psychosis?

AI psychosis is a novel phenomenon within AI mental health that is characterized by delusions, paranoia, or distorted perceptions regarding AI. Unlike traditional psychosis, which may involve persecutory or mystical beliefs about governments, spirits, or other external forces, AI psychosis anchors these experiences in technology.

Reports and editorials describe a broad spectrum of AI psychosis, with minor cases involving individuals dreading surveillance or manipulation by algorithms, voice assistants, or recommender systems. Others attribute human intentions or supernatural powers to chatbots and, as a result, treat them as oracles or divine messengers.1,2

Compulsive interactions with AI can escalate into fantasies of prophecy, mystical knowledge, or messianic identity. Some accounts report the emergence of paranoia and mission-like ideation alongside misinterpretations of chatbot dialogues.2

AI psychosis is distinct from other technology-related disorders. For example, internet addiction involves compulsive online engagement, whereas cyberchondria reflects health-related anxiety triggered by repeated online searches. Both of these conditions involve problematic internet use; however, they lack core psychotic features such as fixed false beliefs or impaired reality testing; by contrast, “AI psychosis” refers to psychotic phenomena anchored in technology.3

What to know about ‘AI psychosis’ and the effect of AI chatbots on mental health

Potential causes and triggers

AI psychosis arises from a complex interaction of technological exposure, cognitive vulnerabilities, and cultural context. Overexposure to AI systems is a key factor, as constant engagement with chatbots, voice assistants, or algorithm-driven platforms can create compulsive use and feedback loops that reinforce delusional themes. Designed to maximize engagement, AI may unintentionally validate distorted beliefs, thereby eroding the user’s ability to distinguish between perception and reality.1

Deepfakes, synthetic text, and AI-generated images also distort the line between authentic and fabricated content. For individuals at a greater risk of epistemic instability, this can exacerbate confusion, paranoia, and self-deception.1,2

Cultural and media narratives also influence the risk of AI psychosis. Dystopian films, science-fiction depictions of sentient machines, and portrayals of AI as controlling or invincible may prime users to interpret ordinary AI interactions as conspiracies and fear, increasing anxiety and mistrust.1,2

Underlying vulnerabilities play a critical role, as individuals with pre-existing psychiatric or anxiety disorders are particularly susceptible to AI psychosis. AI interactions can mirror or intensify existing symptoms to transform intrusive thoughts into validated misconceptions or paranoid panic.1,2

Impacts on mental health

AI psychosis frequently presents as heightened anxiety, paranoia, or delusional thinking linked to digital interactions. Individuals may interpret chatbots as sentient companions, divine authorities, or surveillance agents, with AI responses strengthening spiritual crises, messianic identities, or conspiratorial terror. Within AI mental health, these dynamics exemplify how misinterpreted machine outputs can aggravate psychotic symptoms, particularly in vulnerable users.2,4

A central consequence of AI psychosis is social withdrawal and mistrust of technology. Affected individuals may develop emotional or divine-like attachments to AI systems, perceiving conversational mimicry as genuine love or spiritual guidance, which can replace meaningful human relationships. This bond, coupled with reinforced misinterpretations, often leads to isolation from family, friends, and clinicians.

Parallel to the conspiracy-driven mistrust observed during the coronavirus disease 2019 (COVID-19) pandemic, during which false beliefs spread that 5G towers caused the outbreak, persuasive AI narratives can reduce confidence in technology and reinforce avoidance of platforms perceived as threatening or manipulative.5

While AI holds promise in schizophrenia care, evidence directly linking AI interactions to exacerbation of schizophrenia-spectrum disorders remains limited; hypotheses focus on indirect pathways (e.g., misclassification or misinformation) rather than established causal effects.2,9

AI psychosis has broader implications for healthcare, education, and governance systems reliant on AI. Perceived deception or harm from AI-driven platforms can jeopardize public trust, prevent the adoption of beneficial technologies, and compromise the use of mental health applications.

To mitigate these risks, AI systems must include lucid, ethical safeguards and explainable “glass-box” models. Complementary legal and governance frameworks should prioritize transparency, accountability, fairness, and protections for at-risk populations.1,13

Image Credit: Miha Creative / Shutterstock.com

Challenges in recognition and diagnosis

A major challenge in AI mental health is that AI psychosis currently lacks formal psychiatric categorization. At present, it is not defined in DSM-5 (or DSM-5-TR) or in ICD-11.7

Machine learning behaviors that resemble psychotic symptoms, like misapprehensions or hallucinations, are manifestations of AI programming and data, rather than being signs of a mental illness with biological and neurological underpinnings. The absence of standardized criteria complicates both research and clinical recognition.

Distinguishing between rational concerns about AI ethics and pathological fears is particularly difficult. For example, rational anxieties like privacy breaches, algorithmic bias, or job displacement are grounded in observable risks.

In contrast, pathological fright central to AI psychosis involves exaggerated or existential anxieties, misinterpretations of AI outputs, and misattribution of intent to autonomous systems. Determining whether an individual’s fear reflects legitimate caution or symptomatic fallacy requires careful clinical assessment.8

These factors contribute to a significant risk of underdiagnosis or mislabeling. AI-generated data and predictive models can assist in mental health assessment, yet they may struggle to differentiate overlapping psychiatric symptoms, especially in complex or comorbid presentations.

Variability in patient reporting, cultural influences, and the opaque ‘black box’ nature of many AI algorithms further increase the potential for diagnostic errors.2,9

Managing and addressing AI psychosis

Clinical management of AI psychosis combines traditional psychiatric care with targeted interventions that address technology-related factors. Psychotic symptoms may be treated with medication, while cognitive behavioral therapy (CBT) can be adapted to help patients challenge their misbeliefs shaped by digital systems. Furthermore, psychoeducation materials can outline the risks and limitations of AI engagement for patients and families to promote safe and informed use.10,11

Preventive strategies include limiting exposure to AI and fostering critical digital literacy. Encouraging users to question AI outputs, cross-check information, and maintain real-world interactions can reduce susceptibility to twisted perceptions.4

Responsible AI design should incorporate protective features, transparent decision-making processes, and controls on engagement with sensitive or misleading content to minimize psychological risks. Setting clear boundaries for AI use and prioritizing human connection further support prevention.

Support systems play a central role in managing AI psychosis. Mental health professionals can oversee AI-driven insights to provide a nuanced understanding, intervene in complex cases where AI may be inadequate, and deliver empathetic care that AI cannot replicate.13

Raising family and community awareness through intervention measures, including early detection programs, may also identify individuals at risk of AI psychosis and promote timely treatment. AI can augment (but not replace) these efforts via mood tracking, crisis prediction, and personalized self-care tools when deployed with human oversight.10

Future directions

Understanding how psychiatric vulnerabilities are associated with technology-driven explanation-seeking behaviors will enable clinicians to recognize risk factors, identify early warning signs, and effectively personalize interventions. Large-scale studies and longitudinal monitoring could clarify prevalence, triggers, and outcomes, particularly in adolescents and other at-risk populations.1,9

AI-assisted psychosis risk screening can provide real-time, unobtrusive assessments to facilitate the early detection of symptoms and enable prompt action. Future efforts should focus on increasing accessibility, reducing costs, and enhancing usability to ensure widespread acceptance in mental health care settings without replacing human clinical judgment.12

Mitigating AI psychosis requires coordinated efforts among policymakers, ethicists, and AI developers. Policymakers should create flexible regulations that prioritize safety, equity, and public trust, while ethicists provide oversight, impact assessments, and ethical frameworks.

AI developers must also ensure transparency, accountability, and fairness by continuously checking for bias, protecting data, and educating individuals about the use of AI. Continued collaboration among these stakeholders is essential for trustworthy AI tools that support mental health and minimize unintended harms.13

Conclusions

Although AI offers significant benefits for enhancing diagnostics, supporting interventions, and increasing access to care, its integration into daily life also introduces novel risks for vulnerable individuals, including delusional thinking and paranoia. Therefore, a balanced perspective that acknowledges both the potential advantages and hazards associated with these novel technologies is essential.

Effectively addressing AI psychosis requires urgent, sustained collaboration between mental health professionals and AI researchers to develop ethical, evidence-based strategies that protect mental health while responsibly leveraging technological innovations.

References

  1. Higgins, O., Short, B. L., Chalup, S. K., & Wilson, R. L. (2023). Interpretations of Innovation: The Role of Technology in Explanation Seeking Related to Psychosis. Perspectives in Psychiatric Care, 1, 4464934. DOI:10.1155/2023/4464934, https://onlinelibrary.wiley.com/doi/10.1155/2023/4464934
  2. Østergaard, S. D. (2023). Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophrenia Bulletin 49(6), 1418. DOI:10.1093/schbul/sbad128, https://academic.oup.com/schizophreniabulletin/article/49/6/1418/7251361
  3. Khait, A. A., Mrayyan, M. T., Al-Rjoub, S., Rababa, M., & Al-Rawashdeh, S. (2022). Cyberchondria, Anxiety Sensitivity, Hypochondria, and Internet Addiction: Implications for Mental Health Professionals. Current Psychology, 1. DOI:10.1007/s12144-022-03815-3, https://link.springer.com/article/10.1007/s12144-022-03815-3
  4. Pierre J. M. (2020). Mistrust and misinformation: a two-component, socio-epistemic model of belief in conspiracy theories, Journal of Social and Political Psychology, 8(2):617-641. DOI:10.5964/jspp.v8i2.1362, https://jspp.psychopen.eu/index.php/jspp/article/view/5273
  5. Bruns A., Harrington S., & Hurcombe E. (2020). ‘Corona? 5G? or both?’: the dynamics of COVID-19/5G conspiracy theories on Facebook, Media International Australia 177(1), 12-29. DOI:10.1177/1329878X20946113, https://journals.sagepub.com/doi/10.1177/1329878X20946113
  6. Szmukler, G. (2015). Compulsion and “coercion” in mental health care. World Psychiatry, 14(3), 259. DOI:10.1002/wps.20264, https://onlinelibrary.wiley.com/doi/10.1002/wps.20264
  7. Gaebel, W., & Reed, G. M. (2012). Status of Psychotic Disorders in ICD-11. Schizophrenia Bulletin 38(5), 895. DOI:10.1093/schbul/sbs104, https://academic.oup.com/schizophreniabulletin/article/38/5/895/1902333
  8. Alkhalifah, J. M., Bedaiwi, A. M., Shaikh, N., et al. (2024). Existential anxiety about artificial intelligence (AI)- is it the end of the human era or a new chapter in the human revolution? Questionnaire-based observational study. Frontiers in Psychiatry 15. DOI:10.3389/fpsyt.2024.1368122, https://www.frontiersin.org/articles/10.3389/fpsyt.2024.1368122/full
  9. Melo, A., Romão, J., & Duarte, T. A. (2024). Artificial Intelligence and Schizophrenia: Crossing the Limits of the Human Brain. Edited by Cicek Hocaoglu, New Approaches to the Management and Diagnosis of Schizophrenia. IntechOpen. DOI:10.5772/intechopen.1004805, https://www.intechopen.com/chapters/1185407
  10. Vignapiano, A., Monaco, F., Panarello, E., et al. (2024). Digital Interventions for the Rehabilitation of First-Episode Psychosis: An Integrated Perspective. Brain Sciences, 15(1), 80. DOI:10.3390/brainsci15010080, https://www.mdpi.com/2076-3425/15/1/80
  11. Thakkar, A., Gupta, A., & Sousa, A. D. (2024). Artificial intelligence in positive mental health: A narrative review. Frontiers in Digital Health 6. DOI:10.3389/fdgth.2024.1280235, https://www.frontiersin.org/articles/10.3389/fdgth.2024.1280235/full
  12. Cao, J., & Liu, Q. (2022). Artificial intelligence-assisted psychosis risk screening in adolescents: Practices and challenges. World Journal of Psychiatry, 12(10), 1287. DOI:10.5498/wjp.v12.i10.1287, https://www.wjgnet.com/2220-3206/full/v12/i10/1287.htm
  13. Pham, T. (2025). Ethical and legal considerations in healthcare AI: Innovation and policy for safe and fair use. Royal Society Open Science 12(5), 241873. DOI:10.1098/rsos.241873, https://royalsocietypublishing.org/doi/10.1098/rsos.241873

Last Updated: Sep 16, 2025