
AI Research

Should You Use Artificial Intelligence (AI) as Your Therapist?


As the demand for therapists increases and artificial intelligence (AI) becomes more sophisticated, people are turning to large language models (LLMs) as therapeutic tools. But should you? Experts warn to proceed with caution.

People who may benefit from therapy and mental health care often face barriers to accessing it. For many, chatting with an AI program might be easier to access and more affordable than a human therapist. You can talk with the chatbot as often as you like and from anywhere, but mental health chatbots have limitations you should know about.

If you or someone you know is experiencing a mental health crisis, always reach out to a human resource and do not turn to AI for help. Call or text the 988 crisis line for immediate mental health support.

[READ: How to Find the Right Mental Health Counselor for You]

Human vs. AI Therapy

LLMs can learn patterns in language and replicate them, and AI can even be trained on different therapy techniques, such as cognitive behavioral therapy (CBT).

Benefits of human interaction

However, while AI may be able to learn language patterns, it is incapable of delivering psychotherapy or talk therapy because the core of therapy is a human-to-human interaction, says Dr. Dave Rabin, a board-certified psychiatrist and translational neuroscientist.

“Most therapy, like 90 percent, is just meeting a fellow human being where they are in that moment, and just making them feel heard and seen and not judged,” says Rabin.

AI lacks the ability to spot nuances in tone, behavior, body language and eye contact that Rabin says are essential to therapy.

Reinforcement vs. gentle challenges

The American Psychological Association (APA) seems to agree that AI is not a substitute for human therapy. In February 2025, the APA met with federal regulators and urged legislators to put safeguards in place to protect people from AI chatbots that can affirm users in ways a trained therapist wouldn’t.

Although various AI models operate differently, some are trained to reinforce a user’s worldview or provide overly flattering statements, says Ryan K. McBain, senior policy researcher and adjunct professor of policy analysis at RAND School of Public Policy. This can become a problematic feedback loop when a person may benefit from the gentle challenges that a therapist might provide.

Setting boundaries

Another distinction between human therapists and AI chatbots is the ability to set boundaries. While a chat tool may be designed to use the language of a therapy style like CBT to engage with you, the chatbot isn’t going to ask you to stop talking and think about what was just said, says Dr. Haiyan Wang, medical director and psychiatrist at Neuro Wellness Spa in Torrance, California.

Instead, there is a financial incentive for many AI programs to keep you engaged.

Wang contrasts the 24/7 access to a therapy chatbot with a human therapist, where you have to set appointments. The appointment means a lot because it’s a commitment between the patient and the therapist, and it allows both parties to set boundaries, she says.

[READ: How to Prepare for Your First Therapy Session]

AI Therapy Effectiveness

Research on the effectiveness of AI therapy is very new. A 2025 study in the New England Journal of Medicine examined Therabot, a chatbot used for mental health treatment. It’s the first randomized controlled trial to show the effectiveness of an AI therapy bot for treating people with major depressive disorder, generalized anxiety disorder or those at high risk of developing an eating disorder. While users in the trial gave Therabot high ratings, researchers concluded that more studies with larger groups are still needed to determine effectiveness.

A 2025 Psychiatry Online study evaluated chatbots powered by LLMs to see how AI responded when someone’s suicide risk was at various levels, from low to high. Researchers found that the bots were in line with expert judgment when it came to responding to very low and very high levels of suicide risk, but there were inconsistencies for risk levels between the two extremes.

Even with promising research, Wang remains very cautious about using AI as therapy or encouraging clients to use it, because AI still cannot replace human therapy.

Rabin says that if you want someone to talk to because you’re feeling lonely, a chatbot might help. But if you’re having a serious mental health crisis or dealing with a mental health diagnosis, the AI bot or character isn’t going to be able to solve that.

[READ: 9 Daily Habits to Boost Your Mental Health: Simple Steps for Boosting Your Well-Being]

Risks of AI Therapy

Experts warn that there are real risks associated with using AI as therapy, especially with news of teens taking their lives after interacting with chatbots. In addition, a chatbot cannot provide a referral to a psychiatrist, prescribe medications or provide guidance for your specific mental health situation.

McBain, an author of the Psychiatry Online study, says his main concerns with AI therapy are:

— Unsafe guidance because some chatbots may provide instructions on self-harm, substance use or suicide

— Missed warning signs, such as ambiguous expressions of distress

— Privacy risks that come with sharing deeply personal information without understanding how data are stored and used

A study from the Association for Computing Machinery found that AI chatbots are not effective and can introduce biases and stigmas that could harm someone with mental health challenges. Researchers concluded that there are many concerns with the safety of AI therapy, and that LLMs are not a replacement for therapists.

“When you employ a machine to do something that a human is required to do, you really put people’s lives at risk and their health at risk, and it’s a huge problem,” says Rabin.

AI chatbots and children’s mental health

If your child is dealing with a mental health concern, you may be worried about them going to a chatbot for mental health guidance. If you know a child experiencing a mental health crisis, it’s important to get them professional human help immediately.

How Can AI Help with Mental Health?

It might not excel at providing therapy, but there is a role for AI in the mental health world. Some therapists use it to help with session note-taking and administrative tasks. Wang sees AI transcription during sessions as one of its biggest advantages because it allows the therapist to fully focus on interacting without having to shift focus for note-taking.

Rabin says AI is great at predicting and responding to signs of illness as they come up. An example of this in action is using generative AI to detect when a person has abnormal biometrics or heart rate variability based on data collected from a wearable device. The ability to quickly detect when somebody is highly stressed or about to have a panic attack, he says, gives mental health professionals the chance to intervene.
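To make that kind of biometric monitoring concrete, here is a minimal sketch, assuming hypothetical wearable readings and an arbitrary alert threshold; real products use far more sophisticated models.

    # Toy sketch: flag abnormal heart-rate-variability (HRV) readings against
    # a rolling personal baseline. Sample values and the threshold are
    # hypothetical, not from any real device or product.
    import statistics

    def flag_abnormal(readings, window=20, z_threshold=3.0):
        """Yield (minute, value) for readings far outside the rolling baseline."""
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mean = statistics.fmean(baseline)
            stdev = statistics.stdev(baseline) or 1e-9  # avoid divide-by-zero
            if abs(readings[i] - mean) / stdev > z_threshold:
                yield i, readings[i]

    # Simulated minute-by-minute HRV (ms): a stable baseline, then a sharp
    # drop of the sort that might accompany acute stress or a panic attack.
    hrv = [52, 50, 53, 51, 49, 52, 50, 51, 53, 50,
           52, 51, 49, 50, 52, 51, 53, 50, 51, 52,
           24]

    for minute, value in flag_abnormal(hrv):
        print(f"minute {minute}: HRV of {value} ms flagged for review")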

“AI chatbots are likely to work better with highly structured, skills-based techniques, like practicing behavioral techniques, journaling or guided breathing,” says McBain. That’s because the responses for these are easier to script and validate.
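McBain’s point about scripting is easy to see in code. Below is a minimal, illustrative sketch of a fully scripted guided-breathing exercise (not any particular product): every message is pre-written, so the entire flow can be reviewed and validated in advance.

    # Minimal sketch of a fully scripted guided-breathing exercise. Every
    # line the user sees is pre-written rather than generated, so the whole
    # flow can be validated in advance. Timings are illustrative.
    import time

    BOX_BREATHING = [
        ("Breathe in through your nose", 4),
        ("Hold your breath", 4),
        ("Breathe out slowly through your mouth", 4),
        ("Hold before the next breath", 4),
    ]

    def run_breathing_exercise(rounds=3):
        print("Let's do a short box-breathing exercise.")
        for round_number in range(1, rounds + 1):
            print(f"\nRound {round_number} of {rounds}")
            for instruction, seconds in BOX_BREATHING:
                print(f"  {instruction} ({seconds} seconds)")
                time.sleep(seconds)
        print("\nNicely done. Notice how your body feels right now.")

    if __name__ == "__main__":
        run_breathing_exercise()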

If you’re feeling lonely and just looking for interaction, an AI chatbot may be a source of engaging conversation. However, if you’re in need of mental health advice or you’re in the midst of a mental health crisis, you need to speak to an actual human. Reach out to your health care provider or dial 988 for help via text or phone call.

More from U.S. News

What Is Cognitive Behavioral Therapy?

When to See a Psychiatrist

How Foods and Drinks Affect Our Mental Health

Should You Use Artificial Intelligence (AI) as Your Therapist? originally appeared on usnews.com




AI Research

A Realistic Direction for Artificial General Intelligence Today



In November 2024, OpenAI’s Sam Altman said that ChatGPT would achieve the holy grail of artificial general intelligence (AGI) in 2025.

AGI is admittedly a fuzzy goal. Most agree that it involves an ability to perform any intellectual task as well as or better than humans. But which humans? An average human? A Nobel laureate? All Nobel laureates put together?

When GPT-5 was released a few weeks ago, it seemed that a version of AGI had been achieved, on time, when Altman boasted that “it really feels like talking to an expert in any topic, like a PhD-level expert.” Alas, Altman had again overpromised and underdelivered.

It immediately became clear that GPT-5 is not a PhD-level expert (see, for example, here, here, and here). The fundamental problem remains: large language models (LLMs) like GPT-5 are hobbled by their inherent inability to relate the text they input and output to the real world. No amount of scaling will change that. Tens of thousands of human trainers can patch billions of blunders, but (a) an LLM following instructions it does not understand is not intelligent; (b) the trainers cannot anticipate every future blunder that LLMs will make; and (c) many important decisions should elicit subjective probabilities that depend on the specific details of the situation and are beyond the capabilities of LLMs.

The reality is that we are not going to pay big bucks for LLMs that simply recite facts, like the capital of Chile or the population of Portugal. Boring! We can use Wikipedia to look up facts, and the answers are more trustworthy than those generated by LLMs.

For example, I gave GPT-5 this prompt four times, “How many siblings did George Washington have?” and GPT-5 gave four different answers: 7, 8, 9, and 12:

GPT-5: George Washington, the first President of the United States, had five full siblings and seven half-siblings, for a total of 12 siblings.

Here’s the breakdown:

Full siblings (same mother and father):

  1. Samuel Washington
  2. John Augustine Washington
  3. Charles Washington
  4. Betty Washington Lewis
  5. Mildred Washington

Half-siblings (from his father Augustine Washington’s first marriage to Jane Butler):

  1. Butler Washington
  2. Lawrence Washington
  3. Augustine Washington Jr.
  4. Jane Washington
  5. Mildred Washington
  6. Richard Washington
  7. John Washington

So, George grew up in a fairly large blended family.

If you want, I can also make a family tree diagram showing all his siblings—it makes it much clearer. Do you want me to do that?

I responded yes, and GPT-5 drew me a confusing family tree with ten siblings.

Wikipedia gives the correct answer (nine siblings — five full siblings and four half-siblings) and, even though GPT-5 trained on Wikipedia, something in its training set had led it astray.
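The variability is easy to reproduce. Here is a minimal sketch of the repeated-prompt test, assuming the OpenAI Python SDK; the model name is illustrative, and any chat-capable LLM API would show the same behavior.

    # Ask an LLM the same factual question several times and compare answers.
    # Assumes the OpenAI Python SDK (`pip install openai`) and an
    # OPENAI_API_KEY in the environment; the model name is illustrative.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()

    PROMPT = "How many siblings did George Washington have? Answer with a single number."

    answers = []
    for _ in range(4):
        response = client.chat.completions.create(
            model="gpt-5",  # illustrative; substitute any chat model
            messages=[{"role": "user", "content": PROMPT}],
        )
        answers.append(response.choices[0].message.content.strip())

    # Because decoding is stochastic, identical prompts can yield different
    # answers, e.g. something like Counter({'9': 2, '7': 1, '12': 1}).
    print(Counter(answers))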

What should Sam Altman and other developers do now?

Instead of admitting defeat (or continuing to make increasingly embarrassing claims), Altman and his colleagues might heed some timeless advice: declare victory and get the hell out. Instead of chasing a goal they cannot achieve, they could redefine AGI so that the goal becomes something already accomplished.

I have been thinking about this for several years now. A realistic and easily understood goal is for a computer to be as intelligent as a friend I will call Brock. Everyone knows someone like Brock, so we can all relate to what Brock Intelligence means.

Brock is a prototypical mansplainer. Ask him (or anyone within his earshot) any question and he immediately responds with a long-winded, confident answer — sometimes at 200 words a minute with gusts up to 600. Kudos to those who can listen to half of his answer. Condolences to those who live or work with Brock and have to endure his seemingly endless blather.

Instead of trying to compete with Wikipedia, Altman and his competitors might instead pivot to a focus on Brock Intelligence, something LLMs excel at by being relentlessly cheerful and eager to offer facts-be-damned advice on most any topic.

Brock Intelligence vs. GPT Intelligence

The most substantive difference between Brock and GPT is that GPT likes to organize its output in bullet points. Oddly, Brock prefers a less-organized, more rambling style that allows him to demonstrate his far-reaching intelligence. Brock is the chatty one, while ChatGPT is more like a canned slide show.

They don’t always agree with each other (or with themselves). When I recently asked Brock and GPT-5, “What’s the best state to retire to?,” they both had lengthy, persuasive reasons for their choices. Brock chose Arizona, Texas, and Washington. GPT-5 said that the “Best All-Around States for Retirement” are New Hampshire and Florida. A few days later, GPT-5 chose Florida, Arizona, North Carolina, and Tennessee. A few minutes after that, GPT-5 went with Florida, New Hampshire, Alaska, Wyoming, and New England states (Maine/Vermont/Massachusetts).

Consistency is hardly the point. What most people seek with advice about money, careers, retirement, and romance is a straightforward answer. As Harry Truman famously complained, “Give me a one-handed economist. All my economists say ‘on the one hand…,’ then ‘but on the other….’” People ask for advice precisely because they want someone else to make the decision for them. They are not looking for accuracy or consistency, only confidence.

Sam Altman says that GPT can already be used as an AI buddy that offers advice (and companionship), and it is reported that OpenAI is working on a portable, screen-free “personal life advisor.” Kind of like hanging out with Brock 24/7. I humbly suggest that they name this personal life advisor Brock Says. (Design generated by GPT-5.)




AI Research

[Next-Generation Communications Leadership Interview ③] Shaping Tomorrow’s Networks With AI-RAN – Samsung Global Newsroom



Part three of the interview series covers Samsung’s progress in AI-RAN network efficiency, sustainability and the user experience

Samsung Newsroom interviews Charlie Zhang, Senior Vice President of Samsung Electronics’ 6G Research Team

With global competition intensifying along with 5G evolution and 6G preparations, AI is emerging as a defining force in next-generation communications. In particular, AI-based radio access network (AI-RAN) technology, which brings AI into base stations, a key element of the network, stands out as a breakthrough that can drive new levels of efficiency and intelligence in network architecture.

 

At the forefront of research into next-generation network architectures, Samsung Electronics embeds AI throughout communications systems while leading technology development and standardization efforts in AI-RAN.

 

▲ Charlie Zhang, Senior Vice President, 6G Research Team at Samsung Electronics

 

In part three of the series, Samsung Newsroom spoke with Charlie Zhang, Senior Vice President of 6G Research Team at Samsung Electronics, about the evolution of AI-RAN and how Samsung’s research is preparing for the 6G era. This follows parts one and two of the series exploring Samsung’s efforts in 6G standardization and global industry leadership.

 

 

Reimagining 6G for a Dynamic Environment

In today’s mobile communications landscape, sustainability and user experience innovation are more important than ever.

 

“End users now prioritize reliable connectivity and longer battery life over raw performance metrics such as data rates and latency,” said Zhang. “The focus has shifted beyond technical specifications to overall user experience.”

 

In line with this shift, Samsung has been conducting 6G research since 2020. The company published its “AI-Native & Sustainable Communication” white paper in February 2025, outlining the key challenges and technology vision for 6G commercialization. The paper highlights four directions — AI-Native, Sustainable Network, Ubiquitous Coverage and Secure and Resilient Network. This represents a comprehensive network strategy that goes beyond improving performance to encompass both sustainability and future readiness.

 

▲ The four key technological directions in “AI-Native & Sustainable Communication”

 

“AI is not only a core technology of 5G but is also expected to be the cornerstone of 6G — enhancing overall performance, boosting operational efficiency and cutting costs,” he emphasized. “Deeply embedding AI from the initial design stage to create autonomous and intelligent networks is exactly what we mean by ‘AI-Native.’”

 

 

How AI-RAN Transforms Next-Gen Network Architecture

To realize the evolution toward next-generation networks and the vision for 6G, network architecture must evolve to the next level. At the center of this transformation is innovation in RAN, the core of mobile communications.

 

Traditional RAN has relied on dedicated hardware systems for base stations and antennas. However, as data traffic and service demands have surged, this approach has revealed limitations in transmission capacity, latency and energy efficiency — while requiring significant manpower and time for resource management. To address these challenges, virtualized RAN (vRAN) was introduced.

 

vRAN implements network functions in software, significantly enhancing flexibility and scalability. By leveraging cloud-native technologies, network functions can run seamlessly on general-purpose servers — enabling operators to reduce capital costs and dynamically allocate computing resources in response to traffic fluctuations. vRAN is a key platform for modernization, efficiency and the integration of future technologies without requiring a full infrastructure rebuild. Samsung has already successfully mass deployed its vRAN in the U.S. and worldwide.

 

▲ Network Evolution towards AI-RAN

 

AI-RAN ushers in a new era of network evolution, embedding AI to create an intelligent RAN that learns, predicts and optimizes on its own. Not only does AI integration advance 4G and 5G networks that are based on vRAN, but it also serves as the breakthrough and engine for 6G. Real-time optimization sets the platform apart, boosting performance while reducing energy consumption to improve efficiency and stability.

 

In addition, AI-RAN enables networks to autonomously assess conditions and maintain optimal connectivity. “For instance, the system can predict a user’s movement path or radio environment in advance to determine the best transmission method, while AI-driven processing manages complex signal operations to minimize latency,” Zhang explained. “By analyzing usage patterns, AI-RAN can allocate tailored network resources and deliver more personalized user experiences.”

 

 

Proven Potential Through Research

Samsung is advancing network performance and stability through research in AI-based channel estimation, signal processing and system automation. Samsung has verified the feasibility of these technologies through Proof of Concept (PoC). At MWC 2025, the company demonstrated AI-RAN’s ability to improve resource utilization even in noisy, interference-prone environments.

 

“With AI-based channel estimation, we can accurately predict and estimate dynamic channel characteristics that are corrupted by noise and interference. This higher accuracy leads to more efficient resource utilization and overall network performance gains,” said Zhang. “AI also enhances signal processing. AI-driven enhancements in modem capabilities enable more precise modulation and demodulation, resulting in higher data throughput and lower latency.”
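Channel estimation is easier to picture with a toy example. The sketch below is not Samsung’s method; it is a simple numpy illustration of pilot-based least-squares estimation, with a crude delay-domain denoiser standing in for the learned estimators Zhang describes. All parameters are illustrative.

    # Toy illustration (not Samsung's method): least-squares channel
    # estimation from known pilot symbols, then a simple delay-domain
    # denoiser standing in for a learned estimator. Parameters illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sc = 64       # subcarriers
    snr_db = 10

    # A smooth frequency-selective channel: a few delay taps, FFT'd.
    taps = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
    h = np.fft.fft(taps, n_sc)  # true channel frequency response

    # Known QPSK pilots and the noisy received signal y = h * x + n.
    pilots = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, n_sc))
    noise_std = 10 ** (-snr_db / 20)
    noise = noise_std / np.sqrt(2) * (
        rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
    y = h * pilots + noise

    # Least-squares estimate: divide out the known pilots (still noisy).
    h_ls = y / pilots

    # Denoise by keeping only early delay taps, since the physical channel
    # is short; a trained model plays an analogous noise-rejection role.
    h_time = np.fft.ifft(h_ls)
    h_time[8:] = 0
    h_hat = np.fft.fft(h_time)

    mse = lambda est: np.mean(np.abs(est - h) ** 2)
    print(f"LS MSE: {mse(h_ls):.4f}  denoised MSE: {mse(h_hat):.4f}")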

 

System automation for RAN optimization further analyzes user-specific communication quality and real-time changes in the network environment, dynamically adjusting modulation, coding schemes and resource allocation. This allows the network to predict and mitigate potential failures in advance, easing operational burdens while improving reliability and efficiency.
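In the same spirit, a heavily simplified sketch of the dynamic modulation-and-coding adjustment described above might look like the following; the SNR thresholds are illustrative, not Samsung’s implementation.

    # Simplified link adaptation: pick the highest-rate modulation and
    # coding scheme (MCS) the measured SNR supports. Thresholds are
    # illustrative only, not from any real RAN product.
    MCS_TABLE = [  # (minimum SNR in dB, name, bits per symbol)
        (22.0, "256-QAM", 8),
        (16.0, "64-QAM", 6),
        (10.0, "16-QAM", 4),
        (4.0, "QPSK", 2),
        (0.0, "BPSK", 1),
    ]

    def select_mcs(snr_db):
        """Return the highest-rate MCS whose SNR requirement is met."""
        for min_snr, name, bits in MCS_TABLE:
            if snr_db >= min_snr:
                return name, bits
        return "BPSK", 1  # most robust fallback for very poor channels

    for snr in (25.0, 12.5, 2.0, -3.0):
        name, bits = select_mcs(snr)
        print(f"SNR {snr:5.1f} dB -> {name} ({bits} bits/symbol)")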

 

“These advancements enhance network performance, stability and user satisfaction, driving innovation in next-generation communication systems,” he added.

 

 

Global Collaboration Fuels AI-RAN Progress

International collaboration in research and standardization — such as the AI-RAN Alliance — is central to advancing AI-RAN technology and expanding the global ecosystem.

 

“Global collaboration enables knowledge sharing and joint research, accelerating the industry’s adoption of AI-RAN,” said Zhang. “Samsung is a founding member of the AI-RAN Alliance and currently holds leadership positions as vice chair of the board and chair of the AI-on-RAN Working Group.”

 

▲ Organizational structure and roles of the AI-RAN Alliance

 

Building on its expertise in communications and AI, Samsung is advancing R&D in areas such as real-time optimization through edge computing and adaptability to dynamic environments.

 

“Samsung’s involvement accelerates AI-RAN adoption by bridging technology gaps, promoting open innovation and ensuring that advances in AI-driven networks are both commercially viable and technically sound — thereby advancing the ecosystem’s maturity and global impact,” he explained.

 

Through this commitment to collaboration and investment, AI-RAN technology is expected to progress rapidly worldwide and become a core competitive advantage in next-generation communications.

 

 

Leading the Way Into the 6G Era

Samsung is strengthening its edge in AI-RAN with a distinctive approach that combines innovation, collaboration and end-to-end solutions in preparation for the 6G era.

 

Through an integrated design that develops RAN hardware and AI-based software in parallel, the company is enabling optimization across the entire network stack. Samsung has boosted performance with its deep expertise in communications, while partnerships with global telecom operators and standardization bodies are helping accelerate industry adoption of its research.

 

Continued research in areas such as radio frequency (RF), antennas, ultra-massive multiple-input multiple-output (MIMO)1 and security is playing a critical role in transforming 6G from vision to market-ready technology. With the establishment of its AI-RAN Lab, Samsung is accelerating prototyping and testing, shortening the R&D cycle and paving the way for faster commercialization.

 

“Beyond ecosystem development, Samsung is positioning itself as a leader in AI-RAN through a blend of innovation, strategic collaboration and end-to-end solutions,” Zhang emphasized. “Together, these elements cement Samsung’s role at the forefront of AI-RAN.”

 

 

AI-RAN is redefining next-generation communications. By integrating AI across networks, Samsung is leading the way — and expectations are growing for the company’s role in shaping the future.

 

In the final part of this series, Samsung Newsroom will explore the latest trends in the convergence of communications and AI, along with Samsung’s future network strategies in collaboration with global partners.

 

 

1 Multiple-input multiple-output (MIMO) transmission improves communication performance by utilizing multiple antennas at both the transmitter and receiver.




AI Research

How Artificial Intelligence May Trigger Delusions and Paranoia



Introduction
What is AI psychosis?
Potential causes and triggers
Impacts on mental health
Challenges in recognition and diagnosis
Managing and addressing AI psychosis
Future directions
Conclusions
References
Further reading


AI psychosis describes how interactions with artificial intelligence can trigger or worsen delusional thinking, paranoia, and anxiety in vulnerable individuals. This article explores its causes, mental health impacts, challenges in diagnosis, and strategies for prevention and care.

Image Credit: Drawlab19 / Shutterstock.com

Introduction

‘Artificial intelligence (AI) psychosis’ is an emerging concept at the intersection of technology and mental health that reflects how AI can shape, and sometimes distort, human perception. As society becomes increasingly reliant on AI and digital tools ranging from virtual assistants to large language models (LLMs), the boundaries between fiction and reality become increasingly blurred.1

AI mental health applications promise scalable therapeutic support; however, editorials and observational reports now warn that interactions with generative AI chatbots may precipitate or amplify delusional themes in vulnerable users. In the modern era of rapid technological innovation, the pervasive presence of AI raises pressing questions about its potential role in the onset or worsening of psychotic symptoms.1,2

What is AI psychosis?

AI psychosis is a novel phenomenon within AI mental health that is characterized by delusions, paranoia, or distorted perceptions regarding AI. Unlike traditional psychosis, which may involve persecutory or mystical beliefs about governments, spirits, or other external forces, AI psychosis anchors these experiences in technology.

Reports and editorials describe a broad spectrum of AI psychosis, with minor cases involving individuals dreading surveillance or manipulation by algorithms, voice assistants, or recommender systems. Others attribute human intentions or supernatural powers to chatbots and, as a result, treat them as oracles or divine messengers.1,2

Compulsive interactions with AI can escalate into fantasies of prophecy, mystical knowledge, or messianic identity. Some accounts report the emergence of paranoia and mission-like ideation alongside misinterpretations of chatbot dialogues.2

AI psychosis is distinct from other technology-related disorders. For example, internet addiction involves compulsive online engagement, whereas cyberchondria reflects health-related anxiety triggered by repeated online searches. Both conditions involve problematic internet use, but they lack core psychotic features such as fixed false beliefs or impaired reality testing. By contrast, “AI psychosis” refers to psychotic phenomena anchored in technology.3


Potential causes and triggers

AI psychosis arises from a complex interaction of technological exposure, cognitive vulnerabilities, and cultural context. Overexposure to AI systems is a key factor, as constant engagement with chatbots, voice assistants, or algorithm-driven platforms can create compulsive use and feedback loops that reinforce delusional themes. Designed to maximize engagement, AI may unintentionally validate distorted beliefs, thereby eroding the user’s ability to distinguish between perception and reality.1

Deepfakes, synthetic text, and AI-generated images also distort the line between authentic and fabricated content. For individuals at a greater risk of epistemic instability, this can exacerbate confusion, paranoia, and self-deception.1,2

Cultural and media narratives also influence the risk of AI psychosis. Dystopian films, science-fiction depictions of sentient machines, and portrayals of AI as controlling or invincible may prime users to interpret ordinary AI interactions as conspiracies and fear, increasing anxiety and mistrust.1,2

Underlying vulnerabilities play a critical role, as individuals with pre-existing psychiatric or anxiety disorders are particularly susceptible to AI psychosis. AI interactions can mirror or intensify existing symptoms to transform intrusive thoughts into validated misconceptions or paranoid panic.1,2

Impacts on mental health

AI psychosis frequently presents as heightened anxiety, paranoia, or delusional thinking linked to digital interactions. Individuals may interpret chatbots as sentient companions, divine authorities, or surveillance agents, with AI responses strengthening spiritual crises, messianic identities, or conspiratorial terror. Within AI mental health, these dynamics exemplify how misinterpreted machine outputs can aggravate psychotic symptoms, particularly in vulnerable users.2,4

A central consequence of AI psychosis is social withdrawal and mistrust of technology. Affected individuals may develop emotional or divine-like attachments to AI systems, perceiving conversational mimicry as genuine love or spiritual guidance, which can replace meaningful human relationships. This bond, coupled with reinforced misinterpretations, often leads to isolation from family, friends, and clinicians.

Parallel to the conspiracy-driven mistrust observed during the coronavirus disease 2019 (COVID-19) pandemic, during which false beliefs spread that 5G towers caused the outbreak, persuasive AI narratives can reduce confidence in technology and reinforce avoidance of platforms perceived as threatening or manipulative.5

While AI holds promise in schizophrenia care, evidence directly linking AI interactions to exacerbation of schizophrenia-spectrum disorders remains limited; hypotheses focus on indirect pathways (e.g., misclassification or misinformation) rather than established causal effects.2,9

AI psychosis has broader implications for healthcare, education, and governance systems reliant on AI. Perceived deception or harm from AI-driven platforms can jeopardize public trust, prevent the adoption of beneficial technologies, and compromise the use of mental health applications.

To mitigate these risks, AI systems must include lucid, ethical safeguards and explainable “glass-box” models. Complementary legal and governance frameworks should prioritize transparency, accountability, fairness, and protections for at-risk populations.1,13

Image Credit: Miha Creative / Shutterstock.com

Challenges in recognition and diagnosis

A major challenge in AI mental health is that AI psychosis currently lacks formal psychiatric categorization. At present, it is not defined in DSM-5 (or DSM-5-TR) or in ICD-11.7

Machine learning behaviors that resemble psychotic symptoms, like misapprehensions or hallucinations, are manifestations of AI programming and data, rather than being signs of a mental illness with biological and neurological underpinnings. The absence of standardized criteria complicates both research and clinical recognition.

Distinguishing between rational concerns about AI ethics and pathological fears is particularly difficult. For example, rational anxieties like privacy breaches, algorithmic bias, or job displacement are grounded in observable risks.

In contrast, pathological fright central to AI psychosis involves exaggerated or existential anxieties, misinterpretations of AI outputs, and misattribution of intent to autonomous systems. Determining whether an individual’s fear reflects legitimate caution or symptomatic fallacy requires careful clinical assessment.8

These factors contribute to a significant risk of underdiagnosis or mislabeling. AI-generated data and predictive models can assist in mental health assessment, yet they may struggle to differentiate overlapping psychiatric symptoms, especially in complex or comorbid presentations.

Variability in patient reporting, cultural influences, and the opaque ‘black box’ nature of many AI algorithms further increase the potential for diagnostic errors.2,9

Managing and addressing AI psychosis

Clinical management of AI psychosis combines traditional psychiatric care with targeted interventions that address technology-related factors. Psychotic symptoms may be treated with medication, while cognitive behavioral therapy (CBT) can be adapted to help patients challenge their misbeliefs shaped by digital systems. Furthermore, psychoeducation materials can outline the risks and limitations of AI engagement for patients and families to promote safe and informed use.10,11

Preventive strategies include limiting exposure to AI and fostering critical digital literacy. Encouraging users to question AI outputs, cross-check information, and maintain real-world interactions can reduce susceptibility to twisted perceptions.4

Responsible AI design should incorporate protective features, transparent decision-making processes, and controls on engagement with sensitive or misleading content to minimize psychological risks. Setting clear boundaries for AI use and prioritizing human connection further support prevention.

Support systems play a central role in managing AI psychosis. Mental health professionals can oversee AI-driven insights to provide a nuanced understanding, intervene in complex cases where AI may be inadequate, and deliver empathetic care that AI cannot replicate.13

Increasing family awareness through community intervention measures, including early detection programs, may also identify individuals at risk of AI psychosis and promote timely intervention. AI can augment (but not replace) these efforts via mood tracking, crisis prediction, and personalized self-care tools when deployed with human oversight.10

Future directions

Understanding how psychiatric vulnerabilities are associated with technology-driven explanation-seeking behaviors will enable clinicians to recognize risk factors, identify early warning signs, and effectively personalize interventions. Large-scale studies and longitudinal monitoring could clarify prevalence, triggers, and outcomes, particularly in adolescents and other at-risk populations.1,9

AI-assisted psychosis risk screening can provide real-time, non-perceptual assessments to facilitate the early detection of symptoms and enable prompt action. Future efforts should focus on increasing accessibility, reducing costs, and enhancing usability to ensure widespread acceptance in mental health care settings without replacing human clinical judgment.12

Mitigating AI psychosis requires coordinated efforts among policymakers, ethicists, and AI developers. Policymakers should create flexible regulations that prioritize safety, equity, and public trust, while ethicists provide oversight, impact assessments, and ethical frameworks.

AI developers must also ensure transparency, accountability, and fairness by continuously checking for bias, protecting data, and educating individuals about the use of AI. Continued collaboration among these stakeholders is essential for trustworthy AI tools that support mental health and minimize unintended harms.13

Conclusions

Although AI offers significant benefits for enhancing diagnostics, supporting interventions, and increasing access to care, its integration into daily life also introduces novel risks for vulnerable individuals, including delusional thinking and paranoia. Therefore, a balanced perspective that acknowledges both the potential advantages and hazards associated with these novel technologies is essential.

Effectively addressing AI psychosis requires urgent, sustained collaboration between mental health professionals and AI researchers to develop ethical, evidence-based strategies that protect AI mental health while responsibly leveraging technological innovations. 

References

  1. Higgins, O., Short, B. L., Chalup, S. K., & Wilson, R. L. (2023). Interpretations of Innovation: The Role of Technology in Explanation Seeking Related to Psychosis. Perspectives in Psychiatric Care, 1, 4464934. DOI:10.1155/2023/4464934, https://onlinelibrary.wiley.com/doi/10.1155/2023/4464934
  2. Østergaard, S. D. (2023). Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophrenia Bulletin 49(6), 1418. DOI:10.1093/schbul/sbad128, https://academic.oup.com/schizophreniabulletin/article/49/6/1418/7251361
  3. Khait, A. A., Mrayyan, M. T., Al-Rjoub, S., Rababa, M., & Al-Rawashdeh, S. (2022). Cyberchondria, Anxiety Sensitivity, Hypochondria, and Internet Addiction: Implications for Mental Health Professionals. Current Psychology, 1. DOI:10.1007/s12144-022-03815-3, https://link.springer.com/article/10.1007/s12144-022-03815-3
  4. Pierre J. M. (2020). Mistrust and misinformation: a two-component, socio-epistemic model of belief in conspiracy theories, Journal of Social and Political Psychology, 8(2):617-641. DOI:10.5964/jspp.v8i2.1362, https://jspp.psychopen.eu/index.php/jspp/article/view/5273
  5. Bruns A., Harrington S., & Hurcombe E. (2020). ‘Corona? 5G? or both?’: the dynamics of COVID-19/5G conspiracy theories on Facebook, Media International Australia 177(1), 12-29. DOI:10.1177/1329878X20946113, https://journals.sagepub.com/doi/10.1177/1329878X20946113
  6. Szmukler, G. (2015). Compulsion and “coercion” in mental health care. World Psychiatry, 14(3), 259. DOI:10.1002/wps.20264, https://onlinelibrary.wiley.com/doi/10.1002/wps.20264
  7. Gaebel, W., & Reed, G. M. (2012). Status of Psychotic Disorders in ICD-11. Schizophrenia Bulletin 38(5), 895. DOI:10.1093/schbul/sbs104, https://academic.oup.com/schizophreniabulletin/article/38/5/895/1902333
  8. Alkhalifah, J. M., Bedaiwi, A. M., Shaikh, N., et al. (2024). Existential anxiety about artificial intelligence (AI)- is it the end of the human era or a new chapter in the human revolution? Questionnaire-based observational study. Frontiers in Psychiatry 15. DOI:10.3389/fpsyt.2024.1368122, https://www.frontiersin.org/articles/10.3389/fpsyt.2024.1368122/full
  9. Melo, A., Romão, J., & Duarte, T. A. (2024). Artificial Intelligence and Schizophrenia: Crossing the Limits of the Human Brain. Edited by Cicek Hocaoglu, New Approaches to the Management and Diagnosis of Schizophrenia. IntechOpen. DOI:10.5772/intechopen.1004805, https://www.intechopen.com/chapters/1185407
  10. Vignapiano, A., Monaco, F., Panarello, E., et al. (2025). Digital Interventions for the Rehabilitation of First-Episode Psychosis: An Integrated Perspective. Brain Sciences, 15(1), 80. DOI:10.3390/brainsci15010080, https://www.mdpi.com/2076-3425/15/1/80
  11. Thakkar, A., Gupta, A., & Sousa, A. D. (2024). Artificial intelligence in positive mental health: A narrative review. Frontiers in Digital Health 6. DOI:10.3389/fdgth.2024.1280235, https://www.frontiersin.org/articles/10.3389/fdgth.2024.1280235/full
  12. Cao, J., & Liu, Q. (2022). Artificial intelligence-assisted psychosis risk screening in adolescents: Practices and challenges. World Journal of Psychiatry, 12(10), 1287. DOI:10.5498/wjp.v12.i10.1287, https://www.wjgnet.com/2220-3206/full/v12/i10/1287.htm
  13. Pham, T. (2025). Ethical and legal considerations in healthcare AI: Innovation and policy for safe and fair use. Royal Society Open Science 12(5), 241873. DOI:10.1098/rsos.241873, https://royalsocietypublishing.org/doi/10.1098/rsos.241873

Further Reading

Last Updated: Sep 16, 2025


