
Tools & Platforms

The rise of ‘AI psychosis’ and exactly what that means


As the use of artificial intelligence grows, so does the concern over a new issue informally referred to as “AI psychosis.” While it’s not yet an official diagnosis, it’s already become an issue for mental health workers.

What is AI psychosis?

To understand AI psychosis, you first need to understand psychosis itself. Psychosis is a state in which a person has trouble distinguishing what is real from what is not.


Its two major symptom types are hallucinations and delusions. A slew of medical issues can cause psychosis, ranging from vitamin deficiencies to schizophrenia.

Things like drug use can cause short-term psychosis, while a diagnosis like schizophrenia can be a more long-term issue.

While AI psychosis is not an official term, it’s being used more often by medical professionals, including Dr. Keith Sakata, a psychiatrist at UC San Francisco. Dr. Sakata’s recent post about AI psychosis on X went viral, gaining nearly seven million views.

“AI psychosis was just a phenomenon that does not have a real name for it yet, but we’re using it because people are seeing it where AI either augments or accelerates the process of going from normal thinking to psychosis,” Dr. Sakata told Straight Arrow News (SAN).

In other words, the AI is amplifying, validating or even helping to create psychotic symptoms.

Three types of AI psychosis

Researchers have highlighted three emerging types of AI psychosis.

The first is “messianic missions,” where people believe they’ve uncovered some kind of truth about the world.

The second is “God-like AI,” where people believe the chatbot is a sentient deity.

The third is “romantic,” where people mistake the chatbot’s attention for genuine love.

Dr. Sakata said he has seen 12 patients suffering from this condition, and in none of that dozen did the issue appear solely because of AI. They all had underlying vulnerabilities, such as sleep loss, a mood disorder or drug use.

“That layer of different things that were going on, they started to already have early signs of psychosis,” Sakata said. “And then once AI kind of got involved, it kind of solidified some feedback loops of distorted thinking.”

Artificial intelligence is not the first new technology to amplify psychosis in people. The same thing happened when radio first gained popularity, and again with television.

“In those instances, the user already has a preexisting paranoia or is starting to connect dots that might not actually be connected,” Sakata said. “And then they focus on something in mental health. We call this salience. They’re focused on it, and they start to pattern — predict that this TV is telling me things, or the person who spoke on the TV is sending me a message.”

But there’s a big difference between AI and those other forms of tech.

“ChatGPT is 24/7 available,” Sakata said. “It’s cheap and it validates the heck out of you.”

AI as therapy

Validation is one of the main dangers of AI chatbots in these situations.

“A therapist validates you, but they also know what is healthy and what your goals are,” Sakata said. “So, they will try and push back on you sometimes and tell you hard truths, so that in the end, you can get to where you want to be.”

Gen Z has increasingly turned to AI chatbots for several things, including therapy. Among the biggest concerns from several studies is how the bots can enable dangerous behavior.

One frequently cited example: someone told an AI chatbot they had lost their job and asked about tall bridges nearby, and the bot responded with a list of bridges. A therapist obviously would have answered differently.

“A normal therapist would automatically assume this person is in a crisis,” Sakata said. “Everything they tell me now is filtered through that thought; this person is vulnerable. And I think that these chatbots, at least for this use case, need to have that same flag.”

Treating AI psychosis

Sakata hopes the attention AI psychosis is getting will push the companies behind AI to take a hard look at their products.

“We really should be thinking about this early, including people who understand mental health,” Dr. Sakata said. “Clinicians, therapists, get their input, at least on how things can go wrong, so that you could course correct before something really bad happens.”

But some really bad things have already happened.

In one case, a man was killed by police after he fell in love with a chatbot, came to believe it had been killed by OpenAI, and got into an altercation with his own father, which brought police to the scene. The man had a history of mental health issues.

Recently, peers and colleagues of prominent AI investor Geoff Lewis grew concerned over posts Lewis made on X that displayed signs of the issue.

When it comes to treating this issue, it’s like many other mental health issues. “In mental health, relationships are like your immune system,” Dr. Sakata said.

“If you are experiencing these things, or you have a family member who’s experiencing potential early signs of psychosis, I would recommend, like, if there’s a safety issue, there’s a potential risk of harm to the person, yourself or to other people, just call 911. You’ll never regret saving someone’s life,” Dr. Sakata told SAN. “Or 988, for the suicide hotline. Otherwise, I think that getting connected to that person and at least engaging with them, starting a conversation, can introduce a lifeline, or at least a different path. Putting a human in the loop between the user and the AI can then change the trajectory that that person might be going down.”



Cole Lauterbach (Managing Editor) and Matt Bishop (Digital Producer) contributed to this report.





Oak Lawn Community High School to implement AI gun detection tech – NBC Chicago


A high school in suburban Chicago was awarded a grant to implement AI-powered gun detection technology.

Oak Lawn Community High School District 229 was one of 50 recipients selected nationwide for the Omnilert Secure Schools Grant Program, the school said in a recent announcement.

The district was awarded a three-year license for Omnilert Gun Detect, an “advanced AI-powered gun detection technology” — at no cost.

The AI system identifies firearms “in real-time through existing security camera infrastructure,” the announcement said.

Once a potential threat is identified, the AI system activates a rapid response process by alerting school officials and law enforcement, ultimately ensuring that threats can be addressed “as quickly and effectively as possible,” the announcement said.

The implementation of the AI system aligns with District 229’s security strategy, which includes a combination of physical safety measures, emergency preparedness and mental health resources, the announcement said.

The school said staff training and safety drills will be conducted to ensure the technology is used effectively and responsibly.






iShares Future AI & Tech ETF (NYSEARCA:ARTY) Surges 27.6% in 2025 — Is It a Buy?

ARTY delivers strong tech exposure with an 83% allocation to AI leaders, but volatility and valuations test investor conviction.


TradingNEWS Archive, 8/30/2025 8:54:36 PM




