
Tools & Platforms

AI psychosis sees users ‘co-creating delusions with technology’ – expert


Clinicians and researchers should remain abreast of developments in artificial intelligence (AI) as the emergence of ‘AI psychosis’ marks the first time people are creating delusions together with technology, an expert has outlined.

Speaking on an episode of GlobalData’s Instant Insights podcast, Dr. Hamilton Morrin, a psychiatrist and a doctoral fellow at King’s College London, said: “This technology is developing at quite a rapid pace. In some areas, it hasn’t necessarily fully delivered yet on the promises made, but that doesn’t change the fact that it is very much different from what we’ve seen before. In the past, you could describe people as having delusions about technology; now, for the very first time, people are having delusions with technology: this co-creation of delusional beliefs has a kind of echo-chamber-of-one effect, or, as some have put it, a ‘digital folie à deux’.


“So, it really is, I think, important, certainly, for clinicians and researchers to remain abreast of developments in this area and understand how people are using them.”

Morrin was appearing on the podcast after so-called AI psychosis was in the news last week, when Mustafa Suleyman, the CEO of Microsoft’s consumer AI division, raised concerns about the growing number of cases being reported.

Suleyman wrote: “I’m growing more and more concerned about what is becoming known as the ‘psychosis risk’, and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.”

Referencing Suleyman’s article, Morrin, who co-authored the recent paper Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it), noted that, whatever measure is used, AI may not yet have reached the point of seeming conscious, but for many people it already does.

“Many people already feel that they’re interacting with something conscious,” he said. “And, even if that’s not the case, if they feel that way, that’s going to have an impact on their mental state and their emotional dependence.”

Dr. Hamilton Morrin.

Presentation of AI psychosis

Morrin was quick to note that the term ‘AI psychosis’ may be a misnomer, with observations of it so far really only indicating delusions – “fixed, firm, false beliefs”. Psychosis, meanwhile, is a broader syndrome that can include delusions but also hallucinations and disorganised thinking, and can occur in schizophrenia, mood disorders and other mental and medical conditions.

“In these cases, at least from what we could see, from what was reported anecdotally, we only really saw evidence of delusions – none of the other kind of hallmark symptoms that you might see in a more classic psychotic disorder,” he explained. “Specifically, the delusions we saw had three main flavours, one of which was people believing that they’d had an awakening of sorts to the true nature of reality in a metaphysical sense. Another theme was people believing that they had formed contact with a sentient, powerful, all-knowing artificial intelligence. And the third theme was that of people developing intense emotional bonds and attachments with the AI chatbot in question.”

In addition to noting that there has yet to be a comprehensive study of AI psychosis, Morrin made clear that cases do not appear to be especially widespread. He added, though, that given the rapid advancement of the technology and the new territory being charted, it remains a cause for concern.

“I want to emphasise we don’t necessarily know how common this issue is,” he said. “And I want to emphasise that if this was something causing psychosis out of nowhere, completely, you know, de novo psychosis, we’d be seeing massive increases in presentations to emergency departments across the country and worldwide. I can say, at least for now, that certainly isn’t the case, so we’re not dealing with a new epidemic.

“But, given what we know about just how debilitating and life-destroying psychosis can be for the person suffering it and those around them, it’s certainly something that companies should take notice of and do as much as possible to address. And, even outside the realm of psychosis, this issue of emotional dependence is a growing matter that merits attention and consideration in terms of safeguards and collaboration with experts in the field.”

AI psychosis safeguards

Of potential safeguards that could be put in place, he commented: “There was a short piece in Nature where four safeguards were proposed by Ben-Zion, and these included the fact that AI should continually reaffirm its non-human status; that chatbots should flag patterns of language in prompts indicative of psychological distress; that there should be conversational boundaries (i.e. no emotional intimacy or discussion of certain risky topics like suicide); and that AI platforms must start involving clinicians, ethicists and human-AI specialists in auditing emotionally responsive AI systems for unsafe behaviours.

“Beyond that, we suggest some additional safeguards: limiting the types of personal information that one can share, in order to protect privacy; companies communicating clear and transparent guidelines for acceptable behaviour and use; and the provision of accessible tools for users to report concerns, with prompt and responsive follow-up to ensure trust and accountability.”
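The distress-flagging safeguard described above can be sketched in code. The following is a minimal, hypothetical illustration only: the patterns, function names and canned response are invented for this example, and a real system would rely on clinically validated classifiers rather than a hand-written keyword list.

```python
import re

# Hypothetical, illustrative patterns only. A production system would use
# clinically validated distress classifiers, not hand-written keywords.
DISTRESS_PATTERNS = [
    r"\bno one (understands|listens to) me\b",
    r"\byou('re| are) the only one\b",
    r"\b(awakened?|chosen) to the true nature of reality\b",
    r"\bare you (conscious|sentient|alive)\b",
]

def flag_distress(prompt: str) -> list[str]:
    """Return the patterns a user prompt matches, if any."""
    lowered = prompt.lower()
    return [p for p in DISTRESS_PATTERNS if re.search(p, lowered)]

def respond(prompt: str) -> str:
    """Reaffirm non-human status when a prompt is flagged, per the proposal."""
    if flag_distress(prompt):
        return ("I'm an AI language model, not a conscious being. "
                "If you're struggling, please talk to a person you trust.")
    return "OK"  # a normal model response would be generated here
```

In a deployed chatbot, a flag like this would route the conversation toward boundaries and signposting rather than deeper emotional engagement.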

Morrin added: “I think it’s incumbent on us as clinicians and researchers to also meet people where they’re at and try to help them on a day-to-day basis if they are using these models. So we propose that all clinicians should have a decent understanding of current LLMs [large language models] and how they’re used, and that they should be comfortable asking their patients how much they use them and in what capacities.”









11 companies in Czechia that are harnessing the power of AI


For many businesses, the challenge isn’t a lack of information but finding the right detail in a sea of documents. Prague-based Phi Technologies addresses this with PhiBox, an AI-powered platform that scans, interprets, and searches vast collections of paperwork to deliver precise answers in seconds. Powered by optical character recognition, PhiBox goes beyond keyword searches to extract text, read graphs, and interpret visual data.

It already works in Czech, English, and German, making it useful for multinational teams. From law firms handling discovery documents to corporations managing compliance records, PhiBox turns administrative drudgery into accessible, actionable knowledge — showing how AI can transform information overload into real business value.
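As a rough illustration of ranked document retrieval over extracted text (and not PhiBox’s actual pipeline, which the article notes also reads graphs and visual data), here is a bare-bones TF-IDF scorer; the function names and toy documents are invented for this sketch.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase and split into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def rank_documents(query: str, docs: dict[str, str]) -> list[tuple[str, float]]:
    """Score each document against the query with a minimal TF-IDF weighting."""
    n = len(docs)
    doc_tokens = {name: Counter(tokenize(text)) for name, text in docs.items()}
    df = Counter()  # in how many documents does each term appear?
    for counts in doc_tokens.values():
        df.update(set(counts))
    ranked = []
    for name, counts in doc_tokens.items():
        score = sum(
            counts[term] * math.log((n + 1) / (df[term] + 1))
            for term in tokenize(query)
            if term in counts
        )
        ranked.append((name, score))
    return sorted(ranked, key=lambda pair: -pair[1])

# Toy corpus standing in for OCR-extracted paperwork.
corpus = {
    "contract.txt": "the indemnity clause limits liability for either party",
    "report.txt": "quarterly revenue grew while costs were stable",
}
```

Scoring by term weight rather than exact phrase match is one small step beyond literal keyword search; commercial systems layer semantic embeddings and visual understanding on top.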




Indonesia Drafts Stricter AI Rules Amid Rising Deepfake Concerns


TL;DR:

  • Indonesia drafts stricter AI regulations, targeting deepfakes as Nezar Patria urges platforms to provide free detection tools.
  • Deepfake content has surged 550% in five years, raising alarms over misinformation and digital safety.
  • Jakarta’s policies align with global efforts, following China’s watermarking rules and EU’s proposed AI transparency laws.
  • The detection battle remains tough, as deepfake creation tools advance faster than available verification technology.

Indonesia is intensifying its push for tighter artificial intelligence (AI) regulation as concerns over deepfakes continue to mount.

At an event in Jakarta on September 10, Nezar Patria, the country’s deputy minister for communications and digital, called on major technology companies to provide free tools that help users identify AI-generated content.

Patria pointed to research from Sensity AI showing a staggering 550% rise in deepfake content over the past five years. He warned that the actual scale could be much larger, given the rapid accessibility of generative AI tools. According to him, while the technology behind deepfakes is advancing rapidly, ordinary users lack the resources to verify what they see online.

Tech Giants Called to Step Up

The Indonesian government believes major platforms such as Google, Meta, and others already have the algorithms and computational capacity to deploy large-scale detection systems. What is missing, Patria argued, is public access to these tools.

“Detection capabilities shouldn’t be locked away behind private walls,” Patria emphasized, suggesting that transparency tools must be integrated into the platforms millions rely on daily.

By offering detection features for free, companies could help users spot hoaxes, misinformation, and manipulated videos before they spread widely.

Indonesia Aligns With Global AI Regulation

Indonesia’s move mirrors broader international efforts to confront the deepfake challenge. China already requires watermarks on AI-generated content, while the European Union has proposed new laws mandating clear labeling and transparency for synthetic media.
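To make the labeling idea concrete, here is one simplistic (and easily stripped) tagging scheme: hiding a provenance label in zero-width characters appended to generated text. This is an invented illustration, not how China’s watermarking rules or any platform actually work; production provenance efforts such as the C2PA standard rely on cryptographically signed metadata instead.

```python
from typing import Optional

# Two zero-width characters encode the bits of a hidden label.
ZW = {"0": "\u200b", "1": "\u200c"}  # zero-width space / zero-width non-joiner
REV = {v: k for k, v in ZW.items()}

def embed_label(text: str, label: str = "AI-GENERATED") -> str:
    """Append the label, encoded as invisible characters, to the text."""
    bits = "".join(f"{ord(ch):08b}" for ch in label)
    return text + "".join(ZW[bit] for bit in bits)

def detect_label(text: str) -> Optional[str]:
    """Recover a hidden label if one is present; otherwise return None."""
    bits = "".join(REV[ch] for ch in text if ch in REV)
    if not bits or len(bits) % 8 != 0:
        return None
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
```

The ease with which such a mark can be removed (a simple copy-paste through a plain-text filter deletes it) is exactly why regulators are pushing for detection tools and signed provenance rather than trusting any single watermark.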

Data shows that more than 69 countries have introduced over 1,000 AI-related policy proposals, many aimed at reducing risks associated with misinformation and harmful synthetic content. Jakarta’s approach signals that Southeast Asia’s largest economy intends to play an active role in shaping ethical AI use, not just within its borders but as part of a global movement.



Indonesia already enforces digital safety measures through the ITE Law and the PDP Law. The government is now drafting a new set of rules specifically focused on ethical and responsible AI deployment, positioning itself among nations prioritizing both innovation and public protection.

A Race Between Creation and Detection

Despite the urgency, experts note that detection technologies face an uphill battle. The development of generative adversarial networks (GANs) has made creating realistic deepfakes faster and cheaper than ever. In contrast, detection systems must constantly evolve to keep pace with new manipulation techniques.

Even institutions like the U.S. Defense Advanced Research Projects Agency (DARPA) are investing heavily in deepfake detection, underscoring the scale of the technical challenge. Indonesia’s demand for free tools is therefore not only about user empowerment but also about bridging a critical accessibility gap.

As the world witnesses more governments demanding transparency in AI, Indonesia’s regulatory push adds weight to the argument that AI innovation must be balanced with safeguards against misuse. For now, the success of these measures will depend on how tech giants respond, and whether they are willing to place public safety ahead of commercial advantage.

 




MBZUAI and G42 Launch K2 Think: Compact AI Model Redefining Advanced Reasoning



The Institute of Foundation Models at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and G42 have announced the launch of K2 Think, a leading open-source system for advanced AI reasoning.

K2 Think embodies a new approach to building smarter, more efficient AI. With just 32 billion parameters, it outperforms flagship reasoning models that are 20X larger. This breakthrough in parameter efficiency makes K2 Think a powerful alternative for advanced reasoning, redefining what is possible with compact architectures.

Built on six pillars of innovation, K2 Think represents a new class of reasoning model. It employs long chain-of-thought supervised fine-tuning to strengthen logical depth, followed by reinforcement learning with verifiable rewards to sharpen accuracy on hard problems. Agentic planning allows the model to decompose complex challenges before reasoning through them, while test-time scaling techniques further boost adaptability. In addition, K2 Think will soon be available on Cerebras’ wafer-scale, inference-optimized compute platform, enabling researchers and innovators worldwide to push the boundaries of reasoning performance at lightning-fast speed. With speculative decoding optimized for Cerebras hardware, K2 Think will achieve unprecedented throughput of 2,000 tokens per second, making it both one of the fastest and most efficient reasoning systems in existence.
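Speculative decoding, mentioned above, is a general technique in which a small draft model proposes several tokens and the larger target model verifies them, keeping the longest correct prefix plus one correction. The sketch below is a deterministic toy with invented stand-in models, not K2 Think’s or Cerebras’ implementation; real systems verify draft tokens probabilistically and in a single parallel pass, which is where the speed-up comes from.

```python
def draft_model(prefix: list[int], k: int) -> list[int]:
    """Toy 'small' model: guesses that tokens simply count upward."""
    last = prefix[-1]
    return [last + i for i in range(1, k + 1)]

def target_model(prefix: list[int]) -> int:
    """Toy 'large' model: counts upward too, but wraps back to 0 after 5."""
    return 0 if prefix[-1] >= 5 else prefix[-1] + 1

def speculative_decode(prompt: list[int], total_len: int, k: int = 4) -> list[int]:
    """Accept the draft's tokens while the target agrees; on the first
    disagreement, keep the target's token and draft again from there."""
    seq = list(prompt)
    while len(seq) < total_len:
        guesses = draft_model(seq, k)
        kept: list[int] = []
        for g in guesses:
            expected = target_model(seq + kept)
            if g == expected:
                kept.append(g)
            else:
                kept.append(expected)  # target's correction ends this round
                break
        seq += kept
    return seq[:total_len]
```

Because acceptance is exact here, the output matches what the target model would produce alone, only with fewer target invocations whenever the draft guesses correctly.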

K2 Think ranks among the industry’s top reasoning systems, leading all open-source models in math performance across AIME ’24/’25, HMMT ’25, and OMNI-Math-HARD.

More than a technical achievement, K2 Think is a defining moment for AI in the UAE. It reflects how open innovation and close public–private partnerships can position Abu Dhabi as a global leader in AI, demonstrating that the future of reasoning will be shaped not only by size, but by ingenuity and collaboration.

“The new global benchmark set by K2 Think underscores the pioneering excellence of MBZUAI’s Institute of Foundation Models initiative, an expedited pathway for global collaboration and cutting-edge research. It is also an example of the UAE’s commitment to building advanced systems that are developed by our institutions and shared with the world – ultimately progressing technically groundbreaking, practical, and scalable innovations with transformative global impact.”

-His Excellency Khaldoon Khalifa Al Mubarak, Chairman of MBZUAI’s Board of Trustees and Member of the Artificial Intelligence and Advanced Technology Council (AIATC) 

“K2 Think has shifted the AI reasoning paradigm from ‘bigger is better’ to ‘smarter is better’. MBZUAI, supported by the UAE ecosystem, is pushing the AI frontier with technology that is open, efficient and highly capable. By proving that smaller, more resourceful models can rival the largest reasoning systems, this milestone marks the beginning of the next wave of AI innovation.”

-Peng Xiao, MBZUAI Board Member, Council Member of Abu Dhabi’s AI and Advanced Technology Council, and Group CEO, G42 

Unlike most “open” AI models that stop at releasing weights, K2 Think is fully open source — from training data and parameter weights to software code for deployment and test-time optimization. This new level of transparency ensures that every step of how the model learns to reason can be studied, reproduced, and extended by the global research community.

 “K2 Think, developed by MBZUAI’s Institute of Foundation Models, is a significant advancement for the global AI research and development community. By delivering these advances in a fully transparent framework, we are ushering in a new era of cost-effective, reproducible and accountable AI. For an institution just five years young, we are immensely proud of our global researchers, engineers, and teams who are advancing science and technology with ingenuity and a pioneering spirit.”

-Professor Eric Xing, MBZUAI President and University Professor 

K2 Think builds on a growing family of UAE-developed open-source models, including Jais (the world’s most advanced Arabic LLM), NANDA (Hindi), and SHERKALA (Kazakh), and extends the pioneering legacy of K2-65B, the world’s first fully reproducible open-source foundation model released in 2024.

K2 Think is available today at k2think.ai and on Hugging Face.


