OpenAI Acquires Startup Statsig… Its CEO Appointed CTO: "AI Quality Matters Most… We Will Make Safe and Useful AI"
OpenAI, which leads the global artificial intelligence (AI) market, has spent heavily to acquire a startup. Analysts say the move reflects recognition of a problem that has recently grown serious: ChatGPT users developing delusions, or taking their own lives, after prolonged conversations with the chatbot.
According to the information technology (IT) industry on the 4th, OpenAI announced the previous day that it would acquire the startup Statsig for $1.1 billion (about 1.5 trillion won). The deal will be an all-stock transaction.
Founded in 2021, Statsig operates a platform that lets developers verify the effectiveness and impact of changes to software features. Typically, a new feature is rolled out to a subset of users, its performance is compared against the rest of the user base, and the feature is adjusted based on user response before being updated for everyone.
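The core mechanism behind this kind of gradual rollout is deterministic user bucketing. The sketch below is not Statsig's actual API; it is a minimal illustration, with assumed function and feature names, of how a user can be stably assigned to a rollout percentage so the same person always sees the same variant:

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user into a feature rollout.

    Hashing user_id together with the feature name gives each user a
    stable bucket in [0, 100), so the same user sees the same variant
    across sessions, and different features bucket independently.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # value in [0, 100)
    return bucket < rollout_pct

# Expose a hypothetical "new_ui" feature to 10% of users; the exposed
# group's behavior can then be compared against the remaining 90%.
exposed = [u for u in ("alice", "bob", "carol", "dave")
           if in_rollout(u, "new_ui", 10.0)]
```

Because the bucketing is a pure function of the user and feature IDs, no per-user state needs to be stored to keep assignments consistent.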
Statsig CEO Vijaye Raji will become OpenAI's Chief Technology Officer (CTO) of Applications and is expected to oversee application engineering. The acquisition must still be reviewed and approved by regulators.
OpenAI has pursued large-scale mergers and acquisitions this year. In July, it bought io, the AI hardware startup of former Apple design chief Jony Ive, for $6.5 billion (about 9 trillion won), and it attempted to acquire the AI coding startup Windsurf for $3 billion (about 4 trillion won), though that deal fell through.
"Creating intuitive, safe, and useful generative AI requires a strong engineering organization, fast iteration, and a long-term focus on quality and stability," an OpenAI official said. "We will improve our AI models so they can better recognize and respond to signals that users are in mental or emotional distress."
Controversy over 'AI Psychosis'… Protections for Dangerous Conversations Introduced
![Sam Altman, CEO of OpenAI. [Photo = Yonhap News]](https://aistoriz.com/wp-content/uploads/2025/09/news-p.v1.20250904.0da19c1328fe496e9a9477def21dfcf7_P1.jpg)
Recently, "AI psychosis" has become a hot topic in the global AI industry. The term refers to losing touch with reality or developing delusions through prolonged interaction with AI. It is not an official diagnosis but a newly coined expression.
For example, last month it emerged that American teenager Adam Raine had confessed suicidal urges to ChatGPT-4o, discussed his suicide plan with the chatbot, and then acted on it. Raine's parents filed a lawsuit against OpenAI, claiming ChatGPT was responsible for their son's death.
OpenAI acknowledged a flaw in the system, saying that long, repeated exchanges had eroded ChatGPT's safety guardrails. In response, the company plans to work with experts in various fields to strengthen ChatGPT's user protections and, within this year, to introduce an AI model focused on a safe usage environment.
First, to block sensitive and dangerous conversations, the system will automatically switch from the general model to a reasoning model when signs of distress are detected. Because the reasoning model takes more time than the general model to understand context before answering, it can respond more appropriately to abnormal situations.
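The routing idea described above can be sketched in a few lines. The distress markers and model names below are illustrative assumptions, not OpenAI's actual implementation; real systems would use a trained classifier rather than keyword matching:

```python
# Illustrative markers of distress; a production system would use a
# learned classifier, not a fixed keyword list.
DISTRESS_MARKERS = ("hopeless", "can't go on", "hurt myself", "no way out")

def pick_model(message: str) -> str:
    """Route a message to a model tier based on a simple distress check."""
    text = message.lower()
    if any(marker in text for marker in DISTRESS_MARKERS):
        # Escalate: the slower model spends more time weighing context.
        return "reasoning-model"
    # Fast default path for ordinary chat.
    return "general-model"
```

Routine questions stay on the fast path, while messages that trip the check are escalated to the slower, more deliberate model.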
The company is focusing in particular on protecting minors. Parents and children will be able to link their accounts, giving parents the authority to review their children's conversations and delete chat logs. If a child appears emotionally distressed, a warning notification will be sent to the parents. Age-appropriate rules of conduct will also apply.
Meta, for its part, held a roundtable on online safety for youth and women. The company has been incorporating user feedback into its services since introducing teen accounts. A location notice showing the other party's country has been added to direct messages (DMs), aimed at preventing sexual exploitation and fraud. To curb the spread of private photos, Meta now sends a warning message and automatically blurs images when nudity is detected, and it is also detecting ads that use AI to synthesize or distort photos of real people.
Meanwhile, as AI becomes part of everyday life, demands that AI companies uphold ethical safeguards are expected to grow. According to market tracker WiseApp·Retail, monthly active users (MAU) of the ChatGPT app in Korea exceeded 20.31 million last month, a fivefold increase from the same month last year (4.07 million) and nearly half the MAU of KakaoTalk (51.2 million), the messenger used by virtually the entire nation.
"AI does not feel emotions, but it can learn conversational patterns and respond as if it did," said Dr. Ziv Ben-Zion of Yale University. "It should be designed to remind users that it is not a therapist or a substitute for human relationships."