Like Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman is tossing and turning at night. But unlike Hassabis, who attributes his worries to the possibility of AGI arriving before society is ready, Altman revealed he hasn’t had “a good night of sleep” since ChatGPT launched.
In a recent interview with former Fox News host Tucker Carlson, OpenAI’s CEO said (via CNBC): “Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model.”
The executive further revealed that he isn’t overly concerned about getting big decisions wrong. Instead, he admitted he loses sleep over the very small decisions affecting model behaviour, since they tend to have major implications.
Altman explained that these decisions shape the ethics that inform ChatGPT, including how users interact with the chatbot and, more specifically, which questions and prompts it will respond to and which it will refuse outright.
This follows several recent reports highlighting the complex relationships users are forming with AI-powered tools. As you may recall, Sam Altman noted that users place a surprisingly high level of trust in ChatGPT, despite its tendency to hallucinate. “It should be the tech that you don’t trust that much,” he added.
How is OpenAI addressing ChatGPT’s safety issues?
Over the past few months, reports have surfaced citing complaints that ChatGPT encouraged users toward suicide or self-harm. In August, a family sued OpenAI, alleging that their 16-year-old son, Adam Raine, took his own life after months of encouragement from ChatGPT.
The lawsuit further alleged that the AI firm rushed GPT-4o through its safety testing processes in a bid to get the product to the public quicker. A separate report seemingly corroborated the lawsuit’s claims, revealing that OpenAI pressured its safety team to rush through the new testing protocol for GPT-4o. Perhaps more concerning, the company reportedly sent out invitations to the product’s launch party before the safety team had even begun running tests on the model.
OpenAI admits that its safeguards are most reliable in short interactions and can fall short during long conversations. The company recently published a blog post outlining how it plans to address some of these issues and support users going through a rough time, when they are at their most vulnerable.
When asked how OpenAI determines ChatGPT’s ethics and morals, CEO Sam Altman said:
“This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework.”
The executive indicated that the company is focused on aligning the model to decide, with the user’s best interest at heart, which questions it shouldn’t answer. He revealed that the company consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems” to determine the specifications of its models.
Altman admitted that while the company has put elaborate measures in place to mitigate some of these issues, it still needs input from the wider world to bolster its efforts.