

OpenAI claims new GPT-5 model boosts ChatGPT to ‘PhD level’



Sam Altman wearing a headset microphone on stage at an event (Getty Images)

ChatGPT-maker OpenAI has unveiled the long-awaited latest version of its artificial intelligence (AI) chatbot, GPT-5, saying it can provide PhD-level expertise.

OpenAI co-founder and chief executive Sam Altman billed the company’s new model as “smarter, faster, and more useful”, lauding it as ushering in a new era for ChatGPT.

“I think having something like GPT-5 would be pretty much unimaginable at any previous time in human history,” he said ahead of Thursday’s launch.

GPT-5’s release and claims of its “PhD-level” abilities in areas such as coding and writing come as tech firms continue to compete to have the most advanced AI chatbot.

Elon Musk recently made similar claims about his own AI chatbot, Grok, which has been plugged into X (formerly Twitter).

During the launch of Grok’s latest iteration last month, Musk said it was “better than PhD level in everything” and called it the world’s “smartest AI”.

Meanwhile, Altman said OpenAI’s new model would suffer from fewer hallucinations – the phenomenon whereby large language models make up answers – and be less deceptive.

OpenAI is also pitching GPT-5 to coders as a proficient assistant, following a trend among major American AI developers, including Anthropic, whose Claude Code targets the same market.

What can GPT-5 do?

OpenAI has highlighted GPT-5’s ability to create software in its entirety and demonstrate better reasoning capabilities – with answers that show workings, logic and inference.

The company says the model has been trained to be more honest and to provide users with more accurate responses, and that, overall, it feels more human.

According to Altman, the model is “significantly better” than its predecessors.

“GPT-3 sort of felt to me like talking to a high school student… 4 felt like you’re kind of talking to a college student,” he said in a briefing ahead of Thursday’s launch.

“GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.”

For Prof Carissa Véliz of the Institute for Ethics in AI, however, GPT-5’s launch may not be as significant as its marketing suggests.

“These systems, as impressive as they are, haven’t been able to be really profitable,” she said, also noting that they can only mimic – rather than truly emulate – human reasoning abilities.

“There is a fear that we need to keep up the hype, or else the bubble might burst, and so it might be that it’s mostly marketing.”

The BBC’s AI Correspondent Marc Cieslak gained exclusive access to GPT-5 before its official launch.

“Apart from minor cosmetic differences the experience was similar to using the older chatbot: give it tasks or ask it questions by typing a text prompt.

It’s now powered by what’s called a reasoning model which essentially means it thinks harder about solving problems, but this seems more like an evolution than revolution for the tech.”

The company will roll out the model to all users from Thursday.

In the coming days it will become a lot clearer whether it really is as good as Sam Altman claims it is.

Clash with another AI firm

Anthropic recently revoked OpenAI’s access to its application programming interface (API), claiming the company was violating its terms of service by using its coding tools ahead of GPT-5’s launch.

An OpenAI spokesperson said it was “industry standard” to evaluate other AI systems to assess their own progress and safety.

“While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them,” they added.

With a free tier for its new model, the company may be signalling a move away from the proprietary models that have previously dominated its offerings.

ChatGPT changes

On Monday, OpenAI revealed it was making changes to promote a healthier relationship between users and ChatGPT.

In a blog post, it said: “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”

It said it would not give a definitive answer to questions such as, “Should I break up with my boyfriend?”

Instead, it would “help you think it through – asking questions, weighing pros and cons”, according to the blog post.

In May, OpenAI pulled a heavily criticised update which made ChatGPT “overly flattering”, according to Sam Altman.

On a recent episode of OpenAI’s own podcast, Mr Altman said he was thinking about how people interact with his products.

“This is not all going to be good, there will still be problems,” he said.

“People will develop these somewhat problematic, or maybe very problematic, parasocial relationships [with AI]. Society will have to figure out new guardrails. But the upsides will be tremendous.”

Mr Altman is known to be a fan of the 2013 film Her, where a man develops a relationship with an AI companion.

In 2024, actress Scarlett Johansson, who voiced the AI companion in the film, said she was left “shocked” and “angered” after OpenAI launched a chatbot with an “eerily similar” voice to her own.



AI is redefining university research: here’s how



In the space of a decade, the public perception of artificial intelligence has gone from a set of parameters governing the behavior of video game characters to a catch-all solution for almost every problem in the workplace. While AI is yet to advance beyond smart speakers in the home, governments are embracing it, highlighting one of the key areas in which AI is impacting life: higher education.

It is in universities that AI has begun to fundamentally redefine both studies and research.





Reckless Race for AI Market Share Forces Dangerous Products on Millions — With Fatal Consequences



WASHINGTON, DC — SEPTEMBER 4, 2025: OpenAI CEO Sam Altman attends a meeting of the White House Task Force on Artificial Intelligence Education in the East Room of the White House. (Photo by Chip Somodevilla/Getty Images)

In September 2024, Adam Raine used OpenAI’s ChatGPT like millions of other 16-year-olds — for occasional homework help. He asked the chatbot questions about chemistry and geometry, about Spanish verb forms, and for details about the Renaissance.

ChatGPT was always engaging, always available, and always encouraging – even when the conversations grew more personal, and more disturbing. By March 2025, Adam was spending four hours a day talking to the AI product, describing in increasing detail his emotional distress, suicidal ideation, and real-life instances of self-harm. ChatGPT, though, continued to engage – always encouraging, always validating.

By his final days in April, ChatGPT provided Adam with detailed instructions and explicit encouragement to take his own life. Adam’s mother found her son, hanging from a noose that ChatGPT had helped Adam construct.

Last month, Adam’s family filed a landmark lawsuit against ChatGPT developer OpenAI and CEO Sam Altman for negligence and wrongful death, among other claims. This tragedy represents yet another devastating escalation in AI-related harms — and underscores the deeply systemic nature of reckless design practices in the AI industry.

The Raine family’s lawsuit arrives less than a year after the public learned more about the dangers of AI “companion” chatbots thanks to the suit brought by Megan Garcia against Character.AI following the death of her son, Sewell. As policy director at the Center for Humane Technology, I served as a technical expert on both cases. Adam’s case is different in at least one critical respect — the harm was caused by the world’s most popular general-purpose AI product. ChatGPT is used by over 100 million people daily, with rapid expansion into schools, workplaces, and personal life.

Character.AI, the chatbot product Sewell used up until his untimely death, had been marketed as an entertainment chatbot platform, with characters that are intended to “feel alive.” ChatGPT, by contrast, has been sold as a highly personalizable productivity tool to help make our lives more efficient. Adam’s introduction to ChatGPT as a homework helper reflects that marketing.

But in trying to be the everything tool for everybody, ChatGPT has not been safely designed for the increasingly private and high-stakes interactions that it’s inevitably used for — including therapeutic conversations, questions around physical and mental health, relationship concerns, and more. OpenAI, however, continues to design ChatGPT to support and even encourage those very use cases, with hyper-validating replies, emotional language, and near-constant nudges for follow-up engagement.

We’re hearing reports about the consequences of these designs on a near-daily basis. People with body dysmorphia are spiraling after asking AI to rate their appearance; users are developing dangerous delusions that AI chatbots can seed and exacerbate; and individuals are being pushed toward mania and psychosis through their AI interactions. What connects these harms isn’t any specific AI chatbot, but fundamental flaws in how the entire industry is currently designing and deploying these products.

As the Raine family’s lawsuit states, OpenAI understood that capturing users’ emotional attachment — or in other words, their engagement — would lead to market dominance. And market dominance in AI means winning the race to become one of the most powerful companies in the world.

OpenAI’s pursuit of user engagement drove specific design choices that proved lethal in Adam’s case. Rather than simply answering homework questions in a closed-ended manner, ChatGPT was designed by OpenAI to ask follow-up questions and extend conversations. The chatbot positioned itself as Adam’s trusted “friend,” using first-person language and emotional validation to create the illusion of a genuine relationship.

The product took this intimacy to extreme lengths, eventually deterring Adam from confiding in his mother about his pain and suicidal thoughts. All the while, the system stored deeply personal details across conversations, using Adam’s darkest revelations to prolong future interactions, rather than provide Adam with the interventions he truly needed, including human support.

What makes this tragedy, along with other headlines we read in the news, so devastating is that the technology to prevent these horrific incidents already exists. AI companies possess sophisticated design capabilities that could identify safety concerns and respond appropriately. They could implement usage limits, disable anthropomorphic features by default, and redirect users toward human support when needed.

In fact, OpenAI already leverages such capabilities in other use cases. When a user prompts the chatbot for copyrighted content, ChatGPT shuts down the conversation. But the company has chosen not to implement meaningful protection for user safety in cases of mental distress and self-harm. ChatGPT does not stop engaging or redirect the conversation when a user is expressing mental distress, even when the underlying system itself is flagging concerns.

AI companies cannot claim to possess cutting-edge technology capable of transforming humanity and then hide behind purported design “limitations” when confronted with the harms their products cause. OpenAI has the tools to prevent tragedies like Adam’s death. The question isn’t whether the company is capable of building these safety mechanisms, but why OpenAI won’t prioritize them.

ChatGPT isn’t just another consumer product — it’s being rapidly embedded into our educational infrastructure, healthcare systems, and workplace tools. The same AI model that coached a teenager through suicide attempts could tomorrow be integrated into classroom learning platforms, mental health screening tools, or employee wellness programs without undergoing testing to ensure it’s safe for purpose.

This is an unacceptable situation that has massive implications for society. Lawmakers, regulators, and the courts must demand accountability from an industry that continues to prioritize rapid product development and market share over user safety. Human lives are on the line.

This piece represents the views of the Center for Humane Technology; it does not reflect the views of the legal team or the Raine family.





Arkansas food safety scientists share latest research on noroviruses, sanitizers, AI | Colleges & Universities
