Artificial General Intelligence (AGI) appears to be the ultimate goal for most major AI labs, including Google, Anthropic, and OpenAI, as they pour billions into cloud computing, GPUs, and other critical infrastructure.
Over the past few months, multiple reports have suggested that some of these companies are on the verge of reaching the coveted milestone. However, the term has seemingly turned into a buzzword that executives in the space redefine every time it comes up in conversation.
For instance, the most common definition of the term describes it as a sophisticated AI system that surpasses human cognitive capabilities. However, Microsoft’s multibillion-dollar partnership agreement with OpenAI reportedly defines AGI as an AI system capable of generating up to $100 billion in profit.
In May, Google DeepMind CEO Demis Hassabis claimed AGI was on its way. However, the executive expressed concerns that society isn’t prepared to handle all that it entails. He further revealed that the prospects keep him up at night.
But it now seems Hassabis may have had a change of heart. Speaking at the All-In Summit last week, the Google DeepMind CEO dismissed rivals’ claims that modern AI systems have “PhD intelligences” as nonsense (via vitrupo on X).
The executive raised an important point: AI-powered chatbots are prone to generating misleading and outright wrong responses when questions are phrased in a certain way.
This underscores the importance of solid prompt engineering skills when interacting with these tools. As you may remember, a report revealed that one of the top complaints to Microsoft’s AI division last year was that Copilot isn’t as good as ChatGPT.
While Microsoft was quick to shift the blame to a lack of proper prompt engineering skills, explicitly suggesting that consumers weren’t using the tool as intended, the tech giant has since launched Copilot Academy to help users learn how to make the most of it.
You often hear some of our competitors talk about these modern systems we have today as “PhD intelligences.” I think that’s nonsense. They are not PhD intelligences. They have some capabilities that are PhD-level, but they are not generally capable, and that’s exactly what general intelligence should be: performing across the board at the PhD level.
Google DeepMind CEO, Demis Hassabis
Demis Hassabis says a true AGI system shouldn’t be susceptible to such glaring mistakes. As such, the executive predicts the world could be anywhere from 5 to 10 years away from achieving true AGI.
He also indicated that critical components are still missing, such as continual learning, which would allow AI-powered systems to keep getting smarter by training on new information. Without this capability, chatbots are unable to learn anything new or use fresh information to reinforce or modify certain behaviors.
Hassabis argues that today’s AI systems still lack core capabilities, but suggests that scaling efforts might help bypass some of these limitations. This comes amid reports that top AI labs like OpenAI and Anthropic have hit a wall due to a shortage of high-quality training data, making it increasingly difficult to develop more advanced AI systems.