A significant step forward but not a leap over the finish line. That was how Sam Altman, chief executive of OpenAI, described the latest upgrade to ChatGPT this week.
The race Altman was referring to was artificial general intelligence (AGI), a theoretical state of AI where, by OpenAI’s definition, a highly autonomous system is able to do a human’s job.
Describing the new GPT-5 model, which will power ChatGPT, as a “significant step on the path to AGI”, he nonetheless added a hefty caveat.
“[It is] missing something quite important, many things quite important,” said Altman, such as the model’s inability to “continuously learn” even after its launch. In other words, these systems are impressive but they have yet to crack the autonomy that would allow them to do a full-time job.
OpenAI’s competitors, also flush with billions of dollars to lavish on the same goal, are straining for the tape too. Last month, Mark Zuckerberg, chief executive of Facebook parent Meta, said development of superintelligence – another theoretical state of AI where a system far exceeds human cognitive abilities – is “now in sight”.
Google’s AI unit on Tuesday outlined its next step to AGI by announcing an unreleased model that trains AIs to interact with a convincing simulation of the real world, while Anthropic, another company making significant advances, announced an upgrade to its Claude Opus 4 model.
So where does this leave the race to AGI and superintelligence?
Benedict Evans, a tech analyst, says the race towards a theoretical state of AI is taking place against a backdrop of scientific uncertainty – despite the intellectual and financial investment in the quest.
Describing AGI as a “thought experiment as much as it is a technology”, he says: “We don’t really have a theoretical model of why generative AI models work so well and what would have to happen for them to get to this state of AGI.”
He adds: “It’s like saying ‘we’re building the Apollo programme but we don’t actually know how gravity works or how far away the moon is, or how a rocket works, but if we keep on making the rocket bigger maybe we’ll get there’.
“To use the term of the moment, it’s very vibes based. All of these AI scientists are really just telling us what their personal vibes are on whether we’ll reach this theoretical state – but they don’t know. And that’s what sensible experts say too.”
However, Aaron Rosenberg, a partner at venture capital firm Radical Ventures – whose investments include leading AI firm Cohere – and former head of strategy and operations at Google’s AI unit DeepMind, says a more limited definition of AGI could be achieved around the end of the decade.
“If you define AGI more narrowly as at least 80th percentile human-level performance in 80% of economically relevant digital tasks, then I think that’s within reach in the next five years,” he says.
Matt Murphy, a partner at VC firm Menlo Ventures, says the definition of AGI is a “moving target”.
He adds: “I’d say the race will continue to play out for years to come and that definition will keep evolving and the bar being raised.”
Even without AGI, the generative AI systems in circulation are making money. The New York Times reported this month that OpenAI’s annual recurring revenue has reached $13bn (£10bn), up from $10bn earlier in the summer, and could pass $20bn by the year end. Meanwhile, OpenAI is reportedly in talks about a sale of shares held by current and former employees that would value it at about $500bn, exceeding the price tag for Elon Musk’s SpaceX.
Some experts view statements about superintelligent systems as creating unrealistic expectations, while distracting from more immediate concerns such as making sure that systems being deployed now are reliable, transparent and free of bias.
“The rush to claim ‘superintelligence’ among the major tech companies reflects more about competitive positioning than actual technical breakthroughs,” says David Bader, director of the institute for data science at the New Jersey Institute of Technology.
“We need to distinguish between genuine advances and marketing narratives designed to attract talent and investment. From a technical standpoint, we’re seeing impressive improvements in specific capabilities – better reasoning, more sophisticated planning, enhanced multimodal understanding.
“But superintelligence, properly defined, would represent systems that exceed human performance across virtually all cognitive domains. We’re nowhere near that threshold.”
Nonetheless, the major US tech firms will keep trying to build systems that match or exceed human intelligence at most tasks. Google’s parent Alphabet, Meta, Microsoft and Amazon alone will spend nearly $400bn this year on AI, according to the Wall Street Journal, comfortably more than EU members’ defence spend.
Rosenberg, while acknowledging his past employment at Google DeepMind, says the company has big advantages in data, hardware, infrastructure and an array of products through which to hone the technology, from search to maps and YouTube. But such advantages can be slim.
“On the frontier, as soon as an innovation emerges, everyone else is quick to adopt it. It’s hard to gain a huge gap right now,” he says.
It is also a global race, or rather a contest, that includes China. DeepSeek came from nowhere this year to announce the DeepSeek R1 model, boasting of “powerful and intriguing reasoning behaviours” comparable with OpenAI’s best work.
Major companies looking to integrate AI into their operations have taken note. Saudi Aramco, the world’s largest oil company, uses DeepSeek’s AI technology in its main datacentre and said it was “really making a big difference” to its IT systems and was making the company more efficient.
According to Artificial Analysis, a company that ranks AI models, six of the top 20 on its leaderboard – which ranks models according to a range of metrics including intelligence, price and speed – are Chinese. The six models are developed by DeepSeek, Zhipu AI, Alibaba and MiniMax. On the leaderboard for video generation models, six of the top 10 – including the current leader, ByteDance’s Seedance – are also Chinese.
Microsoft’s president, Brad Smith, whose company has barred use of DeepSeek, told a US senate hearing in May that getting your AI model adopted globally was a key factor in determining which country wins the AI race.
“The number one factor that will define whether the US or China wins this race is whose technology is most broadly adopted in the rest of the world,” he said, adding that the lesson from Huawei and 5G was that whoever establishes leadership in a market is “difficult to supplant”.
It means that, arguments over the feasibility of superintelligent systems aside, vast amounts of money and talent are being poured into this race in the world’s two largest economies – and tech firms will keep running.
“If you look back five years ago to 2020 it was almost blasphemous to say AGI was on the horizon. It was crazy to say that. Now it seems increasingly consensus to say we are on that path,” says Rosenberg.
India’s digital economy is experiencing extraordinary growth, driven by government initiatives, private enterprise, and widespread technological adoption across users from diverse socio-economic backgrounds. Artificial intelligence (AI) is now woven into the fabric of organisational operations, shaping customer interactions, streamlining product development, and enhancing overall agility. Yet, as digitisation accelerates, the nation’s cyber risk landscape is also expanding—fuelled by the very AI innovations that are transforming business.
In a rapidly evolving threat landscape, human error remains a persistent vulnerability. A recent cybersecurity survey revealed that 65% of enterprises worldwide now consider AI-powered email phishing the most urgent risk they face. India’s rapidly growing digital user base and surging data volumes widen that attack surface further.
Yet, there’s a strong opportunity for India to leverage its unique technical strengths to lead global conversations on secure, ethical, and inclusive digital innovation. By championing responsible AI and cybersecurity, the country can establish itself not only as a global leader but also as a trusted hub for safe digital solutions.
The case for a risk-aware, innovation-led approach
While AI is strengthening security measures with rapid anomaly detection, automated responses, and cost-efficient scalability, these same advancements are also enabling attackers to move faster and deploy increasingly sophisticated techniques to evade defences. The survey shows that 31% of organisations that experienced a breach faced another within three years, underscoring the need for ongoing, data-driven vigilance.
Globally, regulators are pushing for greater AI accountability through frameworks that feature tiered risk assessments, data traceability, and requirements for transparent decision-making, as seen in the EU AI Act, the National Institute of Standards and Technology’s AI Risk Management Framework in the US, and the Ministry of Electronics and Information Technology’s AI governance guidelines in India.
India’s digital policy regime is evolving with the enactment of the Digital Personal Data Protection Act and other reforms. Its globally renowned IT services sector, growing cloud adoption, and population-scale digital solutions offer a model other nations can draw on to leapfrog in their own digital transformation journeys. However, continued collaboration is needed on consistent standards, regulatory frameworks, and legislation. Such an approach can empower Indian developers to build innovative, compliant solutions with the agility to serve both Indian and global markets.
Smart AI security: growing fast, staying steady
The survey highlights that more than 90% of surveyed enterprises are actively adopting secure AI solutions, underscoring the high value organisations place on AI-driven threat detection. As Indian companies expand their digital capabilities with significant investments, security operations will need to scale efficiently. Here, AI emerges as an essential ally, streamlining security operations centres, accelerating response times, and continuously monitoring hybrid cloud environments for unusual patterns in real time.
Boardroom alignment and cross-sector collaboration
One encouraging trend is the increasing involvement of executive leadership in cybersecurity. More boards are forming dedicated cyber-risk subcommittees and embedding risk discussions into broader strategic conversations. In India too, this shift is gaining momentum as regulatory expectations rise and digital maturity improves.
With the lines between IT, business, and compliance blurring, collaborative governance is becoming essential. The report states that 58% of organisations view AI implementation as a shared responsibility among executive leadership, privacy, compliance, and technology teams. This model, if institutionalised across Indian industry, could ensure AI and cybersecurity decisions are inclusive, ethical, and transparent.
Moreover, public-private partnerships — especially in areas like cyber awareness, standards development, and response coordination — can play a pivotal role. The Indian Computer Emergency Response Team (CERT-In), a national nodal agency with the mission to enhance India’s cybersecurity resilience by providing proactive threat intelligence, incident response, and public awareness, has already established itself as a reliable incident response authority.
A global opportunity for India
In many ways, the current moment is a call to create the conditions and infrastructure to lead securely in the digital era. By leveraging its vast pool of engineering talent, proven capabilities in scalable digital infrastructure, and a culture of frugal innovation, India can not only safeguard its own digital future but also help shape global norms for ethical AI deployment. This is India’s moment to lead — not just in technology, but in trust.
This article is authored by Saugat Sindhu, Global Head – Advisory Services, Cybersecurity & Risk Services, Wipro Limited.
Disclaimer: The views expressed in this article are those of the author/authors and do not necessarily reflect the views of ET Edge Insights, its management, or its members.
Nvidia said on Friday the GAIN AI Act would restrict global competition for advanced chips, with effects on U.S. leadership and the economy similar to those of the AI Diffusion Rule, which put limits on the computing power countries could have.
Short for Guaranteeing Access and Innovation for National Artificial Intelligence Act, the GAIN AI Act was introduced as part of the National Defense Authorization Act and stipulates that AI chipmakers prioritize domestic orders for advanced processors before supplying them to foreign customers.
“We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips,” an Nvidia spokesperson said.
If passed into law, the bill would enact new trade restrictions mandating exporters obtain licenses and approval for the shipments of silicon exceeding certain performance caps.
“It should be the policy of the United States and the Department of Commerce to deny licenses for the export of the most powerful AI chips, including such chips with total processing power of 4,800 or above and to restrict the export of advanced artificial intelligence chips to foreign entities so long as United States entities are waiting and unable to acquire those same chips,” the legislation reads.
The rules mirror some conditions under former U.S. President Joe Biden’s AI diffusion rule, which allocated certain levels of computing power to allies and other countries.
The AI Diffusion Rule and the GAIN AI Act are both attempts by Washington to prioritise American needs, ensuring domestic firms gain access to advanced chips while limiting China’s ability to obtain high-end technology, amid fears that the country would use AI capabilities to supercharge its military.