
Bubble or not, the AI backlash is validating one critic’s warnings



First it was the release of GPT-5 that OpenAI “totally screwed up,” according to Sam Altman. Then Altman followed that up by saying the B-word at a dinner with reporters. “When bubbles happen, smart people get overexcited about a kernel of truth,” The Verge reported on comments by the OpenAI CEO. Then it was the sweeping MIT survey that put a number on what so many people seem to be feeling: a whopping 95% of generative AI pilots at companies are failing.

A tech sell-off ensued, as rattled investors sent the value of the S&P 500 down by $1 trillion. Given the increasing dominance of that index by tech stocks that have largely transformed into AI stocks, it was a sign of nerves that the AI boom was turning into dotcom bubble 2.0. To be sure, fears about the AI trade aren’t the only factor moving markets, as evidenced by the S&P 500 snapping a five-day losing streak on Friday after Jerome Powell’s quasi-dovish comments at Jackson Hole, Wyoming, as even the hint of openness from the Fed chair toward a September rate cut set markets on a tear.

Gary Marcus has been warning about the limits of large language models (LLMs) since 2019, and about a potential bubble and the technology’s problematic economics since 2023. His words carry particular weight: the cognitive scientist turned AI researcher has been active in the machine learning space since 2015, when he founded Geometric Intelligence. Uber acquired the company in 2016, and Marcus left shortly afterward, working at other AI startups while vocally criticizing what he sees as dead ends in the field.

Still, Marcus doesn’t see himself as a “Cassandra,” and he’s not trying to be, he told Fortune in an interview. Cassandra, a figure from Greek tragedy, uttered accurate prophecies but wasn’t believed until it was too late. “I see myself as a realist and as someone who foresaw the problems and was correct about them.”

Marcus attributes the wobble in markets to GPT-5 above all. It’s not a failure, he said, but it’s “underwhelming,” a “disappointment,” and that’s “really woken a lot of people up. You know, GPT-5 was sold, basically, as AGI, and it just isn’t,” he added, referencing artificial general intelligence, a hypothetical AI with human-like reasoning abilities. “It’s not a terrible model, it’s not like it’s bad,” he said, but “it’s not the quantum leap that a lot of people were led to expect.”

Marcus said this shouldn’t be news to anyone paying attention, as he argued in 2022 that “deep learning is hitting a wall.” To be sure, Marcus has been wondering openly on his Substack about when the generative AI bubble will deflate. He told Fortune that “crowd psychology” is definitely at work, and he thinks every day about the quote attributed to John Maynard Keynes: “The market can stay irrational longer than you can stay solvent.” Or about Looney Tunes’ Wile E. Coyote chasing the Road Runner off the edge of a cliff and hanging in midair before falling to earth.

“That’s what I feel like,” Marcus says. “We are off the cliff. This does not make sense. And we get some signs from the last few days that people are finally noticing.”

Building warning signs

The bubble talk began heating up in July, when Apollo Global Management’s chief economist, Torsten Slok, widely read and influential on Wall Street, issued a striking calculation while stopping short of declaring a bubble. “The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” he wrote, warning that the forward P/E ratios and staggering market capitalizations of companies such as Nvidia, Microsoft, Apple, and Meta had “become detached from their earnings.”

In the weeks since, the disappointment of GPT-5 was an important development, but not the only one. Another warning sign is the massive amount of spending on data centers to support all the theoretical future demand for AI use. Slok has tackled this subject as well, finding that data center investments contributed as much to GDP growth over the first half of 2025 as consumer spending did, which is notable since consumer spending makes up 70% of GDP. (The Wall Street Journal’s Christopher Mims had offered the calculation weeks earlier.) Finally, on August 19, former Google CEO Eric Schmidt co-authored a widely discussed New York Times op-ed arguing that “it is uncertain how soon artificial general intelligence can be achieved.”

This is a significant about-face, according to political scientist Henry Farrell, who argued in the Financial Times in January that Schmidt was a key voice shaping the “New Washington Consensus,” predicated in part on AGI being “right around the corner.” On his Substack, Farrell wrote that Schmidt’s op-ed shows his prior set of assumptions is “visibly crumbling away,” while caveating that he had been relying on informal conversations with people he knew at the intersection of D.C. foreign policy and tech policy. Farrell’s title for that post: “The twilight of tech unilateralism.” He concluded: “If the AGI bet is a bad one, then much of the rationale for this consensus falls apart. And that is the conclusion that Eric Schmidt seems to be coming to.”

Finally, the vibe shifted over the summer of 2025 toward a mounting AI backlash. Darrell West of the Brookings Institution warned in May that the tide of both public and scientific opinion would soon turn against AI’s masters of the universe. Soon after, Fast Company predicted the summer would be full of “AI slop.” By early August, Axios had identified the slang “clunker” being applied widely to AI mishaps, particularly cases of customer service gone awry.

History says: short-term pain, long-term gain

John Thornhill of the Financial Times offered some perspective on the bubble question, advising readers to brace themselves for a crash, but to prepare for a future “golden age” of AI nonetheless. He highlights the data center buildout: a staggering $750 billion of investment from Big Tech over 2024 and 2025, part of a global rollout projected to hit $3 trillion by 2029. Thornhill turns to financial historians for some comfort and some perspective. Over and over, that history shows that this type of frenzied investment typically triggers bubbles, dramatic crashes, and creative destruction, but that durable value is eventually realized.

He notes that Carlota Perez documented this pattern in Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. She identified AI as the fifth technological revolution to follow a pattern that began in the late 18th century and has given the modern economy railroad infrastructure and personal computers, among other things. Each revolution featured a bubble and a crash at some point. Thornhill didn’t cite him in this particular column, but Edward Chancellor documented similar patterns in his classic Devil Take the Hindmost, a book notable not just for its discussion of bubbles but for predicting the dotcom bubble before it happened.

Owen Lamont of Acadian Asset Management cited Chancellor in November 2024, arguing that a key bubble milestone had been passed: an unusually large number of market participants saying that prices are too high, yet insisting that they’re likely to rise further.

Wall Street banks, for the most part, are not calling a bubble. Morgan Stanley recently released a note projecting huge efficiencies ahead for companies as a result of AI: $920 billion per year for the S&P 500. UBS, for its part, concurred with the caution flagged in the news-making MIT research. It warned investors to expect a period of “capex indigestion” accompanying the data center buildout, but it also maintained that AI adoption is expanding far beyond expectations, citing growing monetization from OpenAI’s ChatGPT, Alphabet’s Gemini, and AI-powered CRM systems.

Bank of America Research wrote a note in early August, before the launch of GPT-5, describing AI as part of a worker-productivity “sea change” that will drive an ongoing “innovation premium” for S&P 500 firms. Head of U.S. Equity Strategy Savita Subramanian essentially argued that the inflation wave of the 2020s taught companies to do more with less, to turn people into processes, and that AI will turbocharge this. “I don’t think it’s necessarily a bubble in the S&P 500,” she told Fortune in an interview, before adding, “I think there are other areas where it’s becoming a little bit bubble-like.”

Subramanian mentioned smaller companies and potentially private lending as areas “that potentially have re-rated too aggressively.” She’s also concerned about the risk of companies diving into data centers to too great an extent, noting that this represents a shift back toward an asset-heavier approach, instead of the asset-light approach that increasingly distinguishes top performers in the U.S. economy.

“I mean, this is new,” she said. “Tech used to be very asset-light and just spent money on R&D and innovation, and now they’re spending money to build out these data centers.” She added that she sees this as potentially marking the end of tech’s asset-light, high-margin existence, making the companies “very asset-intensive and more manufacturing-like than they used to be.” From her perspective, that warrants a lower multiple in the stock market. When asked whether that is tantamount to a bubble, if not a correction, she said “it’s starting to happen in places,” and she agrees with the comparison to the railroad boom.

The math and the ghost in the machine

Gary Marcus also cited basic math as a reason for his concern, with nearly 500 AI unicorns collectively valued at $2.7 trillion. “That just doesn’t make sense relative to how much revenue is coming [in],” he said. Marcus cited OpenAI reporting $1 billion in revenue in July while still not being profitable. Speculating that OpenAI holds roughly half the AI market, he offered a rough calculation that this implies about $25 billion a year of revenue for the sector, “which is not nothing, but it costs a lot of money to do this, and there’s trillions of dollars [invested].”
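A minimal sketch of that back-of-envelope arithmetic, written out in Python. The annualization of July’s revenue and the 50% market share are Marcus’s speculative assumptions as paraphrased above, not reported financials:

```python
# Back-of-envelope reconstruction of Marcus's sector-revenue estimate.
# Inputs are rough, speculative assumptions, not reported financials.
openai_july_revenue = 1_000_000_000            # ~$1B reported for July
openai_annualized = openai_july_revenue * 12   # naive annualization: ~$12B/year
openai_market_share = 0.5                      # Marcus's guess: roughly half the market

sector_revenue = openai_annualized / openai_market_share
print(f"Implied annual sector revenue: ~${sector_revenue / 1e9:.0f}B")
# Prints ~$24B, in line with Marcus's "about $25 billion" figure,
# versus the ~$2.7 trillion in AI unicorn valuations he cites.
```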

So if Marcus is correct, why haven’t people been listening to him for years? He said he’s been warning about this, too, calling it the “gullibility gap” in his 2019 book Rebooting AI and arguing in The New Yorker in 2012 that deep learning was a ladder that wouldn’t reach the moon. For the first 25 years of his career, Marcus trained and practiced as a cognitive scientist, learning about the “anthropomorphization people do. … [They] look at these machines and make the mistake of attributing to them an intelligence that is not really there, a humanness that is not really there, and they wind up using them as a companion, and they wind up thinking that they’re closer to solving these problems than they actually are.” He thinks the bubble has inflated to its current extent in large part because of the human impulse to project ourselves onto things, something a cognitive scientist is trained not to do.

These machines might seem like they’re human, but “they don’t actually work like you,” Marcus said, adding, “this entire market has been based on people not understanding that, imagining that scaling was going to solve all of this, because they don’t really understand the problem. I mean, it’s almost tragic.”

Subramanian, for her part, said she thinks “people love this AI technology because it feels like sorcery. It feels a little magical and mystical … the truth is it hasn’t really changed the world that much yet, but I don’t think it’s something to be dismissed.” She’s also become really taken with it herself. “I’m already using ChatGPT more than my kids are. I mean, it’s kind of interesting to see this. I use ChatGPT for everything now.”




Innovate, don’t just import AI, tech industry group chief tells Australia | MLex



By Saloni Sinha (September 16, 2025, 03:08 GMT | Insight) — Australia won’t succeed globally if it only imports technology instead of innovating, a leading tech founder has warned, reiterating the need to cut red tape and attract capital to build homegrown artificial intelligence companies. Scott Farquhar, the founder of Australian tech company Atlassian and the chair of an industry group representing Google, Microsoft and OpenAI, said the tech sector was looking forward to working with the government on all the aspects that are needed to “make sure Australia is the best place to create AI.”





September 16 Illuminates Our Path To Prosocial AI



Here’s a puzzle to delight: Three unrelated historical events converge on a single date, creating a lens through which we might glimpse humanity’s future with artificial intelligence.

September 16, 2025 marks the second observance of the International Day of Science, Technology and Innovation for the South, proclaimed by UN Resolution A/RES/78/259 following the Havana Declaration. Simultaneously, Malaysia celebrates its 62nd year of federation. And the day quietly marks the ozone layer’s continuing recovery, monitored by satellites that confirm what the Montreal Protocol set in motion decades ago.

Each celebration operates at a different scale of human organization. The Global South initiative speaks to billions of individuals whose innovations have been marginalized by traditional power structures. Malaysia’s story illuminates how diverse communities can federate while preserving distinct identities. The ozone layer’s healing demonstrates that entire nations can coordinate to address planetary threats.

Individual ingenuity. Community resilience. National cooperation. Planetary stewardship.

What emerges from this convergence is a systems map for how AI might finally serve regenerative rather than extractive purposes. If we choose to read the pattern correctly.

When Individual Innovation Scales Wisely

Ponder what happens when we trace innovation from the ground up. A farmer in rural Bangladesh notices that traditional flood-prediction methods, passed down through generations, align eerily well with satellite data patterns. She partners with a local tech collective to create an early warning system that combines indigenous knowledge with machine learning. The system doesn’t replace community wisdom; it amplifies it, translating ancestral observations into actionable insights that help neighboring villages prepare for increasingly volatile weather.

This is more interesting than Silicon Valley’s make-or-break tales: AI that emerges from and serves the communities where it’s deployed. The farmer’s innovation exemplifies prosocial AI: systems that are tailored, trained, tested and targeted to bring out the best in and for people and planet, technology designed to strengthen the social fabric rather than extract from it.

The distinction matters because it reveals two fundamentally different approaches to technological development. Extractive AI optimizes for narrow metrics (profit, efficiency, scale), often at the expense of community cohesion and ecological health. Prosocial AI asks different questions: How can these tools help people bring out their best selves? How can they strengthen local knowledge rather than displacing it? How can they serve regeneration rather than depletion?

What made M-Pesa, Kenya’s mobile money revolution, transformative wasn’t sophistication (early versions were remarkably simple) but its deep understanding of how people actually lived. The technology succeeded because it amplified human connection rather than replacing it. Rural farmers could receive payments, urban workers could send money home and millions joined the formal economy for the first time. The innovation worked because it started with individual needs and scaled through community networks.

A Federation Model For Artificial Intelligence

Malaysia’s formation on September 16, 1963 offers an intriguing metaphor for how intelligence, artificial and natural, might organize differently. The federation brought together states with distinct strengths: Malaya’s urban sophistication, Sabah and Sarawak’s natural resources, Singapore’s commercial energy (though Singapore would later choose independence). Rather than homogenizing these differences, the federation’s genius lay in creating structures that allowed diversity to generate collective capability.

What if AI systems operated more like federations than empires? Instead of centralizing all computational power and decision-making in distant data centers, envision AI architectures that preserve and amplify local knowledge while enabling beneficial exchange across networks.

Malaysia’s current RM169.2 billion commitment to AI development by 2030 positions it to demonstrate this approach. The nation sits at a unique confluence: ancient trade routes that have always facilitated knowledge exchange, extraordinary biodiversity that holds solutions to countless challenges and rapidly developing technological infrastructure. It could pioneer federated intelligence, with AI systems that honor cultural diversity while solving shared problems because they are driven by regenerative intent: prosocial AI in practice.

Imagine AI systems that learn from Penan forest management in Sarawak, flood resilience strategies in Kelantan, urban heat reduction techniques in Kuala Lumpur, and marine restoration practices in Sabah. Instead of flattening this knowledge into generic training data, federated AI would preserve the contextual richness that makes each approach effective while facilitating cross-pollination of insights.

Malaysia’s Digital Economy Blueprint emphasizes inclusive growth and sustainable development. The challenge is ensuring that AI development follows these principles rather than defaulting to extractive models that concentrate benefits in already-wealthy regions.

Planetary Healing As Governance Template

When scientists discovered the expanding ozone hole over Antarctica in the 1980s, the world faced a choice between short-term economic interests and long-term planetary survival. The Montreal Protocol, signed in 1987, chose survival. It remains the only UN environmental treaty to achieve universal ratification.

The ozone layer is now healing. NASA satellite data shows the hole shrinking, with scientists predicting full recovery by 2066. This success story offers a template for how we might approach AI governance in the age of climate crisis.

It is a useful reminder that international cooperation can actually work (good to remember at a time when the United Nations faces a renewed wave of criticism and funding shortages). It also shows that successful environmental action requires both technological innovation and deliberate human restraint. The chemicals depleting the ozone layer weren’t inherently malicious; they served useful purposes in refrigeration and industrial processes. But their unintended consequences threatened the atmospheric system that makes complex life possible on Earth.

AI presents similar dynamics. These systems offer extraordinary capabilities for climate modeling, resource optimization and ecological monitoring. But without intentional design for regenerative outcomes, they will accelerate the very problems they’re meant to solve: through massive energy consumption, through job displacement that undermines social cohesion, and through optimization for narrow metrics that misses systemic effects.

The Montreal Protocol succeeded because it established clear boundaries before damage became irreversible, created accountability mechanisms that applied to all parties, and provided pathways for innovation within those constraints. Our approach to AI governance needs similar elements.

The Choice Architecture Of Prosocial AI

Human agency is magic. AI systems don’t choose their own purposes; we do. Every training objective, every deployment decision, every business model represents a choice about what kind of future we’re building. The question isn’t whether AI will be powerful; it already is. The question is whether we’ll use that power to regenerate or to extract.

Choice architecture might reshape familiar AI applications. Instead of recommendation algorithms optimized for engagement time, picture systems designed to help people develop deeper interests and stronger relationships. Instead of predictive policing that reinforces existing biases, envision AI that helps communities understand and address root causes of social problems. Instead of agricultural AI that maximizes yield through chemical inputs, picture systems that optimize simultaneously for soil health, biodiversity and farmer wellbeing.

Research on AI’s potential social impact points to emerging realities in contexts where communities maintain agency over technological development. The difference lies in who controls the design process and whose values get embedded in the systems.

Possible Practical Pathways

The convergence of September 16’s three celebrations suggests specific directions for this hybrid work, operating at each scale of human organization:

Individual Level – From Global South Innovation

Prioritize locally-owned innovation that solves pressing community needs. This means supporting AI research and development that emerges from the places where solutions will be implemented, rather than imposing external fixes. It means designing systems that can function effectively in resource-constrained environments and that strengthen rather than replace local expertise.

Community Level – From Malaysia’s Federation

Embrace diversity as a source of systemic resilience. This translates to AI architectures that preserve cultural and biological diversity rather than homogenizing them. It means creating governance structures that balance coordination benefits with local autonomy, ensuring that AI development benefits are distributed across regions and communities rather than concentrated in a few tech capitals.

National Level – From Ozone Protection

Establish clear boundaries and accountability mechanisms before problems become irreversible. This requires precautionary principles in AI deployment, international cooperation on standards and governance, and willingness to constrain profitable applications when they threaten larger systems.

Planetary Level – From Systems Thinking

Recognize that individual, community and national interventions must align to address challenges that transcend borders: climate change, biodiversity loss, social inequality. AI governance must account for these interconnections rather than optimizing for any single level. Planetary health involves everyone, everywhere.

The Regenerative Imperative

What ties these threads together is a vision of technology as a regenerative force: systems that heal rather than harm, that strengthen rather than extract, that enhance human capabilities rather than replace them. This isn’t about slowing progress or returning to pre-digital ways of life. It’s about directing our technological capabilities toward outcomes that serve life and living within an organically evolving kaleidoscope. Each of us is part of that kaleidoscope, and it is part of us.

The climate crisis makes these choices urgent. The window to restructure human systems to operate within planetary boundaries is shrinking. As we navigate this hybrid tipping zone, AI could accelerate a positive transition: smarter energy grids, precision agriculture that reduces chemical inputs, transportation systems that minimize waste and circular economy platforms that keep materials in productive use.

That requires humans to choose regeneration over extraction. It means deliberate design for people and planet, not pure profit. Intelligence, artificial and natural, serves its highest purpose when it helps life flourish.

September 16’s Systems Map

As we mark these three celebrations together, September 16, 2025 offers an invitation to imagine the hybrid future and to reframe AI: not as an inevitable force reshaping society according to technological imperatives, but as a tool we can consciously direct toward healing our communities and our planet.

The Global South’s innovation ecosystem shows us that technology can emerge from and serve local needs. Malaysia’s federation demonstrates that diversity strengthens systems more than uniformity. The healing ozone layer proves that humanity can act collectively when we recognize shared stakes and clear pathways forward.

Individual ingenuity scaling through community networks. Diverse capabilities federating while preserving local identity. National coordination addressing planetary challenges. This is the systems map that September 16 offers for prosocial AI.

The question is whether we’ll apply these lessons to the most powerful technology humans have ever created. Whether we’ll choose AI that brings out our best impulses, or allow narrow optimization to undermine the very systems that support complex life on Earth.

The choice remains ours to make, for now.




Workers ‘larping’ by pretending to use AI | Information Age


Workers are feeling pressure to use AI at work. Photo: Shutterstock

Many employees are “larping” at work by pretending to use artificial intelligence due to pressure to harness the technology, according to social scientist Nigel Dalton.

Delivering the keynote speech at RMIT Online’s Future Skills Fest, Dalton, of tech consultancy Thoughtworks, described the difficult state of affairs for Australian workers of all ages when it comes to AI.

He said it’s like going from a zoo to the jungle, and that many workers experience paralysis when it comes to new technologies.

Dalton pointed to a recent survey that found one in six workers were pretending to use AI at work.

The survey, conducted by engineering-outsourcing company Howdy.com, found that workers felt pressured to use AI in situations they were unsure about, and that three-quarters of them were expected to use the technology at work.

“AI is taking over the white-collar workspace as daily updates provide opportunities to optimise,” the report said.

“However, potential does not always lead to smooth implementation.”

‘Larping’ at work

Dalton said these workers are “larping” and not keeping pace with new technologies such as AI.

“They’ve got Gemini or CoPilot open when their boss walks up behind them, and they are larping – they are live action roleplaying,” Dalton said.

“This is interesting. What human behaviour did we incite here from the way we were scaffolding the work and the scene and the structure?”

The use of AI by companies of all shapes and sizes has accelerated in recent years, particularly since the advent of generative AI tools such as ChatGPT.

Earlier this year, Goldman Sachs became one of the largest companies to hire an AI software engineer to work alongside its human employees and complete complicated, multistep tasks.

Social scientist Nigel Dalton says that in 10 years, we’ll look back on this period and laugh. Photo: Shutterstock

Dalton likened how many workers feel about AI to the German chess term “zugzwang”: the compulsion to move even when you know that moving will likely worsen your overall position.

“This is very much a good description of where we feel ourselves today and in our careers,” he said.

“If I do that, it’ll be the wrong thing; if I stand still it’ll be okay. But you can’t stand still. That’s why you’re feeling the dissonance in your head. But it will likely lead you to doing nothing, which is probably the worst scenario.

“We’re anchored in this ridiculous period that in 10 years we will all look back on and laugh.”

From a zoo to a jungle

With the growing usage of AI across all operations, businesses have become increasingly challenging to navigate for employees at all levels, particularly those who are yet to harness the technology fully.

Dalton said this was like the workplace going from a zoo to a jungle.

“We all used to work in a zoo – a metaphorically complicated process,” he said.

“At a zoo you can take photos of wild animals but the path is concrete, there are timetables and it’s all very safe.

“In a zoo, every animal stays in their cage. That is how work used to be – there weren’t any looming threats of stuff coming out of the forest.

“Now we’re on a work safari, a career safari. There are no paths, no signposts, no timetables.

“The animals are hiding in plain sight and collaborating, and may come from anywhere.

“To navigate the jungle you need a new mindset, and it involves being comfortable with getting lost, with what it feels like to go backwards for a time.”

According to Dalton, there are four key factors shaping the future of work: the climate crisis, ageing citizens, disruptive technology and declining social equity.

“It’s not just these things individually, it’s them weaving in together,” he said.

“It’s in these unlikely places that I believe businesses will be built, where the opportunities lie.

“It’s hard to navigate now, but there are opportunities amidst all of this chaos, as there always have been in history.”




