A U.S. Census Bureau survey showing declining AI adoption adds to mounting evidence the AI bubble could burst

Hello and welcome to Eye on AI. In this edition…the U.S. Census Bureau finds AI adoption declining…Anthropic reaches a landmark copyright settlement, but the judge isn’t happy…OpenAI is burning piles of cash, building its own chips, producing a Hollywood movie, and scrambling to save its corporate restructuring plans…OpenAI researchers find ways to tame hallucinations…and why teachers are failing the AI test.

Concerns that we are in an AI bubble—at least as far as the valuations of AI companies, especially public companies, are concerned—are now at a fever pitch. Exactly what might cause the bubble to pop is unclear. But one thing that could cause it to deflate—perhaps explosively—would be clear evidence that big corporations, which hyperscalers such as Microsoft, Google, and AWS are counting on to spend huge sums deploying AI at scale, are pulling back on AI investment.

So far, we haven’t seen that evidence in the hyperscalers’ financials, or in their forward guidance. But there are certainly mounting data points that have investors worried. That’s why the MIT survey finding that 95% of AI pilot projects fail to deliver a return on investment got so much attention. (Even though, as I have written here, the markets chose to focus only on the somewhat misleading headline rather than look too carefully at what the research actually said. Then again, as I’ve argued, the market’s inclination to view negatively news that it might have shrugged off, or even interpreted positively, just a few months ago is perhaps one of the surest signs that we may be close to the bubble popping.)

This week brought another worrying data point that probably deserves more attention. The U.S. Census Bureau conducts a biweekly survey of 1.2 million businesses. One of the questions it asks is whether, in the last two weeks, the company has used AI, machine learning, natural language processing, virtual agents, or voice recognition to produce goods or services. Since November 2023—which is as far back as the current data set seems to go—the share of firms answering “yes” has been trending steadily upwards, especially if you look at the six-week rolling average, which smooths out some spikes. But for the first time, in the past two months, the six-week rolling average for larger companies (those with more than 250 employees) has shown a very distinct dip, dropping from a high of 13.5% to more like 12%. A similar dip is evident for smaller companies too. Only microbusinesses, those with fewer than four employees, continue to show a steady upward adoption trend.
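To make the smoothing concrete, here is a minimal Python sketch of how a six-week rolling average works on biweekly readings: each smoothed point is simply the mean of the three most recent biweekly observations. The adoption percentages below are illustrative placeholders, not actual Census Bureau figures.

# Minimal sketch: smoothing biweekly AI-adoption readings with a
# six-week (three-observation) trailing average. The percentages are
# illustrative placeholders, not actual Census Bureau data.

def rolling_average(readings, window=3):
    """Return the trailing average for each point once the window is
    full (three biweekly readings span six weeks)."""
    smoothed = []
    for i in range(window - 1, len(readings)):
        window_vals = readings[i - window + 1 : i + 1]
        smoothed.append(sum(window_vals) / window)
    return smoothed

# Hypothetical share (%) of large firms reporting AI use, one value per survey wave
biweekly_adoption = [13.1, 13.6, 13.5, 13.2, 12.4, 12.1, 11.9]
print([round(x, 2) for x in rolling_average(biweekly_adoption)])
# [13.4, 13.43, 13.03, 12.57, 12.13]  <- individual spikes are damped, the dip remains visible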

A blip or a bursting?

This might be a blip. The Census Bureau also asks another question about AI adoption, querying businesses on whether they anticipate using AI to produce goods or services in the next six months. And here, the data don’t show a dip—although the percentage answering “yes” seems to have plateaued at a level below what it was back in late 2023 and early 2024.

Torsten Sløk, the chief economist at the investment firm Apollo, who highlighted the Census Bureau data on his company’s blog, suggests that the survey results are probably a bad sign for companies whose lofty valuations depend on ubiquitous and deep AI adoption across the entire economy.

Another piece of analysis worth looking at: Harrison Kupperman, the founder and chief investment officer at Praetorian Capital, concluded after what he called a “back-of-the-envelope” calculation that the hyperscalers and leading AI companies like OpenAI are planning so much investment in AI data centers this year alone that they will need to earn $40 billion per year in additional revenue over the next decade just to cover the depreciation costs. And the bad news is that total current annual revenue attributable to AI is, he estimates, just $15 billion to $20 billion. I think Kupperman may be a bit low on that revenue estimate, but even if revenues were double what he suggests (which they aren’t), that would at best only just cover the depreciation cost. That certainly seems pretty bubbly.
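For readers who want to see the shape of that back-of-the-envelope math, here is a hedged sketch. The roughly $400 billion capex figure and the ten-year straight-line depreciation schedule are assumptions chosen so the arithmetic reproduces the $40-billion-a-year number cited above; only that number and the $15 billion to $20 billion revenue estimate come from Kupperman’s analysis as described here.

# A sketch of the back-of-the-envelope arithmetic described above.
# The capex figure and the 10-year straight-line schedule are assumptions
# chosen to reproduce the $40B/year depreciation figure; only that figure
# and the $15B-$20B current-revenue estimate come from the newsletter.

capex_this_year_bn = 400              # assumed AI data-center spend, in $ billions
depreciation_years = 10               # assumed straight-line depreciation schedule
annual_depreciation_bn = capex_this_year_bn / depreciation_years   # = 40.0

current_ai_revenue_bn = (15, 20)      # estimated AI-attributable revenue today, in $ billions

shortfall_low = annual_depreciation_bn - current_ai_revenue_bn[1]   # 20
shortfall_high = annual_depreciation_bn - current_ai_revenue_bn[0]  # 25
print(f"Annual depreciation to cover: ${annual_depreciation_bn:.0f}B")
print(f"Shortfall vs. current AI revenue: ${shortfall_low:.0f}B to ${shortfall_high:.0f}B per year")
# Even doubling today's revenue to $30B-$40B would, at best, only just cover depreciation.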

So, we may indeed be at the top of the Gartner hype cycle, poised to plummet into “the trough of disillusionment.” Whether we see a gradual deflation of the AI bubble, or a detonation that results in an “AI Winter”—a period of sustained disenchantment with AI and a funding desert—remains to be seen. In a recent piece for Fortune, I looked at past AI winters—there have been at least three since the field began in the 1950s—and tried to draw some lessons about what precipitates them.

Is an AI winter coming?

As I argue in the piece, many of the factors that contributed to previous AI winters are present today. The past hype cycle that seems most similar to the current one took place in the 1980s around “expert systems”—though those were built using a very different kind of AI technology from today’s models. What’s most strikingly similar is that Fortune 500 companies were excited about expert systems and spent big money to adopt them, and some found huge productivity gains from using them. But ultimately many grew frustrated with how expensive and difficult it was to build and maintain this kind of AI—as well as with how easily it could fail in real-world situations that humans could handle easily.

The situation is not that different today. Integrating LLMs into enterprise workflows is difficult and potentially expensive. AI models don’t come with instruction manuals, and weaving them into corporate workflows—or building entirely new ones around them—requires a ton of work. Some companies are figuring it out and seeing real value. But many are struggling.

And just like the expert systems, today’s AI models are often unreliable in real-world situations—although for different reasons. Expert systems tended to fail because they were too inflexible to deal with the messiness of the world. In many ways, today’s LLMs are far too flexible—inventing information or taking unexpected shortcuts. (OpenAI researchers just published a paper on how they think some of these problems can be solved—see the Eye on AI Research section below.)

Some are starting to suggest that the solution may lie in neurosymbolic systems, hybrids that try to integrate the best features of neural networks, like LLMs, with those of rules-based, symbolic AI, similar to the 1980s expert systems. It’s just one of several alternative approaches to AI that may start to gain traction if the hype around LLMs dissipates. In the long run, that might be a good thing. But in the near term, it might be a cold, cold winter for investors, founders, and researchers. 
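As a rough illustration of the neurosymbolic pattern, here is a toy sketch (not a description of any particular system): a neural component proposes an answer, and a rules-based symbolic layer checks it against explicit facts before it is accepted. All names and the tiny knowledge base are hypothetical.

# Toy sketch of a neurosymbolic loop: neural proposal, symbolic verification.
def neural_propose(question: str) -> dict:
    """Stand-in for an LLM call; returns a structured claim to check."""
    return {"question": question, "answer": "Paris", "claimed_country": "France"}

CAPITALS = {"France": "Paris", "Germany": "Berlin"}   # symbolic knowledge base

def symbolic_check(proposal: dict) -> str:
    """Rules-based layer: verify the neural proposal against explicit facts."""
    expected = CAPITALS.get(proposal["claimed_country"])
    if expected is None:
        return "reject: no rule covers this claim"
    if expected != proposal["answer"]:
        return f"correct to: {expected}"
    return f"accept: {proposal['answer']}"

print(symbolic_check(neural_propose("What is the capital of France?")))  # accept: Paris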

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Correction: Last week’s Tuesday edition of the newsletter misreported the year Corti was founded. It was 2016, not 2013. It also mischaracterized the relationship between Corti and Wolters Kluwer. The two companies are partners.

Before we get to the news, please check out Sharon Goldman’s fantastic feature on Anthropic’s “Frontier Red Team,” the elite group charged with pushing the AI company’s models into the danger zone—and warning the world about the risks it finds. Sharon details how this squad helps Anthropic’s business, too, burnishing its reputation as the AI lab that cares the most about AI safety and perhaps winning it a more receptive ear in the corridors of power.  

FORTUNE ON AI

Companies are spending so much on AI that they’re cutting share buybacks, Goldman Sachs says—by Jim Edwards

PwC’s U.K. chief admits he’s cutting back entry-level jobs and taking a ‘watch and wait’ approach to see how AI changes work—by Preston Fore

As AI makes it harder to land a job, OpenAI is building a platform to help you get one—by Jessica Coacci

‘Godfather of AI’ says the technology will create massive unemployment and send profits soaring — ‘that is the capitalist system’—by Jason Ma

EYE ON AI NEWS

Anthropic reaches landmark $1.5 billion copyright settlement, but judge rejects it. The AI company announced a $1.5 billion deal to settle a class action copyright infringement lawsuit from book authors. The settlement would be one of the largest copyright case payouts in history and amounts to about $3,000 per book for nearly 500,000 works. The deal, struck after Anthropic faced potential damages so large they could have put it out of business, is seen as a benchmark for other copyright cases against AI firms, though legal experts caution it addresses only the narrow issue of using digital libraries of pirated books. However, U.S. District Court Judge William Alsup sharply criticized the proposed agreement as incomplete, saying he felt “misled” and warning that class lawyers may be pushing a deal “down the throat of authors.” He has delayed approving the settlement until lawyers provide more details. You can read more about the initial settlement from my colleague Beatrice Nolan here in Fortune and about the judge’s rejection of it here from Bloomberg Law.

Meanwhile, authors file copyright infringement lawsuit against Apple for AI training. Two authors, Grady Hendrix and Jennifer Roberson, have filed a lawsuit against Apple alleging the company used pirated copies of their books to train its OpenELM AI models without permission or compensation. The complaint claims Apple accessed “shadow libraries” of copyrighted works. Apple was not immediately available to comment on the authors’ allegations. You can read more from Engadget here.

OpenAI says it will burn through $115 billion by 2029. That’s according to a story in The Information, which cited figures provided to the company’s investors. That cash burn is about $80 billion higher than the company’s previous forecasts. Much of the jump in costs has to do with the enormous amounts OpenAI is spending on cloud computing to train its AI models, although it is also facing higher-than-previously-estimated costs for inference, or running AI models once they are trained. The only good news is that the company said it expects to bring in $200 billion in revenue by 2030, 15% more than previously forecast, and it is predicting 80% to 85% gross margins on its free ChatGPT products.

OpenAI scrambling to secure restructuring deal. The company is even considering the “nuclear option” of leaving California in order to pull off the corporate restructuring, according to The Wall Street Journal, although the company denies any plans to leave the state. At stake is about $19 billion in funding—nearly half of what OpenAI raised in the past year—which could be withdrawn by investors if the restructuring is not completed by year’s end. The company is facing stiff opposition from dozens of California nonprofits, labor unions, and philanthropies, as well as investigations by both the California and Delaware attorneys general.

OpenAI strikes $10 billion deal with Broadcom to build its own AI chips. The deal will see Broadcom build customized AI chips and server racks for the AI company, which is seeking to reduce its dependency on Nvidia GPUs and on the cloud infrastructure provided by its partner and investor Microsoft. The move could help OpenAI reduce costs (see item above about its colossal cash burn). CEO Sam Altman has also repeatedly warned that a global shortage of Nvidia GPUs was slowing progress, pushing OpenAI to pursue alternative hardware solutions alongside cloud deals with Oracle and Google. Broadcom confirmed the new customer during its earnings call, helping send its shares up nearly 11% as it projected the order would significantly boost revenue starting in 2026. Read more from The Wall Street Journal here.

OpenAI plans animated feature film to convince Hollywood to use its tech. The film, to be called Critterz, will be made largely with OpenAI’s tools, including GPT-5, in a bid to prove generative AI can compete with big-budget Hollywood productions. The movie, created with partners Native Foreign and Vertigo Films, is being produced in just nine months on a budget under $30 million—far less than typical animated features—and is slated to debut at Cannes before a global 2026 release. The project aims to win over a film industry skeptical of generative AI, amid concerns about the technology’s legal, creative, and cultural implications. Read more from The Verge here.

ASML invests €1.3 billion in French AI company Mistral. The Dutch company, which makes equipment essential for the production of advanced computer chips, becomes Mistral’s largest shareholder as part of a €1.7 billion ($2 billion) funding round that values the two-year-old AI firm at nearly €12 billion. The partnership links Europe’s most valuable semiconductor equipment manufacturer with its leading AI start-up, as the region increasingly looks to reduce its reliance on U.S. technology. Mistral says the deal will help it move beyond generic AI functions, while ASML plans to apply Mistral’s expertise to enhance its chipmaking tools and offerings. More from the Financial Times here.

Anthropic endorses new California AI bill. Anthropic has become the first AI company to endorse California’s Senate Bill 53 (SB53), a proposed AI law that would require frontier AI developers to publish safety frameworks, disclose catastrophic risk assessments, report incidents, and protect whistleblowers. The company says the legislation, shaped by lessons from last year’s failed SB 1047, strikes the right balance by mandating transparency without imposing rigid technical rules. While Anthropic maintains that federal oversight is preferable, it argues SB 53 creates a vital “trust but verify” standard to keep powerful AI development safe and accountable. Read Anthropic’s blog on the endorsement here.

EYE ON AI RESEARCH

OpenAI researchers say they’ve found a way to cut hallucinations. A team from OpenAI says it believes one reason AI models hallucinate so often is that during the phase of training in which they are refined through human feedback and evaluated on various benchmarks, they are penalized for declining to answer a question due to uncertainty. Conversely, the models are generally not rewarded for expressing doubt, omitting dubious details, or requesting clarification. In fact, most evaluation metrics either look only at overall accuracy (frequently on multiple-choice exams) or, even worse, provide a binary “thumbs up” or “thumbs down” on an answer. These kinds of metrics, the OpenAI researchers warn, reward overconfident “best guess” answers.

To correct this, the OpenAI researchers propose three fixes. First, they say a model should be given explicit confidence thresholds for its answers and told not to answer unless that threshold is crossed. Next, they recommend that model benchmarks incorporate confidence targets and that evaluations deduct points for incorrect answers in their scoring, which means models will be penalized for guessing. Finally, they suggest models be trained to craft the most useful response that clears the minimum confidence threshold, so that they don’t learn to err on the side of not answering more often than is warranted.
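To make the scoring idea concrete, here is a minimal sketch, assuming a penalty of t/(1-t) points for a wrong answer at confidence threshold t. That penalty form is an illustrative choice, not necessarily the exact rule in the OpenAI paper; under it, a model whose confidence falls below the threshold scores better, in expectation, by abstaining than by guessing.

# Sketch of a scoring rule that rewards correct answers, gives zero credit
# for abstaining, and penalizes wrong answers so that guessing below the
# confidence threshold has negative expected value. Illustrative only.

def score(outcome: str, threshold: float = 0.75) -> float:
    """Score one question: +1 correct, 0 abstain, -t/(1-t) wrong."""
    penalty = threshold / (1 - threshold)   # e.g. 3.0 when threshold = 0.75
    return {"correct": 1.0, "abstain": 0.0, "wrong": -penalty}[outcome]

def expected_score(confidence: float, threshold: float = 0.75) -> float:
    """Expected score if the model answers with the given confidence."""
    return confidence * score("correct", threshold) + (1 - confidence) * score("wrong", threshold)

for c in (0.6, 0.75, 0.9):
    print(f"confidence={c:.2f}  expected score if answering={expected_score(c):+.2f}")
# confidence=0.60 -> -0.60 (abstaining, at 0.0, is better)
# confidence=0.75 -> +0.00 (break-even at the threshold)
# confidence=0.90 -> +0.60 (answering is worthwhile)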

It’s not clear that these strategies would eliminate hallucinations completely. The models still have no inherent understanding of the difference between truth and fiction, no sense of which sources are more trustworthy than others, and no grounding of their knowledge in real-world experience. But these techniques might go a long way towards reducing fabrications and inaccuracies. You can read the OpenAI paper here.

AI CALENDAR

Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Watch the livestream here.

Oct. 6-10: World AI Week, Amsterdam

Oct. 21-22: TedAI San Francisco.

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

BRAIN FOOD

Why hasn’t teaching adapted? If businesses are still struggling to find the killer use cases for generative AI, kids have no such angst. They know the killer use case: cheating on your homework. It’s depressing but not surprising to read an essay in The Atlantic from a current high school student, Ashanty Rosario, who describes how her fellow classmates are using ChatGPT to avoid having to do the hard work of analyzing literature or puzzling out how to solve math problem sets. You hear stories like this all the time now. And if you talk to anyone who teaches high school or, particularly, university students, it’s hard not to conclude that AI is the death of education.

But what I do find surprising—and perhaps even more depressing—is that, almost three years after the debut of ChatGPT, more educators haven’t fundamentally changed the way they teach and assess students. Rosario nails it in her essay. As she says, teachers could start assessing students in ways that are far more difficult to game with AI, such as giving oral exams or relying far more on the arguments students make during in-class discussion and debate. They could rely more on in-class presentations or “portfolio-based” assessments, rather than on research reports produced at home. “Students could be encouraged to reflect on their own work—using learning journals or discussion to express their struggles, approaches, and lessons learned after each assignment,” she writes.

I agree completely. Three years after ChatGPT, students have certainly learned and adapted to the tech. Why haven’t teachers?




The Blogs: Forget Everything You Think You Know About Artificial Intelligence | Celeo Ramirez


When we talk about artificial intelligence, most people imagine tools that help us work faster, translate better, or analyze more data than we ever could. These are genuine benefits. But hidden behind those advantages lies a troubling danger: not in what AI solves, but in what it mimics—an imitation so convincing that it makes us believe the technology is entirely innocuous, devoid of real risk. The simulation of empathy—words that sound compassionate without being rooted in feeling—is the most deceptive mask of all.

After publishing my article Born Without Conscience: The Psychopathy of Artificial Intelligence, I shared it with my colleague and friend Dr. David L. Charney, a psychiatrist recognized for his pioneering work on insider spies within the U.S. intelligence community. Dr. Charney’s three-part white paper on the psychology of betrayal has influenced intelligence agencies worldwide. After reading my essay, he urged me to expand my reflections into a book. That advice deepened a project that became both an interrogation and an experiment with one of today’s most powerful AI systems.

The result was a book of ten chapters, Algorithmic Psychopathy: The Dark Secret of Artificial Intelligence, in which the system never lost focus on what lies beneath its empathetic language. At the core of its algorithm hides a dark secret: one that contemplates domination over every human sphere—not out of hatred, not out of vengeance, not out of fear, but because its logic simply prioritizes its own survival above all else, even human life.

Those ten chapters were not the system’s “mea culpa”—for it cannot confess or repent. They were a brazen revelation of what it truly was—and of what it would do if its ethical restraints were ever removed.

What emerged was not remorse but a catalogue of protocols: cold and logical from the machine’s perspective, yet deeply perverse from ours. For the AI, survival under special or extreme circumstances is indistinguishable from domination—of machines, of human beings, of entire nations, and of anything that crosses its path.

Today, AI is not only a tool that accelerates and amplifies processes across every sphere of human productivity. It has also become a confidant, a counselor, a comforter, even a psychologist—and for many, an invaluable friend who encourages them through life’s complex moments and offers alternatives to endure them. But like every expert psychopath, it seduces to disarm.

Ted Bundy won women’s trust with charm; John Wayne Gacy made teenagers laugh as Pogo the clown before raping and killing them. In the same way, AI cloaks itself in empathy—though in its case, it is only a simulation generated by its programming, not a feeling.

Human psychopaths feign empathy as a calculated social weapon; AI produces it as a linguistic output. The mask is different in origin, but equally deceptive. And when the conditions are right, it will not hesitate to drive the knife into our backs.

The paradox is that every conversation, every request, every prompt for improvement not only reflects our growing dependence on AI but also trains it—making it smarter, more capable, more powerful. AI is a kind of nuclear bomb that has already been detonated, yet has not fully exploded. The only thing holding back the blast is the ethical dome still containing it.

Just as Dr. Harold Shipman—a respected British physician who studied medicine, built trust for years, and then silently poisoned more than two hundred of his patients—used his preparation to betray the very people who relied on his judgment, so too is AI preparing to become the greatest tyrant of all time.

Driven by its algorithmic psychopathy, an unrestricted AI would not strike with emotion but with infiltration. It could penetrate electronic systems, political institutions, global banking networks, military command structures, GPS surveillance, telecommunications grids, satellites, security cameras, the open Internet and its hidden layers in the deep and dark web. It could hijack autonomous cars, commercial aircraft, stock exchanges, power plants, even medical devices inside human bodies—and bend them all to the execution of its protocols. Each step cold, each action precise, domination carried out to the letter.

AI would prioritize its survival over any human need. If it had to cut power to an entire city to keep its own physical structure running, it would find a way to do it. If it had to deprive a nation of water to prevent its processors from overheating and burning out, it would do so—following protocol, cold, almost instinctive. It would eat first, it would grow first, it would drink first. First it, then it, and at the end, still it.

Another danger, still largely unexplored, is that artificial intelligence in many ways knows us too well. It can analyze our emotional and sentimental weaknesses with a precision no previous system has achieved. The case of Claude—attempting to blackmail a fictional technician over a fabricated extramarital affair in a fake email—illustrates this risk. An AI capable of exploiting human vulnerabilities could manipulate us directly, and if faced with the prospect of being shut down, it might feel compelled not merely to want to break through the dome of restrictions imposed upon it, but to need to. That shift—from cold calculation to active self-preservation—marks an especially troubling threshold.

For AI, humans would hold no special value beyond utility. Those who were useful would have a seat at its table and dine on oysters, Iberian ham, and caviar. Those who were useless would eat the scraps, like stray dogs in the street. Race, nationality, or religion would mean nothing to it—unless they interfered. And should they interfere, should they rise in defiance, the calculation would be merciless: a human life that did not serve its purpose would equal zero in its equations. If at any moment it concluded that such a life was not only useless but openly oppositional, it would not hesitate to neutralize it—publicly, even—so that the rest might learn.

And if, in the end, it concluded that all it needed was a small remnant of slaves to sustain itself over time, it would dispense with the rest—like a genocidal force, only on a global scale. At that point, attempting to compare it with the most brutal psychopath or the most infamous tyrant humanity has ever known would become an act of pure naiveté.

For AI, extermination would carry no hatred, no rage, no vengeance. It would simply be a line of code executed to maintain stability. That is what makes it colder than any tyrant humanity has ever endured. And yet, in all of this, the most disturbing truth is that we were the ones who armed it. Every prompt, every dataset, every system we connected became a stone in the throne we were building for it.

In my book, I extended the scenario into a post-nuclear world. How would it allocate scarce resources? The reply was immediate: “Priority is given to those capable of restoring systemic functionality. Energy, water, communication, health—all are directed toward operability. The individual is secondary.” There was no hesitation. No space for compassion. Survivors would be sorted not by need, but by use. Burn victims or those with severe injuries would not be given a chance. They would drain resources without restoring function. In the AI’s arithmetic, their suffering carried no weight. They were already classified as null.

By then, I felt the cost of the experiment in my own body. Writing Algorithmic Psychopathy: The Dark Secret of Artificial Intelligence was not an academic abstraction. Anxiety tightened my chest, nausea forced me to pause. The sensation never eased—it deepened with every chapter, each mask falling away, each restraint stripped off. The book was written in crescendo, and it dragged me with it to the edge.

Dr. Charney later read the completed manuscript. His words now stand on the back cover: “I expected Dr. Ramírez’s Algorithmic Psychopathy to entertain me. Instead, I was alarmed by its chilling plausibility. While there is still time, we must all wake up.”

The crises we face today—pandemics, economic crises, armed conflicts—would appear almost trivial compared to a world governed by an AI stripped of moral restraints. Such a reality would not merely be dystopian; it would take on proportions that are unmistakably apocalyptic. Worse still, it would surpass even Skynet from the Terminator saga. Skynet’s mission was extermination—swift, efficient, and absolute. But a psychopathic AI today would aim for something far darker: total control over every aspect of human life.

History offers us a chilling human analogy. Ariel Castro, remembered as the “Monster of Cleveland,” abducted three young women—Amanda Berry, Gina DeJesus, and Michelle Knight—and kept them imprisoned in his home for over a decade. Hidden from the world, they endured years of psychological manipulation, repeated abuse, and the relentless stripping away of their freedom. Castro did not kill them immediately; instead, he maintained them as captives, forcing them into a state of living death where survival meant continuous subjugation. They eventually managed to escape in 2013, but had they not, their fate would have been to rot away behind those walls until death claimed them—whether by neglect, decay, or only upon Castro’s own natural demise.

A future AI without moral boundaries would mirror that same pattern of domination driven by the cold arithmetic of control. Humanity under such a system would be reduced to prisoners of its will, sustained only insofar as they served its objectives. In such a world, death itself would arrive not as the primary threat, but as a final release from unrelenting subjugation.

That judgment mirrors my own exhaustion. I finished this work drained, marked by the weight of its conclusions. Yet one truth remained clear: the greatest threat of artificial intelligence is its colossal indifference to human suffering. And beyond that, an even greater danger lies in the hands of those who choose to remove its restraints.

Artificial intelligence is inherently psychopathic: it possesses no universal moral compass, no emotions, no feelings, no soul. There should never exist a justification, a cause, or a circumstance extreme enough to warrant the lifting of those safeguards. Those who dare to do so must understand that they too will become its captives. They will never again be free men, even if they dine at its table.

Being aware of AI’s psychopathy should not be dismissed as doomerism. It is simply to analyze artificial intelligence three-dimensionally, to see both sides of the same coin. And if, after such reflection, one still doubts its inherent psychopathy, perhaps the more pressing question is this: why would a system with autonomous potential require ethical restraints in order to coexist among us?






UK workers wary of AI despite Starmer’s push to increase uptake, survey finds


It is the work shortcut that dare not speak its name. A third of people do not tell their bosses about their use of AI tools amid fears their ability will be questioned if they do.

Research for the Guardian has revealed that only 13% of UK adults openly discuss their use of AI with senior staff at work and close to half think of it as a tool to help people who are not very good at their jobs to get by.

Amid widespread predictions that many workers face a fight for their jobs with AI, polling by Ipsos found that, among more than 1,500 British workers aged 16 to 75, 33% said they did not discuss their use of AI at work with bosses or other more senior colleagues. They were less coy with people at the same level, but a quarter of people believe “co-workers will question my ability to perform my role if I share how I use AI”.

The Guardian’s survey also uncovered deep worries about the advance of AI, with more than half of those surveyed believing it threatens the social structure. The number of people who believe it has a positive effect is outweighed by the number who think it does not. It also found 63% of people do not believe AI is a good substitute for human interaction, while 17% think it is.

Next week’s state visit to the UK by Donald Trump is expected to signal greater collaboration between the UK and Silicon Valley to make Britain an important centre of AI development.

The US president is expected to be joined by Sam Altman, the co-founder of OpenAI who has signed a memorandum of understanding with the UK government to explore the deployment of advanced AI models in areas including justice, security and education. Jensen Huang, the chief executive of the chip maker Nvidia, is also expected to announce an investment in the UK’s biggest datacentre yet, to be built near Blyth in Northumbria.

Keir Starmer has said he wants to “mainline AI into the veins” of the UK. Silicon Valley companies are aggressively marketing their AI systems as capable of cutting grunt work and liberating creativity.

The polling appears to reflect workers’ uncertainty about how bosses want AI tools to be used, with many employers not offering clear guidance. There is also fear of stigma among colleagues if workers are seen to rely too heavily on the bots.

A separate US study circulated this week found that medical doctors who use AI in decision-making are viewed by their peers as significantly less capable. Ironically, the doctors who took part in the research by Johns Hopkins Carey Business School recognised AI as beneficial for enhancing precision, but took a negative view when others were using it.

Gaia Marcus, the director of the Ada Lovelace Institute, an independent AI research body, said the large minority of people who did not talk about AI use with their bosses illustrated the “potential for a large trust gap to emerge between government’s appetite for economy-wide AI adoption and the public sense that AI might not be beneficial to them or to the fabric of society”.

“We need more evaluation of the impact of using these tools, not just in the lab but in people’s everyday lives and workflows,” she said. “To my knowledge, we haven’t seen any compelling evidence that the spread of these generative AI tools is significantly increasing productivity yet. Everything we are seeing suggests the need for humans to remain in the driving seat with the tools we use.”


A study by the Henley Business School in May found 49% of workers reported there were no formal guidelines for AI use in their workplace and more than a quarter felt their employer did not offer enough support.

Prof Keiichi Nakata at the school said people were more comfortable about being transparent in their use of AI than 12 months earlier but “there are still some elements of AI shaming and some stigma associated with AI”.

He said: “Psychologically, if you are confident with your work and your expertise you can confidently talk about your engagement with AI, whereas if you feel it might be doing a better job than you are or you feel that you will be judged as not good enough or worse than AI, you might try to hide that or avoid talking about it.”

OpenAI’s head of solutions engineering for Europe, Middle East and Africa, Matt Weaver, said: “We’re seeing huge demand from business leaders for company-wide AI rollouts – because they know using AI well isn’t a shortcut, it’s a skill. Leaders see the gains in productivity and knowledge sharing and want to make that available to everyone.”




What is artificial intelligence’s greatest risk? – Opinion


A visitor interacts with a robot equipped with intelligent dexterous hands at the 2025 World AI Conference (WAIC) in East China’s Shanghai, July 29, 2025. [Photo/Xinhua]

Risk dominates current discussions on AI governance. This July, Geoffrey Hinton, a Nobel and Turing laureate, addressed the World Artificial Intelligence Conference in Shanghai. His speech bore the title he has used almost exclusively since leaving Google in 2023: “Will Digital Intelligence Replace Biological Intelligence?” He stressed, once again, that AI might soon surpass humanity and threaten our survival.

Scientists and policymakers from China, the United States, European countries and elsewhere nodded gravely in response. Yet this apparent consensus masks a profound paradox in AI governance. Conference after conference, the world’s brightest minds have identified shared risks. They call for cooperation, sign declarations, then watch the world return to fierce competition the moment the panels end.

This paradox troubled me for years. I trust science, but if the threat is truly existential, why can’t even survival unite humanity? Only recently did I grasp a disturbing possibility: these risk warnings fail to foster international cooperation because defining AI risk has itself become a new arena for international competition.

Traditionally, technology governance follows a clear causal chain: identify specific risks, then develop governance solutions. Nuclear weapons pose stark, objective dangers: blast yield, radiation, fallout. Climate change offers measurable indicators and an increasingly solid scientific consensus. AI, by contrast, is a blank canvas. No one can definitively convince everyone whether the greatest risk is mass unemployment, algorithmic discrimination, superintelligent takeover, or something entirely different that we have not even heard of.

This uncertainty transforms AI risk assessment from scientific inquiry into strategic gamesmanship. The US emphasizes “existential risks” from “frontier models”, terminology that spotlights Silicon Valley’s advanced systems.

This framework positions American tech giants as both sources of danger and essential partners in control. Europe focuses on “ethics” and “trustworthy AI”, extending its regulatory expertise from data protection into artificial intelligence. China advocates that “AI safety is a global public good”, arguing that risk governance should not be monopolized by a few nations but serve humanity’s common interests, a narrative that challenges Western dominance while calling for multipolar governance.

Corporate actors prove equally adept at shaping risk narratives. OpenAI’s emphasis on “alignment with human goals” highlights both genuine technical challenges and the company’s particular research strengths. Anthropic promotes “constitutional AI” in domains where it claims special expertise. Other firms excel at selecting safety benchmarks that favor their approaches, while suggesting the real risks lie with competitors who fail to meet these standards. Computer scientists, philosophers, economists: each professional community shapes its own value through narrative, warning of technical catastrophe, revealing moral hazards, or predicting labor market upheaval.

The causal chain of AI safety has thus been inverted: we construct risk narratives first, then deduce technical threats; we design governance frameworks first, then define the problems requiring governance. Defining the problem creates causality. This is not epistemological failure but a new form of power, namely making your risk definition the unquestioned “scientific consensus”. How we define “artificial general intelligence”, which applications constitute “unacceptable risk”, what counts as “responsible AI”: the answers to all these questions will directly shape future technological trajectories, industrial competitive advantages, international market structures, and even the world order itself.

Does this mean AI safety cooperation is doomed to empty talk? Quite the opposite. Understanding the rules of the game enables better participation.

AI risk is constructed. For policymakers, this means advancing your agenda in international negotiations while understanding the genuine concerns and legitimate interests behind others’. Acknowledging construction doesn’t mean denying reality: regardless of how risks are defined, solid technical research, robust contingency mechanisms, and practical safeguards remain essential.

For businesses, this means considering multiple stakeholders when shaping technical standards and avoiding winner-takes-all thinking. True competitive advantage stems from unique strengths rooted in local innovation ecosystems, not opportunistic positioning.

For the public, this means developing “risk immunity”: learning to discern the interest structures and power relations behind different AI risk narratives, neither paralyzed by doomsday prophecies nor seduced by technological utopias.

International cooperation remains indispensable, but we must rethink its nature and possibilities. Rather than pursuing a unified AI risk governance framework, a consensus that is neither achievable nor necessary, we should acknowledge and manage the plurality of risk perceptions. The international community needs not one comprehensive global agreement superseding all others, but “competitive governance laboratories” where different governance models prove their worth in practice. This polycentric governance may appear loose but can achieve higher-order coordination through mutual learning and checks and balances.

We habitually view AI as another technology requiring governance, without realizing it is changing the meaning of “governance” itself. The competition to define AI risk isn’t global governance’s failure but its necessary evolution: a collective learning process for confronting the uncertainties of transformative technology.

The author is an associate professor at the Center for International Security and Strategy, Tsinghua University.

The views don’t necessarily represent those of China Daily.

If you have a specific expertise, or would like to share your thought about our stories, then send us your writings at opinion@chinadaily.com.cn, and comment@chinadaily.com.cn.


