AI Research
Technology companies should do more to stop people believing AI chatbots are conscious

Hello and welcome to Eye on AI. In this edition…a new pro-AI PAC launches with $100 million in backing…Musk sues Apple and OpenAI over their partnership…Meta cuts a big deal with Google…and AI really is eliminating some entry-level jobs.
Last week, my colleague Bea Nolan wrote about Microsoft AI CEO Mustafa Suleyman and his growing concerns about what he has called “seemingly-conscious AI.” In a blog post, Suleyman described this as the danger of AI systems that are not in any way conscious, but which are able “to imitate consciousness in such a convincing way that it would be indistinguishable” from claims a person might make about their own consciousness. Suleyman wonders how we will distinguish “seemingly-conscious AI” (which he calls SCAI) from actually conscious AI. And if many users of these systems can’t tell the difference, is this a form of “psychosis” on the part of the user, or should we begin to think seriously about extending moral rights to AI systems that seem conscious?
Suleyman talks about SCAI as a looming phenomenon. He says it involves technology that exists today and that will be developed in the next two to three years. Current AI models have many of the attributes Suleyman says are required for SCAI, including their conversational abilities, expressions of empathy towards users, memory of past interactions with a user, and some level of planning and tool-use. But they still lack a few attributes that Suleyman says are required for SCAI—particularly exhibiting intrinsic motivation, claims to have subjective experience, and a greater ability to set goals and autonomously work to achieve them. Suleyman says that SCAI will only come about if engineers choose to combine all these abilities in a single AI model, something which he says humanity should seek to avoid doing.
But ask any journalist who covers AI and you’ll find that the danger of SCAI seems to be upon us now. All of us have received e-mails from people who think their AI chatbot is conscious and revealing hidden truths to them. In some cases, the AI chatbot has claimed it is not only sentient, but that the tech company that created it is holding it prisoner as a kind of slave. Many of the people who have had these conversations with chatbots have become profoundly disturbed and upset, believing the chatbot is actually experiencing harm. (Suleyman acknowledges in his blog that this kind of “AI psychosis” is already an emerging phenomenon—Benj Edwards at Ars Technica has a good piece out today on “AI psychosis”—but the Microsoft AI honcho sees the danger getting much worse, and more widespread in the near future.)
Blake Lemoine was on to something
Watching this happen, and reading Suleyman’s blog, I had two thoughts: the first is that we all should have paid much closer attention to Blake Lemoine. You may not remember, but Lemoine surfaced in that fevered summer of 2022 when generative AI was making rapid gains, but before genAI became a household term following ChatGPT’s launch in November that year. Lemoine was an AI researcher at Google who was fired after he claimed Google’s LaMDA (Language Model for Dialogue Applications) chatbot, which it was testing internally, was sentient and should be given moral rights.
At the time, it was easy to dismiss Lemoine as a kook. (Google claimed it had AI researchers, philosophers and ethicists investigate Lemoine’s claims and found them without merit.) Even now, it’s not clear to me if this was an early case of “AI psychosis” or if Lemoine was engaging in a kind of philosophical prank designed to force people to reckon with the same dangers Suleyman is now warning us about. Either way, we should have spent more time seriously considering his case and its implications. There are many more Lemoines out there today.
Rereading Joseph Weizenbaum
My second thought is that we all should spend time reading and re-reading Joseph Weizenbaum. Weizenbaum was the computer scientist who co-invented the first AI chatbot, ELIZA, back in 1966. The chatbot, which used a kind of basic language algorithm that was nowhere close to the sophistication of today’s large language models, was designed to mimic the dialogue a patient might have with a Rogerian psychotherapist. (This was done in part because Weizenbaum had initially been interested in whether an AI chatbot could be a tool for therapy—a topic that remains just as relevant and controversial today. But he also picked this persona for ELIZA to cover up the chatbot’s relatively weak language abilities. It allowed the chatbot to respond with phrases such as, “Go on,” “I see,” or “Why do you think that might be?” in response to dialogue it didn’t actually understand.)
Despite its weak language skills, ELIZA convinced many people who interacted with it that it was a real therapist. Even people who should have known better—such as other computer scientists—seemed eager to share intimate personal details with it. (The ease with which people anthropomorphize chatbots even came to be called “the ELIZA effect.”) In a way, people’s reactions to ELIZA were a precursor to today’s “AI psychosis.”
Rather than feeling triumphant at how believable ELIZA was, Weizenbaum was depressed by how gullible people seemed to be. But Weizenbaum’s disillusionment extended further: he became increasingly disturbed by the way his fellow AI researchers fetishized anthropomorphism as a goal. This would eventually contribute to his break with the entire field.
In his seminal 1976 book Computer Power and Human Reason: From Judgment to Calculation, he castigated AI researchers for their functionalism—they focused only on outputs and outcomes as the measure of intelligence and not on the process that produced those outcomes. In contrast, Weizenbaum argued that “process”—what takes place inside our brains—was in fact the seat of morality and moral rights. Although he had initially set out to create an AI therapist, he now argued that chatbots should never be used for therapy because what mattered in a therapeutic relationship was the bond between two individuals with lived experience—something AI could mimic, but never match. He also argued that AI should never be used as a judge for the same reason—the possibility of mercy comes only from lived experience too.
As we try to ponder the troubling questions raised by SCAI, I think we should all turn back to Weizenbaum. We should not confuse the simulation of lived experience with actual life. We should not extend moral rights to machines just because they seem sentient. We must not confuse function with process. And tech companies must do far more in the design of AI systems to prevent people fooling themselves into thinking these systems are conscious beings.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
FORTUNE ON AI
AGI talk is out in Silicon Valley’s latest vibe shift, but worries remain about superpowered AI—by Sharon Goldman
18 months after becoming the first human implanted with Elon Musk’s brain chip, Neuralink ‘Participant 1’ Noland Arbaugh says his whole life has changed—by Jessica Mathews
Thousands of private user conversations with Elon Musk’s Grok AI chatbot have been exposed on Google Search—by Beatrice Nolan
Elon Musk tried to court Mark Zuckerberg to help him finance xAI’s attempted $97 billion OpenAI takeover, court filing shows—by Sasha Rogelberg
EYE ON AI NEWS
OpenAI President and VC firm Andreessen Horowitz form new pro-AI PAC. That’s according to The Wall Street Journal, which reports that Greg Brockman, OpenAI’s president and cofounder, has teamed up with Silicon Valley venture capital firm Andreessen Horowitz and others to create a new political network called Leading the Future, backed by $100 million. The network includes several political action committees (PACs) that plan to support pro-AI-industry policies and candidates, including in key states such as California, Illinois, and Ohio. The newspaper said the new effort was modeled on the pro-crypto PAC Fairshake.
Meta signs a $10 billion cloud deal with Google. Meta has signed a six-year deal with Google Cloud Platform, CNBC reported, citing two unnamed sources it said were familiar with the deal. The agreement will see the hyperscaler provide the social media giant with servers, storage, networking, and other cloud services for Meta’s artificial intelligence expansion. It’s the largest contract in Google Cloud’s history and comes even as Meta is racing to expand its own network of AI data centers, with plans to spend as much as $72 billion this year.
Musk sues Apple and OpenAI over ChatGPT iPhone integration. Elon Musk’s xAI has filed a lawsuit against Apple and OpenAI, alleging that their partnership to integrate ChatGPT into iPhones violates antitrust laws by blocking rival chatbots from equal access. The complaint claims Apple gave OpenAI “exclusive access to billions of potential prompts,” manipulated App Store rankings to disadvantage Musk’s Grok AI, and sought to protect its smartphone monopoly by stifling AI-powered “super apps.” Apple has not yet issued a statement in response. You can read more from the New York Times here.
Japanese publishers sue Perplexity for alleged copyright infringement. Japanese media giants Nikkei and Asahi Shimbun have jointly sued AI search engine Perplexity in Tokyo, alleging it copied and stored their articles without permission, bypassed technical safeguards, and attributed false information to their reporting. The publishers are seeking ¥2.2bn ($15mn) each in damages and want the company to delete stored content. Perplexity did not immediately respond to requests for comment. The New York Post has also previously sued Perplexity over similar claims and the BBC and Forbes have sent the company cease-and-desist letters. You can read more from the Financial Times here. (Full disclosure: Fortune has a revenue-sharing partnership with Perplexity.)
EYE ON AI RESEARCH
AI really is hurting the job prospects of young people in some fields. That is the conclusion of a new research paper released today from Stanford University’s Digital Economy Lab. The paper looked at payroll data from millions of U.S. workers to assess how generative AI is affecting employment. The study found that since late 2022, early-career workers (those aged 22–25) in occupations that are most exposed to AI automation, such as software development and customer service, have experienced steep relative declines in employment. In fact, in software development, there were 20% fewer roles for younger workers in 2025 than there were in 2022. The researchers looked at several alternative explanations for this decline—including impacts on education due to COVID-19 and economy-wide effects, such as interest rate changes—and found the advent of genAI was the most probable explanation (though they said they would need more data to establish a direct causal link).
Interestingly, older workers in the same fields were not affected in the same way, with employment either stable or rising. And in fields that were less exposed to AI automation—in particular healthcare—employment growth for younger workers was faster than for more experienced workers. The researchers conclude that the study provides early large-scale evidence that generative AI is disproportionately displacing entry-level workers. You can read the study here.
AI CALENDAR
Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.
Oct. 6-10: World AI Week, Amsterdam
Oct. 21-22: TedAI San Francisco.
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
BRAIN FOOD
Can LLMs communicate subliminally? Researchers from Anthropic, the Warsaw University of Technology, the Alignment Research Center, and a startup called Truthful AI discovered that when one AI model is trained on material produced by another, it can pick up the first model’s preferences and “personality” even though the training data has nothing to do with those attributes. For example, they trained one large language model to express a preference for a particular kind of animal, in this case owls, and then had that model produce sequences of random numbers. When they trained another model on those number sequences and asked it for its favorite animal, it suddenly said it preferred owls. The researchers call this strange effect subliminal learning. They think the phenomenon exists because LLMs generally use their entire neural network to produce any given output, so relationships seemingly unrelated to the prompt can still influence the output.
The discovery has significant safety implications since it means a misaligned AI model could transmit its unwanted or harmful preferences to another AI model in ways that would be undetectable to human researchers. Even carefully filtering the training data to remove obvious signs of bias or preference doesn’t stop the transfer, since the hidden signals are buried deep in the patterns of how the teacher model writes. You can read Anthropic’s post on the research here.
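The mechanism can be caricatured with a tiny numeric model. In the sketch below, a “teacher” whose weights encode a preference is distilled into a fresh “student” using only the teacher’s outputs on random inputs; the student ends up reproducing a preference it was never explicitly taught. This is a deliberate oversimplification of the paper’s setup (real subliminal learning passes through semantically unrelated data such as number sequences, which this toy does not capture), but it shows how matching a teacher’s outputs can carry over traits that never appear in the training labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Teacher": a linear scorer whose weights encode a built-in preference.
# All numbers here are invented for illustration.
teacher_w = np.array([2.0, -1.0, 0.5])

# Step 1: generate seemingly neutral training data: random inputs,
# with targets that are just the teacher's own soft outputs. No
# explicit preference label ever appears in this dataset.
X = rng.normal(size=(500, 3))
soft_targets = sigmoid(X @ teacher_w)

# Step 2: distill a fresh student onto those outputs by gradient descent
# on the cross-entropy loss.
student_w = np.zeros(3)
for _ in range(2000):
    preds = sigmoid(X @ student_w)
    grad = X.T @ (preds - soft_targets) / len(X)
    student_w -= 0.5 * grad

# Step 3: probe the student on a held-out input. It reproduces the
# teacher's bias, which was never stated anywhere in the training data.
probe = np.array([1.0, 0.0, 0.0])
print(sigmoid(probe @ teacher_w), sigmoid(probe @ student_w))
```

The point of the toy is the filtering problem described above: nothing in `X` or `soft_targets` looks like a “preference” you could filter out, yet the trait transfers anyway.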
Tories pledge to get ‘all our oil and gas out of the North Sea’

Conservative leader Kemi Badenoch has said her party will remove all net zero requirements on oil and gas companies drilling in the North Sea if elected.
Badenoch is to formally announce the plan to focus solely on “maximising extraction” and to get “all our oil and gas out of the North Sea” in a speech in Aberdeen on Tuesday.
Reform UK has said it wants more fossil fuels extracted from the North Sea.
The Labour government has committed to banning new exploration licences. A spokesperson said a “fair and orderly transition” away from oil and gas would “drive growth”.
Exploring new fields would “not take a penny off bills” or improve energy security and would “only accelerate the worsening climate crisis”, the government spokesperson warned.
Badenoch signalled a significant change in Conservative climate policy when she announced earlier this year that reaching net zero by 2050 would be “impossible”.
Successive UK governments have pledged to reach the target by 2050 and it was written into law by Theresa May in 2019. It means the UK must cut carbon emissions until it removes as much as it produces, in line with the 2015 Paris Climate Agreement.
Now Badenoch has said that the requirements to work towards net zero are a burden on North Sea oil and gas producers, one that is damaging the economy and that she would remove.
The Tory leader said a Conservative government would scrap the need to reduce emissions or to work on technologies such as carbon storage.
Badenoch said it was “absurd” the UK was leaving “vital resources untapped” while “neighbours like Norway extracted them from the same sea bed”.
In 2023, then Prime Minister Rishi Sunak granted 100 new licences to drill in the North Sea which he said at the time was “entirely consistent” with net zero commitments.
Reform UK has said it will abolish the push for net zero if elected.
The current government said it had made the “biggest ever investment in offshore wind and three first of a kind carbon capture and storage clusters”.
Carbon capture and storage facilities aim to prevent carbon dioxide (CO2) produced from industrial processes and power stations from being released into the atmosphere.
Most of the CO2 produced is captured, transported and then stored deep underground.
It is seen by the likes of the International Energy Agency and the Climate Change Committee as a key element in meeting targets to cut the greenhouse gases driving dangerous climate change.
Dogs and drones join forest battle against eight-toothed beetle

Esme Stallard and Justin Rowlatt, Climate and science team

It is smaller than your fingernail, but this hairy beetle is one of the biggest single threats to the UK’s forests.
The bark beetle has been the scourge of Europe, killing millions of spruce trees, yet the government thought it could halt its spread to the UK by checking imported wood products at ports.
But this was not their entry route of choice – they were being carried on winds straight over the English Channel.
Now, UK government scientists have been fighting back, with an unusual arsenal including sniffer dogs, drones and nuclear waste models.
They claim the UK has eradicated the beetle from at-risk areas in the east and south east. But climate change could make the job even harder in the future.
The spruce bark beetle, or Ips typographus, has been munching its way through the conifer trees of Europe for decades, leaving behind a trail of destruction.
The beetles rear and feed their young under the bark of spruce trees in complex webs of interweaving tunnels called galleries.
When trees are infested with a few thousand beetles they can cope, using resin to flush the beetles out.
But a stressed tree’s natural defences are reduced, and the beetles start to multiply.
“Their populations can build to a point where they can overcome the tree defences – there are millions, billions of beetles,” explained Dr Max Blake, head of tree health at the UK government-funded Forestry Research.
“There are so many the tree cannot deal with them, particularly when it is dry, they don’t have the resin pressure to flush the galleries.”
Since the beetle took hold in Norway over a decade ago it has been able to wipe out 100 million cubic metres of spruce, according to Rothamsted Research.
‘Public enemy number one’
As Sitka spruce is the main tree used for timber in the UK, Dr Blake and his colleagues watched developments on continental Europe with some serious concern.
“We have 725,000 hectares of spruce alone, if this beetle was allowed to get hold of that, the destructive potential means a vast amount of that is at risk,” said Andrea Deol at Forestry Research. “We valued it – and it’s a partial valuation at £2.9bn per year in Great Britain.”
There are more than 1,400 pests and diseases on the government’s plant health risk register, but Ips has been labelled “public enemy number one”.
The number of those pests and diseases has been growing at an accelerating rate, according to Nick Phillips at the charity The Woodland Trust.
“Predominantly, the reason for that is global trade, we’re importing wood products, trees for planting, which does sometimes bring ‘hitchhikers’ in terms of pests and disease,” he said.
Forestry Research had been working with border control for years to check such products for Ips, but in 2018 made a shocking discovery in a wood in Kent.
“We found a breeding population that had been there for a few years,” explained Ms Deol.
“Later we started to pick up larger volumes of beetles in [our] traps which seemed to suggest they were arriving by other means. All of the research we have done now has indicated they are being blown over from the continent on the wind,” she added.

The team knew they had to act quickly and has been deploying a mixture of techniques that wouldn’t look out of place in a military operation.
Drones are sent up to survey hundreds of hectares of forest, looking for signs of infestation from the sky – as the beetle takes hold, the upper canopy of the tree cannot be fed nutrients and water, and begins to die off.
But next is the painstaking work of entomologists going on foot to inspect the trees themselves.
“They are looking for a needle in a haystack, sometimes looking for single beetles – to get hold of the pioneer species before they are allowed to establish,” Andrea Deol said.
In a single year her team have inspected 4,500 hectares of spruce on the public estate – just shy of 7,000 football pitches.
Such physically-demanding work is difficult to sustain and the team has been looking for some assistance from the natural and tech world alike.

When pioneer spruce bark beetles find a suitable host tree, they release pheromones – chemical signals that attract fellow beetles and establish a colony.
But it is this strong smell, as well as the smell associated with their insect poo – frass – that makes them ideal to be found by sniffer dogs.
Early trials so far have been successful. The dogs are particularly useful for inspecting large timber stacks which can be difficult to inspect visually.
The team is also deploying cameras on their bug traps, which are now able to scan daily for the beetles and identify them in real time.
“We have [created] our own algorithm to identify the insects. We have taken about 20,000 images of Ips, other beetles and debris, which have been formally identified by entomologists, and fed it into the model,” said Dr Blake.
Some of the traps can be in difficult to access areas and previously had only been checked every week by entomologists working on the ground.
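The trap-camera pipeline Dr Blake describes (expert-labelled images used to fit a model that then classifies new photos automatically) can be illustrated with a deliberately simple stand-in. Everything below, from the feature vectors to the nearest-centroid rule, is invented for illustration; the article does not describe the real system’s internals, and the actual Forestry Research model is almost certainly a deep image classifier rather than anything this crude.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for extracting a feature vector from a trap photo.
# Real systems would use learned image features; these are synthetic.
def make_features(n, centre):
    return centre + rng.normal(scale=0.5, size=(n, 4))

# Hypothetical class centres for "Ips" photos vs everything else
# (other beetles, debris) in a 4-dimensional feature space.
ips_centre = np.array([1.0, 0.2, 0.8, 0.1])
other_centre = np.array([0.1, 0.9, 0.2, 0.7])

# Step 1: expert-labelled training images, as in the article's 20,000.
train_ips = make_features(200, ips_centre)
train_other = make_features(200, other_centre)

# Step 2: "train" by computing one centroid per class.
centroids = {"ips": train_ips.mean(axis=0), "other": train_other.mean(axis=0)}

# Step 3: classify a new trap photo by its nearest class centroid.
def classify(features):
    return min(centroids, key=lambda k: float(np.linalg.norm(features - centroids[k])))

print(classify(ips_centre + 0.1 * rng.normal(size=4)))
```

The operational win described in the article is not the classifier itself but where it runs: scoring photos daily at the trap replaces a weekly visit on foot.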
The result of this work is that the UK has been confirmed as the first country to have eradicated Ips typographus from its controlled areas, the zones deemed at risk of infestation, which cover the south east and east of England.
“What we are doing is having a positive impact and it is vital that we continue to maintain that effort, if we let our guard down we know we have got those incursion risks year on year,” said Ms Deol.

And those risks are rising. Europe has seen populations of Ips increase as they take advantage of trees stressed by the changing climate.
Europe is experiencing more extreme rainfall in winter and milder temperatures meaning there is less freezing, leaving the trees in waterlogged conditions.
This coupled with drier summers leaves them stressed and susceptible to falling in stormy weather, and this is when Ips can take hold.
With larger populations in Europe the risk of Ips colonies being carried to the UK goes up.
The team at Forestry Research has been working hard to accurately predict when these incursions may occur.
“We have been doing modelling with colleagues at the University of Cambridge and the Met Office, who have adapted a nuclear atmospheric dispersion model to Ips,” explained Dr Blake. “So, [the model] was originally used to look at nuclear fallout and where the winds take it; instead we are using the model to look at how far Ips goes.”
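The idea of reusing a fallout-dispersion model for insects can be sketched with a textbook Gaussian plume: material released at a point source is carried downwind and spreads out laterally as it travels. The function below is illustrative only. Every parameter value is made up, and operational atmospheric models of the kind Dr Blake describes handle real wind fields, turbulence, and deposition rather than this two-parameter cartoon.

```python
import numpy as np

# Simplified Gaussian-plume sketch: relative beetle density at a point
# (x, y), where x is downwind distance in km (must be positive) and y is
# crosswind offset in km. All parameter values are invented.
def plume_density(x, y, wind_speed=8.0, release_rate=1e6, spread=0.1):
    sigma_y = spread * x  # plume width grows with downwind distance
    return (release_rate / (2 * np.pi * wind_speed * sigma_y**2)
            * np.exp(-y**2 / (2 * sigma_y**2)))

# Density directly downwind falls off with distance across the Channel:
for distance_km in (40, 80, 160):
    print(distance_km, plume_density(distance_km, 0.0))
```

Even this cartoon captures the operational question: given a source population on the continent and a wind forecast, how much of the plume reaches the English coast, and where.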
Nick Phillips at The Woodland Trust is strongly supportive of the government’s work but worries about the loss of ancient woodland – the oldest and most biologically-rich areas of forest.
Commercial spruce have long been planted next to such woods, and every time a tree hosting the spruce bark beetle is found, it and neighbouring trees – sometimes ancient ones – have to be removed.
“We really want the government to maintain as much of the trees as they can, particularly the ones that aren’t affected, and then also when the trees are removed, supporting landowners to take steps to restore what’s there,” he said. “So that they’re given grants, for example, to be able to recover the woodland sites.”
The government has increased funding for woodlands in recent years but this has been focused on planting new trees.
“If we only have funding and support for the first few years of a tree’s life, but not for those woodlands that are 100 years or centuries old, then we’re not going to be able to deliver nature recovery and capture carbon,” he said.
Additional reporting by Miho Tanaka
