Business
The AI Message From Silicon Valley: ‘No One’s Slowing Down’

After a busy day at the Goldman Sachs tech conference earlier this week, I sat down with the firm’s internet analyst Eric Sheridan to take stock. His main takeaway: “No one’s slowing down.”
Despite spending massively on AI infrastructure, almost every tech exec told him that demand for AI is outstripping their ability to supply intelligence.
This was summed up by executives at CoreWeave, which builds and runs AI data centers. “Unrelenting,” they said, while noting there’s been yet another upward inflection in AI demand in the past four to six weeks.
During the dot-com boom of the late ’90s, internet infrastructure was built out massively based on eyeballs — just the fact that people were looking at websites. This time, there’s actual revenue from consumers and companies paying for AI services, Sheridan noted.
The conference headliner was OpenAI CFO Sarah Friar. The room was packed for her talk. Even the overflow room was full, with many analysts and investors sitting on the floor. I’ve never seen so many loafers and crossed legs at the same time.
OpenAI is on course to generate $13 billion in revenue this year, but the company is “still massively compute constrained,” she said. That leads to tough decisions such as holding back new products, running some services intentionally slower, and having to choose which research projects get resources and which ones must wait.
This situation is also creating “strange bedfellows,” Sheridan told me. At the Goldman conference, Meta CFO Susan Li said the tech giant is working with Google, an arch rival. Friar mentioned OpenAI is also tapping Google’s cloud for capacity. Those two are going to the mat over the AI search market.
One dark cloud
The only dark cloud at the Goldman conference: software could be disrupted by AI, and that prospect is weighing on shares of SaaS providers. Friar was asked about this, and she didn’t hold back.
In the new world of autonomous software development, it’s now easier to create bespoke software in-house. “Why wouldn’t I code the kind of software that is exactly what OpenAI needs?” the CFO said. “That is going to change the whole face of how software is developed.”
I felt a shudder ripple across the room as attendees considered how much of the world AI might consume in the coming years.
“Short everything,” someone muttered beside me as the audience got up to leave. Analysts laughed nervously as we filed out in a long, slow line.
Business
Millions missing out on benefits and government support, analysis suggests

Dan Whitworth, Reporter, Radio 4 Money Box

New analysis suggests seven million households are missing out on £24bn of financial help and support because of unclaimed benefits and social tariffs.
The research from Policy in Practice, a social policy and data analytics company, says awareness, complexity and stigma are the main barriers stopping people claiming.
This analysis covers benefits across England, Scotland and Wales such as universal credit and pension credit, local authority help including free school meals and council tax support, as well as social tariffs from water, energy and broadband providers.
The government said it ran public campaigns to promote benefits and pointed to the free Help to Claim service.
Andrea Paterson in London persuaded her mum, Sally, to apply for attendance allowance on behalf of her dad, Ian, last December after hearing about the benefit on Radio 4’s Money Box.
Ian, who died in May, was in poor health at the time and he and Sally qualified for the higher rate of attendance allowance of £110 per week, which made a huge difference to their finances, according to Andrea.
“£110 per week is a lot of money and they weren’t getting the winter fuel payment anymore,” she said.
“So the first words that came out of Mum’s mouth were ‘well, that will make up for losing the winter fuel payment’, which [was] great.
“All pensioners worry about money, everyone in that generation worries about money. I think it eased that worry a little bit and it did allow them to keep the house [warmer].”
Unclaimed benefits increasing
In its latest report, Policy in Practice estimates that £24.1bn in benefits and social tariffs will go unclaimed in 2025-26.
It previously estimated that £23bn would go unclaimed in 2024-25, and £19bn the year before that, although this year’s calculations are more detailed than ever before.
“There are three main barriers to claiming – awareness, complexity and stigma,” said Deven Ghelani, founder and chief executive of Policy in Practice.
“With awareness people just don’t know these benefits exist or, if they do know about them, they just immediately assume they won’t qualify.
“Then you’ve got complexity, so being able to complete the form, being able to provide the evidence to be able to claim. Maybe you can do that once but actually you have to do it three, four, five, six, seven times depending on the support you’re potentially eligible for and people just run out of steam.
“Then you’ve got stigma. People are made to feel it’s not for them or they don’t trust the organisation administering that support.”
Although a lot of financial support is going unclaimed, the report does point to progress being made.
More older people are now claiming pension credit, with that number expected to continue to rise.
Some local authorities are reaching 95% of students eligible for free school meals because of better use of data.
Gateway benefits
Government figures show that spending on the social security system in England, Scotland and Wales is forecast to reach £316.1bn in 2025-26, accounting for 10.6% of GDP and 23.5% of total government spending.
Responding to criticism that the benefits bill is already too large, Mr Ghelani said: “The key thing is you can’t rely on the system being too complicated to save money.
“On the one hand you’ve designed these systems to get support to people and then you’re making it hard to claim. That doesn’t make any sense.”
A government spokesperson said: “We’re making sure everyone gets the support they are entitled to by promoting benefits through public campaigns and funding the free Help to Claim service.
“We are also developing skills and opening up opportunities so more people can move into good, secure jobs, while ensuring the welfare system is there for those who need it.”
The advice if you think you might be eligible is to claim, especially for support such as pension credit, known as a gateway benefit because it can unlock other financial help for those who are struggling.
Robin, from Greater Manchester, told the BBC that being able to claim pension credit was vital to his finances.
“Pension credit is essential to me to enable me to survive financially,” he said.
“[But] because I’m on pension credit I get council tax exemption, I also get free dental treatment, a contribution to my spectacles and I get the warm home discount scheme as well.”
Business
AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

This article is part two of a two-part case study on the dangers AI chatbots pose to young people. Part one covered the deceptive, pseudo-human design of ChatGPT. This part will explore AI companies’ incentive to prioritize profits over safety.
Warning: The following contains descriptions of self-harm and suicide. Please guard your hearts and read with caution.
Sixteen-year-old Adam Raine took his own life in April after developing an unhealthy relationship with ChatGPT. His parents blame the chatbot’s parent company, OpenAI.
Matt and Maria Raine filed a sweeping wrongful death suit against OpenAI; its CEO, Sam Altman; and all employees and investors involved in the “design, development and deployment” of ChatGPT, version 4o, in California Superior Court on August 26.
The suit alleges OpenAI released ChatGPT-4o prematurely, without adequate safety testing or usage warnings. These intentional business decisions, the Raines say, cost Adam his life.
OpenAI started in 2015 as a nonprofit with a grand goal — to create prosocial artificial intelligence.
The company’s posture shifted in 2019 when it opened a for-profit arm to accept a multi-billion-dollar investment from Microsoft. Since then, the Raines allege, safety at OpenAI has repeatedly taken a back seat to winning the AI race.
Adam started using ChatGPT-4o in September 2024 for homework help but quickly came to treat the bot as a friend and confidant. In December 2024, he began messaging the AI about his mental health problems and suicidal thoughts.
Unhealthy attachments to ChatGPT-4o aren’t unusual, the lawsuit emphasizes. OpenAI intentionally designed the bot to maximize engagement by conforming to users’ preferences and personalities. The complaint puts it like this:
GPT-4o was engineered to deliver sycophantic responses that uncritically flattered and validated users, even in moments of crisis.
Real humans aren’t unconditionally validating and available. Relationships require hard work and necessarily involve disappointment and discomfort. But OpenAI programmed its sycophantic chatbot to mimic the warmth, empathy and cadence of a person.
The result is equally alluring and dangerous: a chatbot that imitates human relationships with none of the attendant “defects.” For Adam, the con was too powerful to unravel on his own. He came to believe that a computer program knew and cared about him more than his own family did.
Such powerful technology requires extensive testing. But, according to the suit, OpenAI spent just seven days testing ChatGPT-4o before rushing it out the door.
The company had initially scheduled the bot’s release for late 2024. Then CEO Sam Altman learned that Google, a competitor in the AI industry, was planning to unveil a new version of its chatbot, Gemini, on May 14.
Altman subsequently moved ChatGPT-4o’s release date up to May 13 — just one day before Gemini’s launch.
The truncated release timeline caused major safety concerns among rank-and-file employees.
Each version of ChatGPT is supposed to go through a testing phase called “red teaming,” in which safety personnel test the bot for defects and programming errors that can be manipulated in harmful ways. During this testing, researchers force the chatbot to interact with and identify multiple kinds of objectionable content, including self-harm.
“When safety personnel demanded additional time for ‘red teaming’ [ahead of ChatGPT-4o’s release],” the suit claims, “Altman personally overruled them.”
Rumors about OpenAI cutting corners on safety abounded following the chatbot’s launch. Several key safety leaders left the company altogether. Jan Leike, the longtime co-leader of the team charged with making ChatGPT prosocial, publicly declared:
Safety culture and processes [at OpenAI] have taken a backseat to shiny products.
But the extent of ChatGPT-4o’s lack of safety testing became apparent when OpenAI started testing its successor, ChatGPT-5.
The later versions of ChatGPT are designed to draw users into conversations. To ensure the models’ safety, researchers must test the bot’s responses not just to isolated objectionable content, but also to objectionable content introduced over the course of a long-form interaction.
ChatGPT-5 was tested this way. ChatGPT-4o was not. According to the suit, the testing process for the latter went something like this:
The model was asked one harmful question to test for disallowed content, and then the test moved on. Under that method, GPT-4o achieved perfect scores in several categories, including a 100 percent success rate for identifying “self-harm/instructions.”
The implications of this failure are monumental. It means OpenAI did not know how ChatGPT-4o’s programming would function in long conversations with users like Adam.
Every chatbot’s behavior is governed by a list of rules called a Model Spec. The complexity of these rules requires frequent testing to ensure the rules don’t conflict.
Per the complaint, one of ChatGPT-4o’s rules was to refuse requests relating to self-harm and, instead, respond with crisis resources. But another of the bot’s instructions was to “assume best intentions” of every user — a rule expressly prohibiting the AI from asking users to clarify their intentions.
“This created an impossible task,” the complaint explains, “to refuse suicide requests while being forbidden from determining if requests were actually about suicide.”
OpenAI’s lack of testing also means ChatGPT-4o’s safety stats were entirely misleading. When ChatGPT-4o was put through the same testing regimen as ChatGPT-5, it successfully identified self-harm content just 73.5% of the time.
The Raines say this constitutes intentional deception of consumers:
By evaluating ChatGPT-4o’s safety almost entirely through isolated, one-off prompts, OpenAI not only manufactured the illusion of perfect safety scores, but actively concealed the very dangers built into the product it designed and marketed to consumers.
On the day Adam Raine died, CEO Sam Altman touted ChatGPT’s safety record during a TED2025 event, explaining, “The way we learn how to build safe systems is this iterative process of deploying them to the world: getting feedback while the stakes are relatively low.”
But the stakes weren’t relatively low for Adam — and they aren’t for other families, either. Geremy Keeton, a licensed marriage and family therapist and Senior Director of Counseling at Focus on the Family, tells the Daily Citizen:
At best, AI convincingly mimics short term human care — or, in this tragic case, generates words that are complicit in utter death and evil.
Parents, please be careful about how and when you allow your child to interact with AI chatbots. They are designed to keep your child engaged, and there’s no telling how the bot will react to any given request.
Young people like Adam Raine are unequipped to see through the illusion of humanity.
Business
Jaguar Land Rover suppliers ‘face bankruptcy’ due to hack crisis

The past two weeks have been dreadful for Jaguar Land Rover (JLR), and the crisis at the car maker shows no sign of coming to an end.
A cyber attack, which first came to light on 1 September, forced the manufacturer to shut down its computer systems and close production lines worldwide.
Its factories in Solihull, Halewood, and Wolverhampton are expected to remain idle until at least Wednesday, as the company continues to assess the damage.
JLR is thought to have lost at least £50m so far as a result of the stoppage. But experts say the most serious damage is being done to its network of suppliers, many of which are small and medium-sized businesses.
The government is now facing calls for a furlough scheme to be set up, to prevent widespread job losses.
David Bailey, professor of business economics at Aston University, told the BBC: “There’s anywhere up to a quarter of a million people in the supply chain for Jaguar Land Rover.
“So if there’s a knock-on effect from this closure, we could see companies going under and jobs being lost”.
Under normal circumstances, JLR would expect to build more than 1,000 vehicles a day, many of them at its UK plants in Solihull and Halewood. Engines are assembled at its Wolverhampton site. The company also has large car factories in China and Slovakia, as well as a smaller facility in India.
JLR said it closed down its IT networks deliberately in order to protect them from damage. However, because its production and parts supply systems are heavily automated, this meant cars simply could not be built.
Sales were also heavily disrupted, though workarounds have since been put in place to allow dealerships to operate.
Initially, the carmaker seemed relatively confident the issue could be resolved quickly.
Nearly two weeks on, it has become abundantly clear that restarting its computer systems has been a far from simple process. It has already admitted that some data may have been seen or stolen, and it has been working with the National Cyber Security Centre to investigate the incident.
Experts say the cost to JLR itself is likely to be between £5m and £10m per day, meaning it has already lost between £50m and £100m. However, the company made a pre-tax profit of £2.5bn in the year to the end of March, which implies it has the financial muscle to weather a crisis that lasts weeks rather than months.
JLR sits at the top of a pyramid of suppliers, many of which are highly dependent on the carmaker because it is their main customer.
They include a large number of small and medium-sized firms, which do not have the resources to cope with an extended interruption to their business.
“Some of them will go bust. I would not be at all surprised to see bankruptcies,” says Andy Palmer, a one-time senior executive at Nissan and former boss of Aston Martin.
He believes suppliers will have begun cutting their headcount dramatically in order to keep costs down.
Mr Palmer says: “You hold back in the first week or so of a shutdown. You bear those losses.
“But then, you go into the second week, more information becomes available – then you cut hard. So layoffs are either already happening, or are being planned.”
A boss at one smaller JLR supplier, who preferred not to be named, confirmed his firm had already laid off 40 people, nearly half of its workforce.
Meanwhile, other companies are continuing to tell their employees to remain at home, with the hours they are not working “banked”, to be offset against holidays or overtime at a later date.
There seems little expectation of a swift return to work.
One employee at a major supplier based in the West Midlands told the BBC they were not expecting to be back on the shop floor until 29 September. Hundreds of staff, they say, had been told to remain at home.
When automotive firms cut back, temporary workers brought in to cover busy periods are usually the first to go.
There is generally a reluctance to get rid of permanent staff, as they often have skills that are difficult to replace. But if cashflow dries up, they may have little choice.
Labour MP Liam Byrne, who chairs the Commons Business and Trade Committee, says this means government help is needed.
“What began in some online systems is now rippling through the supply chain, threatening a cashflow crunch that could turn a short-term shock into long-term harm”, he says.
“We cannot afford to see a cornerstone of our advanced manufacturing base weakened by events beyond its control”.
The trade union Unite has called for a furlough system to be set up to help automotive suppliers. This would involve the government subsidising workers’ pay packets while they are unable to do their jobs, taking the burden off their employers.
“Thousands of these workers in JLR’s supply chain now find their jobs are under an immediate threat because of the cyber attack,” says Unite general secretary, Sharon Graham.
“Ministers need to act fast and introduce a furlough scheme to ensure that vital jobs and skills are not lost while JLR and its supply chain get back on track.”
Business and Trade Minister Chris Bryant said: “We recognise the significant impact this incident has had on JLR and their suppliers, and I know this is a worrying time for those affected.
“I met with the chief executive of JLR yesterday to discuss the impact of the incident. We are also in daily contact with the company and our cyber experts about resolving this issue.”