AI Insights
Big tech has spent $155bn on AI this year. It’s about to spend hundreds of billions more

The US’s largest companies have spent 2025 locked in a race to outspend one another, lavishing $155bn on the development of artificial intelligence, more than the US government has spent on education, training, employment and social services so far in the 2025 fiscal year.
Based on the most recent financial disclosures of Silicon Valley’s biggest players, the race is about to accelerate to hundreds of billions in a single year.
Over the past two weeks, Meta, Microsoft, Amazon, and Alphabet, Google’s parent, have shared their quarterly public financial reports. Each disclosed that their year-to-date capital expenditure, a figure that refers to the money companies spend to acquire or upgrade tangible assets, already totals tens of billions.
Capex, as the term is abbreviated, is a proxy for technology companies’ spending on AI because the technology requires gargantuan investments in physical infrastructure, namely data centers, which require large amounts of power, water and expensive semiconductor chips. Google said during its most recent earnings call that its capital expenditure “primarily reflects investments in servers and data centers to support AI”.
Meta’s year-to-date capital expenditure amounted to $30.7bn, doubling the $15.2bn figure from the same time last year, per its earnings report. For the most recent quarter alone, the company spent $17bn on capital expenditures, also double the same period in 2024, $8.5bn. Alphabet reported nearly $40bn in capex to date for the first two quarters of the current fiscal year, and Amazon reported $55.7bn. Microsoft said it would spend more than $30bn in the current quarter to build out the data centers powering its AI services. Microsoft CFO Amy Hood said the current quarter’s capex would be at least 50% more than the outlay during the same period a year earlier and greater than the company’s record capital expenditures of $24.2bn in the quarter to June.
“We will continue to invest against the expansive opportunity ahead,” Hood said.
For the coming fiscal year, big tech’s total capital expenditure is slated to balloon enormously, surpassing the already eye-popping sums of the previous year. Microsoft plans to unload about $100bn on AI in the next fiscal year, CEO Satya Nadella said Wednesday. Meta plans to spend between $66bn and $72bn. Alphabet plans to spend $85bn, significantly higher than its previous estimation of $75bn. Amazon estimated that its 2025 expenditure would come to $100bn as it plows money into Amazon Web Services, which analysts now expect to amount to $118bn. In total, the four tech companies will spend more than $400bn on capex in the coming year, according to the Wall Street Journal.
The multibillion-dollar figures represent mammoth investments; the Journal points out that the total is larger than the European Union’s quarterly spending on defense. However, the tech giants can’t seem to spend enough for their investors. Microsoft, Google and Meta informed Wall Street analysts last quarter that their total capex would be higher than previously estimated. In all three cases, investors were thrilled, and shares in each company soared after their respective earnings calls. Microsoft’s market capitalization hit $4tn the day after its report.
Even Apple, the cagiest of the tech giants, signaled that it would boost its spending on AI in the coming year by a major amount, either via internal investments or acquisitions. The company’s quarterly capex rose to $3.46bn, up from $2.15bn during the same period last year. The iPhone maker reported blockbuster earnings Thursday, with rebounding iPhone sales and better-than-expected business in China, but it is still seen as lagging farthest behind on development and deployment of AI products among the tech giants.
Tim Cook, Apple’s CEO, said Thursday that the company was reallocating a “fair number” of employees to focus on artificial intelligence and that the “heart of our AI strategy” is to increase investments and “embed” AI across all of its devices and platforms. Cook refrained from disclosing exactly how much Apple is spending, however.
“We are significantly growing our investment, I’m not putting specific numbers behind that,” he said.
Smaller players are trying to keep up with the incumbents’ massive spending and capitalize on the gold rush. OpenAI announced at the end of the week of earnings that it had raised $8.3bn in investment, part of a planned $40bn round of funding, valuing the startup, whose ChatGPT chatbot kicked off the AI boom in 2022, at $300bn.
Anthropic makes its pitch to DC, warning China is ‘moving even faster’ on AI

Anthropic is on a mission this week to set itself apart in Washington, pitching the government’s adoption of artificial intelligence as a national security priority while still emphasizing transparency and basic guardrails on the technology’s rapid development.
The AI firm began making the rounds in Washington, D.C., on Monday, hosting a “Futures Forum” event before company co-founders Jack Clark and Dario Amodei head to Capitol Hill to meet with policymakers.
Anthropic is one of several leading AI firms seeking to expand its business with the federal government, and company leaders are framing the government’s adoption of its technology as a matter of national security.
“American companies like Anthropic and other labs are really pushing the frontiers of what’s possible with AI,” Kate Jensen, Anthropic’s head of sales and partnerships, said during Monday’s event. “But other countries, particularly China, are moving even faster than we are on adoption. They are integrating AI into government services, industrial processes and citizen interactions at massive scale. We cannot afford to develop the world’s most powerful technology and then be slow to deploy it.”
Because of this, Jensen said government adoption of AI is “particularly crucial.” According to the Anthropic executive, hundreds of thousands of government workers are already using Claude, but many ideas are “still left untapped.”
“AI provides enormous opportunity to make government more efficient, more responsive and more helpful to all Americans,” she said. “Our government is adopting Claude at an exciting pace, because you too see the paradigm shift that’s happening and realize how much this technology can help all of us.”
Her comments come as the Trump administration urges federal agencies to adopt automation tools and improve workflows. As part of a OneGov deal with the General Services Administration, Anthropic is offering its Claude for Enterprise and Claude for Government models to agencies for $1 for one year.
According to Jensen, the response to the $1 deal has been “overwhelming,” with dozens of agencies expressing interest in the offer. Anthropic’s industry competitors, including OpenAI and Google, have announced similar deals with the GSA to offer their models to the government at steeply discounted prices.
Beyond the GSA deal, Anthropic’s federal government push this year has led to its models being made available to U.S. national security customers and staff at the Lawrence Livermore National Lab.
Anthropic’s Claude for Government models have FedRAMP High certification and can be used by federal workers dealing with sensitive, unclassified work. The AI firm announced in April that it partnered with Palantir through the company’s FedStart program, which assists with FedRAMP compliance.
Jensen pointed specifically to Anthropic’s work at the Pentagon’s Chief Digital and AI Office. “We’re leveraging our awarded OTA [other transaction agreement] to scope pilots, we’re bringing our frontier technology and our technical teams to solve operational problems directly alongside the warfighter and to help us all move faster.”
However, as companies including Anthropic seize the opportunity to collaborate with the government, Amodei emphasized the need for “very basic guardrails.” Congress has grappled with how to regulate AI for months, but efforts have stalled amid fierce disagreements.
“We absolutely need to beat China and other authoritarian countries; that is why I’ve advocated for the export controls. But we need to not destroy ourselves in the process,” Amodei said during his fireside chat with Clark. “The thing we’ve always advocated for is basic transparency requirements around models. We always run tests on the models. We reveal the test to the world. We make a point of them, we’re trying to see ahead to the dangers that we present in the future.”
The view is notably different from some of Anthropic’s competitors, which are instead pushing for light-touch regulation of the technology. Amodei, on the other hand, said a “basic transparency requirement” would not hamper innovation, as some other companies have suggested.
California bill to regulate high-risk AI fails to advance in state legislature

A California artificial intelligence bill addressing the use of automated decision systems in hiring and other consequential matters failed to advance in the state assembly during the final hours of the 2025 legislative session Friday.
The bill (AB 1018) would have required companies and government agencies to notify individuals when automated decision systems were used for “consequential decisions,” such as employment, housing, health care, and financial services.
Democratic assemblymember Rebecca Bauer-Kahan, the bill’s author, paused voting on the bill until next year to allow for “additional stakeholder engagement and productive conversations with the Governor’s office,” according to a Friday press release from her office.
“This pause reflects our commitment to getting this critical legislation right, not a retreat from our responsibility to protect Californians,” Bauer-Kahan said in a statement. “We remain committed to advancing thoughtful protections against algorithmic discrimination.”
The Business Software Alliance, a global trade association that represents large technology companies and led an opposition campaign against the bill, argued that the legislation would have unfairly forced companies using AI systems “into an untested audit regime” that risked discouraging responsible adoption of AI tools throughout the state.
“Setting clear, workable, and consistent expectations for high-risk uses of AI ultimately furthers the adoption of technology and more widely spreads its benefits,” Craig Albright, senior vice president at BSA, told StateScoop in a written statement. “BSA believes there is a path forward that sets obligations for companies based on their different roles within the AI value chain and better focuses legislation to ensure that everyday and low-risk uses of AI are not subjected to a vague and confusing regulatory regime.”
Since it was introduced in February, the bill was amended to narrow when AI audits are required, clarify what kinds of systems and “high-stakes” decisions are covered, exempt low-risk tools like spam filters, and add protections for trade secrets while limiting what audit details must be made public. It also refined how lawsuits and appeals work and aligned the bill more clearly with existing civil rights laws.
AB 1018’s failure comes on the heels of the Colorado state legislature voting to delay implementing the Colorado AI Act, the state’s high-risk artificial intelligence legislation, until the end of June next year, five months after the law was supposed to go into effect. Similar to California’s AI bill, Colorado’s Artificial Intelligence Act would also regulate high-risk AI systems in areas like hiring, lending, housing, insurance and government services.
The Despair of the Teacher in the Age of Artificial Intelligence – Commentary Magazine

There may still be a few sheltered analog folk out there who pronounce the abbreviation for Artificial Intelligence, AI, like the name of the steak sauce, mistaking the “I” for a “1,” but the rest of us are very much aware that it is already playing a role in every…