When the members of the US House of Representatives voted last month for President Trump’s budget reconciliation legislation, aka the one big beautiful bill, many were not aware that the nearly 900-page document contained a 450-word section with an unusual provision: a moratorium that would prevent states from enforcing regulations or introducing new laws on AI systems for 10 years.
But once it was noticed, the outcry was swift. “A lot of people were just caught by surprise,” said Democratic Sen. Maria Cantwell of Washington, ranking member of the Senate Committee on Commerce, Science, and Transportation. “When I voted for the one big beautiful bill, I didn’t know about this clause,” Republican Rep. Marjorie Taylor Greene of Georgia lamented.
The surprise moratorium sparked strong opposition, uniting an unlikely group across the political spectrum, from Republican and Democratic lawmakers to child safety advocates, civil rights groups, and … right-wing firebrand commentator Steve Bannon. Many saw the moratorium, pushed by some of the top AI developers, as a Trojan horse that would violate states’ rights and prevent them from addressing the existing and future dangers of artificial intelligence.
In the absence of overarching federal regulation on AI and other emerging information technologies, some members of Congress and several states have stepped in, trying to fill the gaps with laws that protect against specific harms like AI-generated pornography, deepfakes designed to mislead voters and consumers, spam phone calls, and housing rents set by algorithms. Examples include the Take It Down Act (sponsored by Republican Sen. Ted Cruz of Texas and prohibiting “the nonconsensual online publication of intimate visual depictions of individuals”), the Kids Online Safety Act (sponsored by Democratic Sen. Richard Blumenthal of Connecticut and aimed at protecting “the safety of children on the internet”), and the ELVIS Act, a Tennessee law that expands the state’s Protection of Personal Rights law “to include protections for songwriters, performers, and music industry professionals’ voice from the misuse of artificial intelligence (AI).”
State legislators and attorneys general of both parties said the moratorium would not only have decimated the progress states have made but would also have left them unable to protect their constituents over the next decade, when “AI will raise some of the most important public policy questions of our time,” as 260 state legislators wrote in an open letter. Similarly, 40 state attorneys general wrote that “[t]he impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI.”
Objections intensified as the Senate struggled to pass a reconciliation bill this week, and on Tuesday it struck down the moratorium by a vote of 99 to 1. The House just voted to pass the moratorium-free Senate version of the one big beautiful bill.
For the moratorium’s proponents, primarily Big Tech companies and foreign policy think tanks, such an extreme and evidently unpopular measure was justified. A patchwork of state laws on AI, they say, is costly and time-consuming to navigate, an unnecessary burden that stifles innovation and slows progress. And the risk of losing AI dominance to China is more dangerous still: an “existential threat,” with implications for national security and even the future of the world.
“While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward,” Chris Lehane, OpenAI’s chief global affairs officer, wrote on LinkedIn last week.
Concerns about the speed of progress and competition with China now dominate thinking on policy and strategy among Big Tech companies, business groups, and venture capitalists.
But that view is also increasingly coming under scrutiny. Critics, for example, question the claim that China is steaming ahead unencumbered by regulations. In reality, they argue, the AI regulatory environment in China is much more restrictive than in the United States. The underlying premise has also come under question: Are the world’s biggest powers really locked in an AI race for global dominance? Does the race have to be a zero-sum game? Is this narrative even real, or is it a dangerous fiction?
How do regulatory environments in the United States and China actually compare?

In his post, Lehane of OpenAI also wrote: “America’s top AI competitor, the PRC [People’s Republic of China], is moving full-steam ahead with few if any restrictions and significant government backing.”
It’s a bold claim. But not everyone is buying it.
“It’s a complete myth,” says Gary Marcus, AI scholar and professor emeritus at NYU. “China has more regulation than we do. The reality is completely different from the talking points that you’ll hear from certain venture capitalists and so on.”
China began developing AI regulations as early as 2022. These administrative regulations cover many areas of emerging information technology, including generative AI, deepfakes and synthetic media, recommendation algorithms, and the curation algorithms used by search engines and social media. Moreover, firms in China that develop and train AI models must submit technical details of their models to a central registry.
“I will tell you, from having close contact with compliance lawyers in big Chinese tech firms, that their regulatory environment is very intensive,” says Gilad Abiri, assistant professor of law at Peking University School of Transnational Law and Affiliate Fellow at the Information Society Project at Yale Law School.
In fact, Chinese AI developers are responsible for a lot of compliance work. “In that regard, it’s already now a very different world the way they operate,” Abiri says. “The US would have to do very dramatic regulations, including passing data privacy law, to get anywhere near what Chinese firms have to comply with.” Public rollouts of various large language models have been delayed in China, pending regulatory approval. Ernie Bot, an AI chatbot developed by the Chinese tech company Baidu, saw a six-month delay.
The Chinese are also aware of the need to stay competitive, and they modify regulations to keep from crippling their companies in the global competition, Abiri says. For example, China initially decided against making generative-AI chatbots available to the public, but it eventually changed course. Now chatbots such as DeepSeek’s, which works like OpenAI’s ChatGPT, exist. Because DeepSeek’s models are open source, they can be run locally on a user’s computer rather than on cloud-based servers, and anyone can use them freely to develop new applications.
American tech developers, however, have argued that any delay caused by compliance work could leave the United States lagging behind China in AI development. But critics ask just how much that actually matters. “Lag only matters if there is a competition, and there is a competition that can be won decisively, and that competitive advantage can be kept in some way,” Abiri says.
Under current circumstances, in which large language models are a commodity and everyone is working from the same playbook, it is not clear how anyone can gain anything beyond a very short-lived advantage, Marcus says: “This model is better than that one for two weeks, or this other one’s better for four.”
An imaginary race?

The AI race narrative holds that countries must engage in zero-sum thinking to control the future by out-competing other countries. But Tiffany Li, associate professor of law at the University of San Francisco School of Law, argues that this narrative is not accurate: its underlying assumptions are baseless, because it is not yet known what society stands to gain from AI. “The future of artificial intelligence is not a zero-sum game—or, at least, it does not have to be.”
The evidence for the arms-race AI narrative is weak, and it is mostly promoted in the West, often by actors who stand to benefit directly from it, according to Seán Ó hÉigeartaigh, program director at the Centre for the Future of Intelligence at the University of Cambridge, who has studied Chinese AI governance since 2017. “By framing AI as a winner-takes-all technology, proponents create a powerful imperative for action while limiting critical examination of the overall premise,” Ó hÉigeartaigh says. Framing AI development as a high-stakes race and an existential security threat to a given nation is used to justify extraordinary measures that bypass safety regulations and oversight and that reduce legal costs for Big Tech.
“All the corporations wanted this so that they could save money, but the fact is the federal government has not made a coherent AI policy, and the states are trying to protect their citizens, and they should be able to do that,” Marcus says. “The fact that [the Senate] struck this crazy provision is a huge victory for humanity over corporate lobbying.”
The vacuum of federal regulation

The proposed moratorium on state-level regulation of AI was, in a way, a non-act. Even those who don’t believe state-level regulation is always suitable for borderless technologies like AI still expect the United States to put a national regulatory system in place.
“I’m optimistic about the potential for AI, and I take a back seat to no one about believing that we are, in fact, in a global race, that there are important national security implications,” Satya Thallam, senior advisor at Americans for Responsible Innovation in Washington, DC, said in a virtual roundtable. The moratorium proposal, however, would not have replaced what the states are doing with a uniform national framework; it would have replaced state-level AI regulations with nothing, he said.
Others agree. “There’s a difference between writing a federal set of guidelines around AI that would preempt states, and what this is, which is a federal moratorium on any state legislation without replacing it with anything,” Adam Kovacevich, CEO of the tech policy group Chamber of Progress and former Google US policy chief, told ABC News.
“To take the step to say we are not doing anything, and we’re going to prevent the states from doing anything is, as far as I know, unprecedented,” Larry Norden, vice president of the Elections and Government Program at the Brennan Center in New York, told NBC News. “Given the stakes with this technology, it’s really dangerous.”
Some observers are optimistic that Americans will not find themselves in a completely unregulated environment in the future, given that people already understand the high costs of not regulating social media. Also, many actors outside the United States want to regulate AI, among them two digital empires: China and the European Union. “The bottom line,” Abiri says, “is that this is not a competition that can be won. It’s a competition that everyone can lose, fundamentally so, if we do not regulate this.”
In fact, the odds of a federal-level regulatory framework emerging someday may be higher now that the moratorium has failed and the “patchwork” problem remains, according to Gregory C. Allen, senior adviser with the Wadhwani AI Center at the Center for Strategic and International Studies. Then, “the government can say with a straight face, we don’t need state action, because here is the federal action.”
Editor’s note: This piece was produced with support from the Future of Life Institute.