The U.S. Senate voted not to interfere with state artificial intelligence regulations, defeating a 10-year moratorium on such laws that had earlier cleared the House and alarmed California officials.
The 99-1 vote to strip the moratorium from the president’s “big, beautiful” budget bill followed opposition from a handful of Republicans. Dissenting from GOP colleagues, they argued the measure would allow the proliferation of highly realistic, AI-enabled “deepfake” impersonation videos, endanger jobs and infringe on the rights of state governments.
The small rebellion was enough to seal the moratorium’s fate, given Republicans’ slim majority in the Senate and united opposition from Democrats.
“Federalism is preserved and humans are safe for now,” Rep. Marjorie Taylor Greene of Georgia, a Republican critic of the moratorium in the House, wrote following the vote.
Key to the moratorium’s defeat was pressure from a wide range of advocacy groups, said Ben Winters of the Consumer Federation of America. Organizations ranging from the Teamsters to the NAACP to the National Association of Evangelicals opposed the measure.
“The tech lobby attempted to manipulate the legislative process, but they were soundly defeated,” Winters wrote. “This outcome sends a clear signal that special interest groups cannot simply override public concerns with backroom deals, and that everyday people can successfully challenge corporate overreach.”
California state Sen. Josh Becker, a Democrat from Menlo Park who authored an AI disclosure law, said he was encouraged that Congress “came to its senses.”
“This decision preserves California’s ability to lead on consumer privacy and protections while cultivating the responsible development of AI that serves the public interest,” he wrote in an email message. “In the absence of a strong federal standard, states must retain the flexibility to advance AI in ways that do not compromise safety, privacy, or the rights of our residents.”
Prior to its defeat, the moratorium had already been watered down in the Senate, reduced in duration to five years and in scope to states that wanted access to $500 million in new federal funding for AI infrastructure and broadband deployment.
The battle over the measure underlined the bipartisan nature of concern around AI.
“This is how you take on big tech!” Arkansas Gov. Sarah Huckabee Sanders, a Republican, wrote on social media after the Senate vote.
The centerpiece of the so-called “One Big Beautiful Bill” in tech policy circles was the “AI moratorium,” a temporary federal limit on state regulation of artificial intelligence. The loss of the AI moratorium, stripped from the bill in the Senate, elicited howls of derision from AI-focused policy experts such as the indefatigable Adam Thierer. But the moratorium debate may have distracted from an important principle: Regulation should be technology neutral. The concept of AI regulation is essentially broken, and neither states nor Congress should regulate AI as such.
Nothing is straightforward. The AI moratorium was not a moratorium at all. Contorted to fit into a budget reconciliation bill, it was meant to disincentivize regulation by withholding federal money for 10 years from states that are “limiting, restricting, or otherwise regulating artificial intelligence models.”
It is economically unwise for states to regulate products and services offered nationally or globally. When they do so unevenly, the likely result is a thicket of regulations and lost innovation, with compliance costs rising out of proportion to protections that more efficient laws could achieve.
But I’m ordinarily a stout defender of the decentralized system created by our Constitution. I believe it is politically unwise to move power to remote levels of government. With Geoff Manne, I’ve written about avoiding burdensome state regulation through contracts rather than preemption of state law. So before the House AI Task Force’s meeting to consider federalism and preemption, I was in the “mushy middle.”
With the moratorium gone, federal AI regulation would justify preempting the states, giving us efficient regulation, right? Nothing is straightforward.
Nobody—including at the federal level—actually knows what they are trying to regulate. Take a look at the definition of AI in the Colorado legislation, famously signed yet lamented by tech-savvy governor Jared Polis. In Colorado, “Artificial Intelligence System” means
any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.
Try excluding an ordinary light switch from that definition without wrestling over semantics. I’m struck by the meaningless dualities. Take “explicit or implicit objective.” Is there a third category? Or are these words meant to conjure some unidentified actor’s intent? See also “physical or virtual environments.” (Do you want to change all four tires? No, just the front two and back two.) Someone thought the extra words would add meaning, but they actually confess its absence.
Defining AI is fraught because “artificial intelligence” is a marketing term, not a technology. For policymaking purposes, it’s an “anti-concept.” When “AI” took flight in the media, countless tech companies put it on their websites and in their sales pitches. That doesn’t mean that AI is an identifiable, regulatable thing.
So pieces of legislation like those in Colorado, New York, and Texas use word salads to regulate anything that amounts to computer-aided decision-making. Doing so will absorb countless hours as technologists and businesspeople consult with lawyers to parse statutes rather than building better products. And just think of the costs and complexities—and the abuses—when these laws turn out to regulate all decision-making that involves computers.
Technologies and marketing terms change rapidly. Human interests don’t. That’s why technology-neutral regulation is the best form—regulation that punishes bad outcomes no matter the means. Even before this age of activist legislatures, the law already barred killing people, whether with a hammer, an automobile, an automated threshing machine, or some machine that runs “AI.”
The Colorado legislation is a gaudy, complex, technology-specific effort to prevent wrongful discrimination. That is better done by barring discrimination as such, a complex problem even without the AI overlay. New York’s legislation is meant to help ensure that AI doesn’t kill people—a tiny but grossly hyped possibility. Delaying the adoption of AI through regulations like New York’s will probably kill more people (statistically, by denying life-extending innovations) than the regulations save.
Texas—well, who knows what the Texas bill is trying to do.
The demise of the AI moratorium will incline some to think that federal AI regulation is the path forward because it may preempt unwise state regulation. But federal regulation would not be any better. It would be worse in an important respect—slower and less likely to change with experience.
The principle of technology-neutral regulation suggests that there should not be any AI regulation at all. Rather, the law should address wrongs as wrongs no matter what instruments or technologies have a role in causing them.
An artificial intelligence furore that’s consuming Singapore’s academic community reveals how we’ve lost the plot over the role the hyped-up technology should play in higher education.
A student at Nanyang Technological University said in a Reddit post that she had used a digital tool to alphabetize the citations in a term paper. When the citations were flagged for typos, she was accused of breaking the assignment’s rules on the use of generative AI. The dispute snowballed when two more students came forward with similar complaints, one alleging that she was penalized for using ChatGPT to help with initial research even though, she says, she did not use the bot to draft the essay.
The school, which publicly states it embraces AI for learning, initially defended its zero-tolerance stance in this case in statements to local media. But internet users rallied around the original Reddit poster and rejoiced at an update that she won an appeal to rid her transcript of the ‘academic fraud’ label.
It may sound like a run-of-the-mill university dispute. But there’s a reason the saga went so viral, garnering thousands of upvotes and heated opinions from online commentators. It has laid bare the strange new world we’ve found ourselves in, as students and faculty are rushing to keep pace with how AI should or shouldn’t be used in universities.
It’s a global conundrum, but the debate has especially roiled Asia. Stereotypes of math nerds and tiger moms aside, a rigorous focus on tertiary studies is often credited for the region’s dramatic economic rise. The importance of education—and long hours of studying—is instilled from the earliest age. So how does this change in the AI era? The reality is that nobody has the answer yet.
Despite promises from ed-tech leaders that we’re on the cusp of ‘the biggest positive transformation that education has ever seen,’ the data on academic outcomes hasn’t kept pace with the technology’s adoption. There are no long-term studies on how AI tools impact learning and cognitive functions—and viral headlines that it could make us lazy and dumb only add to the anxiety. Meanwhile, the race to not be left behind in implementing the technology risks turning an entire generation of developing minds into guinea pigs.
For educators navigating this moment, the answer is not to turn a blind eye. Even if some teachers discourage the use of AI, it has become all but unavoidable for many scholars doing research in the internet age.
Most Google searches now lead with automated summaries. Scrolling through these should not count as academic dishonesty. An informal survey of 500 Singaporean students from secondary school through university conducted by a local news outlet this year found that 84% were using products like ChatGPT for homework on a weekly basis.
In China, many universities are turning to AI cheating detectors, even though the technology is imperfect. Some students report on social media that they have to dumb down their writing to pass these checks, or shell out for detection tools themselves to make sure their papers clear them before submission.
It doesn’t have to be this way. The chaotic moment of transition has placed a new onus on educators to adapt and to focus on the learning process as much as the final result, Yeow Meng Chee, the provost and chief academic and innovation officer at the Singapore University of Technology and Design, tells me. That does not mean villainizing AI, but treating it as a tool and ensuring a student understands how they arrived at their final conclusion even if they used the technology. The process also helps ensure that AI outputs, which remain imperfect and prone to hallucinations (or typos), are checked and understood.
Ultimately, professors who make the biggest difference aren’t those who improve exam scores but who build trust, teach empathy and instil confidence in students to solve complex problems. The most important parts of learning still can’t be optimized by a machine.
The Singapore saga shows how on edge everyone is; it isn’t even clear whether a reference-sorting website counts as a generative AI tool. It also exposed another irony: saving time on a tedious task would likely be welcomed when the student enters the workforce, if the technology hasn’t already taken her entry-level job.
AI literacy is fast becoming a must-have in the labour market, and universities that ignore it would do a disservice to the cohorts of students entering the real world.
Hungarian researchers have used AI-inspired mathematical models to explore how human memory works. Their study shows that surprising experiences play a uniquely important role in learning, challenging older theories about what the brain should remember.
Surprising experiences play a crucial role in learning, say researchers from Hungary’s HUN-REN Wigner Research Centre and Germany’s Max Planck Institute. Using mathematical models developed in artificial intelligence research, they found that unusual events help the brain update its understanding of the world more efficiently than routine experiences.
The findings, published in Nature Reviews Psychology, challenge the traditional view that rare or unexpected memories are less ‘worth storing’. Instead, the study argues that it is precisely these moments—those that deviate just enough from the norm—that serve as anchors for deeper learning.
‘Memory isn’t flawless. Sometimes, we remember things that never actually happened,’ the researchers wrote in a statement by the Hungarian Research Network (HUN-REN). But these recurring ‘mistakes’ can actually help uncover the principles that govern how memory works—and why certain details stick while others fade.
The team, led by Gergő Orbán of the HUN-REN Wigner Centre, and working with Dávid Gergely Nagy and Charley Wu in Tübingen, applied concepts from machine learning to better understand how different human memory systems interact. Instead of simply cataloguing memory errors, their goal was to uncover the logic behind them—specifically how they relate to learning and data compression strategies used by the brain.
‘Information theory helps us understand what’s worth remembering and what’s better forgotten,’ the researchers explained. Traditional information theory might suggest that very rare events aren’t useful to remember—but human memory doesn’t behave this way. On the contrary, people tend to retain surprising experiences more vividly.
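To make that tension concrete, here is a minimal sketch, my own illustration rather than anything drawn from the paper, comparing two made-up events: a routine one and a rare, surprising one. The event names and probabilities are hypothetical; the point is that a compressor weighting events by how often they occur assigns little importance to the rare event, even though each occurrence of it carries far more information.

```python
# Illustrative sketch only, not the authors' model. It contrasts per-event
# surprisal with the expected contribution to average code length, using
# hypothetical probabilities for a routine and a rare event.
import math

def surprisal_bits(p: float) -> float:
    """Information content of a single event with probability p, in bits."""
    return -math.log2(p)

events = {
    "routine commute": 0.9,   # hypothetical probability of an ordinary day
    "rare surprise": 0.01,    # hypothetical probability of an unusual event
}

for name, p in events.items():
    s = surprisal_bits(p)
    # What an average-case compressor optimizes: the event's expected share
    # of the total code length, probability times surprisal.
    expected = p * s
    print(f"{name}: surprisal = {s:.2f} bits, expected contribution = {expected:.3f} bits")

# The routine event dominates the expected code length (about 0.14 vs 0.07 bits),
# so frequency-weighted "traditional" reasoning says the rare event matters little.
# Per occurrence, though, the rare event carries far more information
# (about 6.6 vs 0.15 bits), which is the intuition behind treating surprise
# as a trigger for updating one's model of the world.
```

On this reading, the researchers' point is that memory behaves less like an average-case compressor and more like a system that gives high-surprisal observations priority when it updates what it knows.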
The authors conclude that these standout moments play a crucial role in updating what we know. While routine memories help us predict future outcomes, surprising events act as catalysts that refresh our knowledge and adjust our expectations.
In practical terms, the findings also offer valuable insight into how we learn—or teach—most effectively. The researchers argue that machine learning models don’t just help us understand what we’ll remember or forget, but also guide us in optimizing when to repeat a concept and when it’s time to move on to something new.