
How the striking of a GOP regulatory ban will affect the global artificial intelligence race

The House just voted to pass the Senate version of the one big beautiful bill without a moratorium that would prevent states from enforcing regulations or introducing new laws on AI systems for 10 years.
Image: Crglenn, CC BY-SA 4.0, via Wikimedia Commons

When the members of the US House of Representatives voted last month for President Trump’s budget reconciliation legislation, aka the one big beautiful bill, many were not aware that the nearly 900-page document contained a 450-word section with an unusual provision: a moratorium that would prevent states from enforcing regulations or introducing new laws on AI systems for 10 years.

But once it was noticed, the outcry was swift. “A lot of people were just caught by surprise,” said Democratic Sen. Maria Cantwell of Washington, ranking member of the Senate Committee on Commerce, Science, and Transportation. “When I voted for the one big beautiful bill, I didn’t know about this clause,” Georgia representative Marjorie Taylor Greene lamented.

The surprise moratorium sparked strong opposition, uniting an unlikely group across the political spectrum—from Republican and Democratic lawmakers to child safety advocates and civil rights groups, and … right-wing firebrand commentator Steve Bannon. Many saw the moratorium, pushed by some of the top AI developers, as a Trojan horse that would violate states’ rights and prevent them from addressing the existing and future dangers of artificial intelligence.

In the absence of overarching federal regulation on AI and other emerging information technologies, some members of Congress and several states have stepped in, trying to fill the gaps with laws to protect against specific harms like AI-generated pornography, deepfakes designed to mislead voters and consumers, spam phone calls, and housing rents set by algorithms. Examples include the Take It Down Act (sponsored by Republican Sen. Ted Cruz of Texas and prohibiting “the nonconsensual online publication of intimate visual depictions of individuals”), the Kids Online Safety Act (sponsored by Democratic Sen. Richard Blumenthal of Connecticut and aimed at protecting “the safety of children on the internet”), and the ELVIS Act, a Tennessee law that expands the state’s Protection of Personal Rights law “to include protections for songwriters, performers, and music industry professionals’ voice from the misuse of artificial intelligence (AI).”

State legislators and attorneys general of both parties said the moratorium would not only have decimated the progress states have made but would also have left them unable to protect their constituents over the next decade, when “AI will raise some of the most important public policy questions of our time,” as 260 state legislators wrote in an open letter. Similarly, 40 state attorneys general wrote that “[t]he impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI.”

Objections intensified as the Senate struggled to pass the reconciliation bill this week, and on Tuesday it stripped the moratorium from the bill by a 99-to-1 vote. The House just voted to pass the moratorium-free Senate version of the one big beautiful bill.

For the proponents of the moratorium, primarily Big Tech companies and foreign policy think tanks, there is a justification that warrants such an extreme and evidently unpopular measure. A patchwork of state laws on AI is, they say, costly and time-consuming to navigate, an unnecessary burden that stifles innovation and slows progress. Even more dangerous, they argue, is the risk of losing AI dominance to China: an “existential threat,” with implications for national security and even the future of the world.

“While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward,” Chris Lehane, OpenAI’s chief global affairs officer, wrote on LinkedIn last week.

Concerns about the speed of progress and competition with China have become the dominant view on policy and strategy among Big Tech companies, business groups, and venture capitalists.

But that view is also increasingly coming under scrutiny. Critics, for example, question the claim that China is steaming ahead unencumbered by regulations. In reality, they argue, the AI regulatory environment in China is much more restrictive than in the United States. The underlying premise itself has also come under question: Are the world’s biggest powers really locked in an AI race for global dominance? Does the race have to be a zero-sum game? Is this narrative even real, or a dangerous fiction?

How do regulatory environments in the United States and China actually compare? In his post, Lehane of OpenAI also wrote: “America’s top AI competitor, the PRC [People’s Republic of China], is moving full-steam ahead with few if any restrictions and significant government backing.”

It’s a bold claim. But not everyone is buying it.

“It’s a complete myth,” says Gary Marcus, AI scholar and professor emeritus at NYU. “China has more regulation than we do. The reality is completely different from the talking points that you’ll hear from certain venture capitalists and so on.”

China began developing AI regulations in 2023. These administrative regulations cover many areas of emerging information technology, including generative AI, deepfakes and synthetic media, recommendation algorithms, and the curation algorithms used by search engines and social media. Moreover, in China, firms that develop and train AI models must submit technical details of the model to a central registry.

“I will tell you, from having close contact with compliance lawyers in big Chinese tech firms, that their regulatory environment is very intensive,” says Gilad Abiri, assistant professor of law at Peking University School of Transnational Law and Affiliate Fellow at the Information Society Project at Yale Law School.

In fact, Chinese AI developers shoulder a great deal of compliance work. “In that regard, it’s already now a very different world the way they operate,” Abiri says. “The US would have to do very dramatic regulations, including passing data privacy law, to get anywhere near what Chinese firms have to comply with.” Public rollouts of various large language models have been delayed in China, pending regulatory approvals. Ernie Bot, an AI chatbot developed by the Chinese tech company Baidu, saw a six-month delay.

The Chinese are also aware of the need to stay competitive, and they modify regulations to keep from crippling their companies in the global competition, Abiri says. For example, China initially decided against making generative-AI chatbots available to the public but eventually changed course. Now chatbots such as DeepSeek’s, which works much like OpenAI’s ChatGPT, exist. And because DeepSeek’s models are open source, they can be run locally on a user’s computer rather than on cloud-based servers, and anyone is free to use them to develop new applications.

American tech developers, however, have argued that any delay caused by compliance work could leave the United States lagging behind China in AI development. But critics ask just how much that actually matters. “Lag only matters if there is a competition, and there is a competition that can be won decisively, and that competitive advantage can be kept in some way,” Abiri says.

Under current circumstances, in which large language models are a commodity and everyone is playing from the same playbook, it is not clear how anyone can gain anything beyond a very short-lived advantage, Marcus says: “This model is better than that one for two weeks, or this other one’s better for four.”

An imaginary race?

The AI race narrative holds that countries must engage in zero-sum thinking to control the future by out-competing one another. But Tiffany Li, associate professor of law at the University of San Francisco School of Law, argues that this narrative is not accurate; its underlying assumptions are baseless, because it is not yet known what society stands to gain from AI: “The future of artificial intelligence is not a zero-sum game—or, at least, it does not have to be.”

The evidence for the arms-race AI narrative is weak, and it is mostly being promoted in the West, often by actors who stand to benefit directly from it, according to Seán S. ÓhÉigeartaigh, program director at the Centre for the Future of Intelligence at the University of Cambridge, who has been studying Chinese AI governance since 2017. “By framing AI as a winner-takes-all technology, proponents create a powerful imperative for action while limiting critical examination of the overall premise,” ÓhÉigeartaigh says. Framing AI development as a high-stakes race and an existential security threat to a given nation makes it possible to justify extraordinary measures that bypass safety regulations and oversight and cut legal costs for Big Tech.

“All the corporations wanted this so that they could save money, but the fact is the federal government has not made a coherent AI policy, and the states are trying to protect their citizens, and they should be able to do that,” Marcus says. “The fact that [the Senate] struck this crazy provision is a huge victory for humanity over corporate lobbying.”

The vacuum of federal regulation

The proposed moratorium on state-level regulation of AI was, in a way, a non-act. Even those who don’t believe state-level regulation is always suitable for borderless technologies like AI still expect the United States to put a national regulatory system in place.

“I’m optimistic about the potential for AI, and I take a back seat to no one about believing that we are, in fact, in a global race, that there are important national security implications,” Satya Thallam, senior advisor at Americans for Responsible Innovation in Washington, DC, said in a virtual roundtable. The moratorium proposal, however, would not have replaced what the states are doing with a uniform national framework; it would have replaced state-level AI regulations with nothing, he said.

Others agree. “There’s a difference between writing a federal set of guidelines around AI that would preempt states, and what this is, which is a federal moratorium on any state legislation without replacing it with anything,” Adam Kovacevich, CEO of the tech policy group Chamber of Progress and former Google US policy chief, told ABC News.

“To take the step to say we are not doing anything, and we’re going to prevent the states from doing anything is, as far as I know, unprecedented,” Larry Norden, vice president of the Elections and Government Program at the Brennan Center in New York, told NBC News. “Given the stakes with this technology, it’s really dangerous.”

Some observers are optimistic that Americans will not find themselves in a completely unregulated environment, given that people already understand the high costs of failing to regulate social media. Many actors outside the United States also want to regulate AI, among them two digital empires: China and the European Union. “The bottom line,” Abiri says, “is that this is not a competition that can be won. It’s a competition that everyone can lose, fundamentally so, if we do not regulate this.”

In fact, the odds of a federal-level regulatory framework emerging someday may be higher now that the moratorium has failed and the “patchwork” problem remains, according to Gregory C. Allen, senior adviser with the Wadhwani AI Center. Then “the government can say with a straight face, we don’t need state action, because here is the federal action.”

Editor’s note: This piece was produced with support from the Future of Life Institute.




Indonesian volcano Mount Lewotobi Laki-laki spews massive ash cloud as it erupts again


Indonesia’s Mount Lewotobi Laki-laki has begun erupting again – at one point shooting an ash cloud 18km (11mi) into the sky – as residents flee their homes once more.

There have been no reports of casualties since Monday morning, when the volcano on the island of Flores began spewing ash and lava again. Authorities have placed it on the highest alert level since an earlier round of eruptions three weeks ago.

At least 24 flights to and from the neighbouring resort island of Bali were cancelled on Monday, though some flights had resumed by Tuesday morning.

The initial column of hot clouds that rose at 11:05 (03:05 GMT) Monday was the volcano’s highest since November, said geology agency chief Muhammad Wafid.

“An eruption of that size certainly carries a higher potential for danger, including its impact on aviation,” Wafid told The Associated Press.

Monday’s eruption, which was accompanied by a thunderous roar, led authorities to enlarge the exclusion zone to a 7km radius from the central vent. They also warned of potential lahar floods – a type of mud or debris flow of volcanic materials – if heavy rain occurs.

The twin-peaked volcano erupted again at 19:30 on Monday, sending ash clouds and lava up to 13km into the air. It erupted a third time at 05:53 on Tuesday at a reduced intensity.

Videos shared overnight show glowing red lava spurting from the volcano’s peaks as residents get into cars and buses to flee.

More than 4,000 people have been evacuated from the area so far, according to the local disaster management agency.

Residents who have stayed put are facing a shortage of water, food and masks, local authorities say.

“As the eruption continues, with several secondary explosions and ash clouds drifting westward and northward, the affected communities who have not been relocated… require focused emergency response efforts,” says Paulus Sony Sang Tukan, who leads Pululera village, about 8km from Lewotobi Laki-laki.

“Water is still available, but there’s concern about its cleanliness and whether it has been contaminated, since our entire area was blanketed in thick volcanic ash during yesterday’s [eruptions],” he said.

Indonesia sits on the Pacific “Ring of Fire” where tectonic plates collide, causing frequent volcanic activity as well as earthquakes.

Lewotobi Laki-laki has erupted multiple times this year – no casualties have been reported so far.

However, an eruption last November killed at least ten people and forced thousands to flee.

Laki-laki, which means “man” in Indonesian, is twinned with the calmer but taller 1,703m-high Perempuan, named after the Indonesian word for “woman”.

Additional reporting by Eliazar Ballo in Kupang.




What makes a good AI prompt? Here are 4 expert tips


“And do you work well with AI?”

As ChatGPT, Copilot and other generative artificial intelligence (AI) tools become part of everyday workflows, more companies are looking for employees who can answer “yes” to this question. In other words, people who can prompt effectively, think with AI, and use it to boost productivity.

In fact, in a growing number of roles, being “AI fluent” is quickly becoming as important as being proficient in office software once was.

But we’ve all had that moment when we’ve asked an AI chatbot a question and received what feels like the most generic, surface-level answer. The problem isn’t the AI – you just haven’t given it enough to work with.

Think of it this way. During training, the AI will have “read” virtually everything on the internet. But because it makes predictions, it will give you the most probable, most common response. Without specific guidance, it’s like walking into a restaurant and asking for something good. You’ll likely get the chicken.

Your solution lies in understanding that AI systems excel at adapting to context, but you have to provide it. So how exactly do you do that?

Crafting better prompts

You may have heard the term “prompt engineering”. It might sound like you need to design some kind of technical script to get results.

But today’s chatbots are great at human conversation. The format of your prompt is not that important. The content is.

To get the most out of your AI conversations, it’s important that you convey a few basics about what you want, and how you want it. Our approach follows the acronym CATS – context, angle, task and style. (A short sketch of how the four parts combine appears after the descriptions below.)

Context means providing the setting and background information the AI needs. Instead of asking “How do I write a proposal?” try “I’m a nonprofit director writing a grant proposal to a foundation that funds environmental education programs for urban schools”. Upload relevant documents, explain your constraints, and describe your specific situation.

Angle (or attitude) leverages AI’s strength in role-playing and perspective-taking. Rather than getting a neutral response, specify the attitude you want. For example, “Act as a critical peer reviewer and identify weaknesses in my argument” or “Take the perspective of a supportive mentor helping me improve this draft”.

Task specifies what you actually want the AI to do. “Help me with my presentation” is vague. But “Give me three ways to make my opening slide more engaging for an audience of small business owners” is actionable.

Style harnesses AI’s ability to adapt to different formats and audiences. Specify whether you want a formal report, a casual email, bullet points for executives, or an explanation suitable for teenagers. Tell the AI what voice you want to use – for example, a formal academic style, technical, engaging or conversational.
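
To make the acronym concrete, here is a minimal sketch in Python of how the four CATS ingredients might be assembled into one prompt. The cats_prompt helper is our illustration, not a function from any chatbot library, and the example values are adapted from the scenarios above.

```python
# A minimal sketch: composing a prompt from the four CATS ingredients.
# The cats_prompt helper is hypothetical, for illustration only.

def cats_prompt(context: str, angle: str, task: str, style: str) -> str:
    """Assemble a single prompt that covers context, angle, task and style."""
    return "\n".join([
        f"Context: {context}",
        f"Angle: {angle}",
        f"Task: {task}",
        f"Style: {style}",
    ])

prompt = cats_prompt(
    context=("I'm a nonprofit director writing a grant proposal to a "
             "foundation that funds environmental education programs "
             "for urban schools."),
    angle="Act as a critical peer reviewer and identify weaknesses in my argument.",
    task="Give me three ways to make my opening section more engaging.",
    style="Concise bullet points in a formal, professional tone.",
)
print(prompt)
```

Pasted into any chatbot, a prompt like this carries all four signals at once, rather than leaving the model to guess at them.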


Context is everything

Besides crafting a clear, effective prompt, you can also focus on managing the surrounding information the model has access to – a practice known as “context engineering”. Context engineering refers to everything that surrounds the prompt.

That means thinking about the environment and information the AI has access to: its memory function, instructions leading up to the task, prior conversation history, documents you upload, or examples of what good output looks like.
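
As a rough illustration, here is what that surrounding context might look like when assembled programmatically. The sketch assumes a chat-style API that accepts role-tagged messages (the common system/user/assistant convention); the document text and example output are placeholders, not taken from any real session or product.

```python
# A sketch of context engineering: the prompt is only the last message.
# Everything before it -- instructions, an uploaded document, an example of
# good output, prior conversation -- shapes what the model produces.

uploaded_document = (
    "Draft proposal: Our nonprofit seeks funding to expand environmental "
    "education in urban schools..."  # placeholder for a real uploaded file
)

example_of_good_output = (
    "Needs statement: Fewer than one in five schools in our district "
    "offers any environmental education..."  # placeholder exemplar
)

messages = [
    {"role": "system",
     "content": "You are helping a nonprofit director revise a grant "
                "proposal. Match the tone of the example provided."},
    {"role": "user",
     "content": f"Here is my current draft:\n{uploaded_document}"},
    {"role": "user",
     "content": f"Here is an example of the style I want:\n{example_of_good_output}"},
    {"role": "assistant",
     "content": "Understood. Which section should I focus on?"},
    # The actual request arrives last, with all of the above as context.
    {"role": "user",
     "content": "Rewrite the opening paragraph to match that style."},
]
```

The same logic applies in an ordinary chat window: what you have uploaded, pasted and said earlier in the session is all part of the context the model draws on.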

You should think about prompting as a conversation. If you’re not happy with the first response, push for more, ask for changes, or provide more clarifying information.

Don’t expect the AI to give a ready-made response. Instead, use it to trigger your own thinking. If you feel the AI has produced a lot of good material but you get stuck, copy the best parts into a fresh session and ask it to summarise and continue from there.

Keeping your wits

A word of caution though. Don’t get seduced by the human-like conversation abilities of these chatbots.

Always retain your professional distance and remind yourself that you are the only thinking part in this relationship. And always make sure to check the accuracy of anything an AI produces – errors are increasingly common.

AI systems are remarkably capable, but they need you – and human intelligence – to bridge the gap between their vast generic knowledge and your particular situation. Give them enough context to work with, and they might surprise you with how helpful they can be.




ASML finds even monopolists get the blues



Holding a virtual monopoly in a product on which the artificial intelligence boom relies should be a golden ticket. For chipmaker Nvidia, it has been. But ASML, which makes extraordinarily complex machines that etch silicon and is no less integral to the rise of AI, has found that ruling the roost can still be an up-and-down affair.

The €270bn Dutch manufacturer, which reports its earnings next week, is a sine qua non of technology; chips powering AI and even fridges are invariably etched by ASML’s kit. The flipside is its exposure to customers’ fortunes and politics.

Revenue is inherently lumpy, and a single paused purchase makes a big dent — a key difference from fellow AI monopolist Nvidia, which is at present struggling to meet demand for its top-end chips. ASML’s newest high numerical aperture (NA) systems go for €380mn; as an example of how volatile revenue can be for such big-ticket items, one delayed order would be akin to drivers holding off on buying 8,000-odd Teslas.

Initial hopes were high for robust spending on wafer fab equipment this year and next: SEMI, an industry body, reckoned in December on an increase of 7 per cent this year and twice that in 2026. Those hopes have since been dialled back; Jefferies, for example, now expects sales to flatline next year.

Mood music bears that out. Top chipmaker TSMC has sounded more cautious over the timing of the adoption of new high NA machines. Other big customers are reining in spending. Intel in April shaved its capital expenditure plans by $2bn to $18bn, while consensus numbers for Samsung Electronics suggest the South Korean chipmaker will underspend last year’s $39bn capex budget.

Politics is also getting thornier. Washington, seeking to hobble China’s tech prowess, has banned sales of ASML’s more advanced machines. Going further would hurt. China, which buys the less advanced but more profitable deep ultraviolet machines, typically accounts for about a quarter of sales. Last year, catch-up on orders lifted that to half.

Meanwhile, Chinese homegrown competition, given an extra nudge by US trade barriers, is evolving. Shenzhen government-backed SiCarrier, for example, claims to have encroached on ASML territory with lithography capable of producing less advanced chips.

The good news is that catching up in this industry, with its 5,000-strong supplier base and armies of engineers, takes years if not decades. Customers, too, will probably be deferring rather than nixing purchases. The zippier machines help customers juice yields; Intel reckons high NA cuts processes on a given layer from 40 steps to just 10.

Over time, ASML’s enviable market position looks solid — and perhaps more so than that of Nvidia, whose customers are increasingly trying to create their own chips. Yet the kit-maker’s shares have been the rockier investment. In the past year, ASML’s market value has shrunk by a third while Nvidia’s has risen by a quarter; Nvidia’s market capitalisation is within a whisker of $4tn. That makes ASML the braver bet, but by no means a worse one.

louise.lucas@ft.com


