
Fostering Effective Policy for a Brave New AI World: A Conversation with Rishi Bommasani


In the nine years since Rishi Bommasani began his academic career, the field of AI has evolved rapidly. His research has expanded in step, from a technical focus on building AI to exploring how to manage its societal and economic implications.

Now a senior research scholar at the Stanford Institute for Human-Centered AI (HAI), Bommasani recently authored a paper published in Science, joining 19 other scholars in setting out a vision for evidence-based AI policy.

In the following conversation, Bommasani describes where he believes he’s made the greatest impact so far, his predictions for AI policy, and the questions he still wants to address in AI governance.

How has your research path shifted along with the evolution of AI?

Today, AI is widely deployed and the transition from research to deployment is much shorter than in most other fields. Since I started my PhD in 2020, my work has shifted from how to build and evaluate AI to how to govern it. It has become hard for academia to build cutting-edge models because of the capital required, but we need academic leadership nonetheless. It has become more and more important to think about how we can bridge the divide between AI research and AI policy.

When you look back at what you’ve accomplished so far, what work do you think has been most impactful?

Overall, I think my work has shown that academia can pursue large-scale, cross-institutional, multi-disciplinary collaborations to have a more direct impact on public policy and AI governance. 

Two examples come to mind. The first is a paper I co-authored in 2021 where we coined the term “foundation models.” That framework endured through dramatic shifts in the field and became central to both the EU AI Act, the world’s first comprehensive AI legislation, and President Biden’s Executive Order on AI.

The second example is my work in helping close the AI research-policy gap. Our paper in Science on evidence-based AI policy was a nice capstone to a collection of works around AI governance. I also played a more hands-on role in advising European and U.S. leaders directly. The European Commission appointed me as an independent chair to oversee implementation of the EU AI Act. And following Biden’s executive order, I helped lead consensus-building efforts on how to approach open-source large language models, which included facilitating a private workshop with the White House and the National Telecommunications and Information Administration (NTIA).

The resulting paper, whose 26 authors spanned 17 organizations, along with the formal comment we led with colleagues at Princeton and a more policy-centric companion paper in Science, reflected the growing consensus on the idea of marginal risk. We urged policymakers to ask not just whether open models could be misused, but whether they introduced new or greater risks than existing technologies like search engines. That work informed the NTIA’s final report and shaped the U.S. approach to open models, which continues under the Trump administration’s AI Action Plan.

What other options have you explored for governing AI beyond public policy?

Public policy is just one approach to governing the growing number of increasingly important AI companies. There are also business and market incentives that are interesting to explore as AI adoption increases.

For example, in the U.S., we haven’t really regulated digital technology that much, whether that’s search engines, browsers, social media platforms, or digital ads. People in tech are used to having fairly little government intervention. If you’re trying to change things, you might be better off re-engineering business incentives without bringing in regulatory elements. Market-based approaches might be preferable to regulatory ones: they are more nimble, if one believes governments are slow, and more durable, since they can endure changes in administration.

Your most recent paper explores the need for “evidence-based AI policy.” Why is defining “evidence” in policy sometimes problematic?

We should use credible evidence in policymaking. But what evidence counts as credible? And how can we bring the right evidence to light?

In public health policy, evidence is typically observational. In economic policy, we permit more theoretical approaches like forecasting. We should create a standard for AI evidence that balances real-world data and theory to inform policy decisions.

Next, how can we sculpt incentives to generate more credible evidence faster? For example, policy could support third-party AI testing. In cybersecurity, companies sometimes give “white hat” hackers a legal safe harbor as they look for vulnerabilities with the intent of improving systems. But social media companies have retaliated against researchers trying to circumvent AI models’ controls to surface potential harms. What if we had safe harbors for good-faith third-party evaluation in the AI space?

What makes governing AI particularly difficult compared to crafting policies and guardrails for other technologies?

General-purpose technologies like AI, the internet, and electricity are very important to society; we all can feel that reality. Yet they are very tricky to understand, especially in real time, let alone govern effectively. What is clear is that these technologies don’t just create a new technological niche, with some companies building the tech and some consumers using it: they entirely alter how society as a whole operates. Our society pre- and post-internet is very different.

AI is implicated in an incredibly broad portfolio of risks like bias, self-harm, privacy violations, copyright violations, child sexual abuse material, cyberattacks, power concentration, geopolitical tension, economic disruption, and so on. While every technology we’ve built in history has some risks, this particular mixture of risks is unique.

So figuring out how to make progress on safety and security is complicated. For example, I can tell you that we are incrementally improving safety in self-driving cars, but I have no idea whether we’re making language models safer. Some of the questions about AI also have older counterparts that we still haven’t solved. We haven’t fully solved the privacy problems of the internet, or racial bias in hiring, for example, and now we have new, compounding problems of privacy and bias with AI.

How do you feel about the status of AI policy so far?

Right now, most AI policy ideas are highly speculative. Few ideas have been implemented, and we have very little clear signal on whether policy succeeds in producing better outcomes.

But I see two good trends happening. One is that globally, we have governmental institutions that think about AI, such as the Center for AI Standards and Innovation in the U.S. We have an actual collection of people in our government, some of whom come from technical backgrounds, who are committed to understanding the technology and its impact.

Second, there are a lot more nongovernmental and academic groups that study AI governance now, which is maturing the field. Whether we’ll collectively come up with the right ideas and implement them is uncertain, but having more people thinking about this seriously is progress.

What’s been the value of interdisciplinary centers like Stanford HAI in your work?

Almost always, interdisciplinary work is not going to be immediately appreciated by the underlying scholarly communities. But Stanford scholars generally think it’s worth reaching outside of your discipline to pursue large-scale societal impact. 

HAI and similar groups around campus facilitate this kind of interaction. That 2021 paper that coined the “foundation models” term is a great example of this. That project brought together more than 100 scholars from across 10 different departments at Stanford and introduced me to legal scholars, economists, and political scientists I still collaborate with today. 

It’s clear that AI is going to intersect with all parts of society and bring up fundamental questions. Will we, as researchers, choose to keep pace with all of the ways AI is interacting with the world? Will we build bridges and work on these problems together?


