

Compiling the Future of U.S. Artificial Intelligence Regulation

Experts examine the benefits and pitfalls of AI regulation.

Recently, the U.S. House of Representatives, voting along party lines, passed H.R. 1—colloquially known as the “One Big Beautiful Bill Act.” If enacted, H.R. 1 would pause any state or local regulation affecting artificial intelligence (AI) models or research for ten years.

Over the past several years, AI tools—from chatbots like ChatGPT and DeepSeek to sophisticated video-generating software such as Alphabet Inc.’s Veo 3—have gained widespread consumer acceptance. Approximately 40 percent of Americans use AI tools daily. These tools continue to improve rapidly, becoming more usable and useful for average consumers and corporate users alike.

Optimistic projections suggest that the continued adoption of AI could lead to trillions of dollars of economic growth. Unlocking the benefits of AI, however, undoubtedly requires meaningful social and economic adjustments in the face of new employment, cybersecurity, and information-consumption patterns. Experts estimate that widespread AI implementation could displace or transform approximately 40 percent of existing jobs. Some analysts warn that without robust safety nets or reskilling programs, this displacement could exacerbate existing inequalities, particularly for low-income workers and communities of color, and widen the gap between more and less developed nations.

Given the potential for dramatic and widespread economic displacement, national and state governments, human rights watchdog groups, and labor unions increasingly support greater regulatory oversight of the emerging AI sector.

The data center infrastructure required to support current AI tools already consumes as much electricity as the eleventh-largest national electricity market—rivaling that of France. Continued growth in the AI sector necessitates ever-greater electricity generation and storage capacity, creating significant potential for environmental impact. In addition to electricity use, AI development consumes large amounts of water for cooling, raising further sustainability concerns in water-scarce regions.

Industry insiders and critics alike note that overly broad training parameters and flawed or unrepresentative data can lead models to embed harmful stereotypes and mimic human biases. These biases lead critics to call for strict regulation of AI implementation in policing, national security, and other policy contexts.

Polling shows that American voters desire more regulation of AI companies, including limiting the training data AI models can employ, imposing environmental-impact taxes on AI companies, and outright banning AI implementation in some sectors of the economy.

Nonetheless, there is little consensus among academics, industry insiders, and legislators as to whether—much less how—the emerging AI sector should be regulated.

In this week’s Saturday Seminar, scholars discuss the need for AI regulation and the benefits and drawbacks of centralized federal oversight.

  • In an article in the Stanford Emerging Technology Review 2025, Fei-Fei Li, Christopher Manning, and Anka Reuel of Stanford University argue that federal regulation of AI may undermine U.S. leadership in the field by locking in rigid rules before key technologies have matured. Li, Manning, and Reuel caution that centralized regulation, especially of general-purpose AI models, risks discouraging competition, entrenching dominant firms, and shutting out third-party researchers. Instead, Li, Manning, and Reuel call for flexible regulatory models drawing on existing sectoral rules and voluntary governance to address use-specific risks. Such an approach, Li, Manning, and Reuel suggest, would better preserve the benefits of regulatory flexibility while maintaining targeted oversight of the areas of greatest risk.
  • In a paper in the Common Market Law Review, Philipp Hacker, a professor at the European University Viadrina, argues that AI regulation must weigh the significant climate impacts of machine learning technologies. Hacker highlights the substantial energy and water consumption needed to train large generative models such as GPT-4. Critiquing current European Union regulatory frameworks, including the General Data Protection Regulation and the then-proposed EU AI Act, Hacker urges policy reforms that move beyond transparency toward incorporating sustainability in design and consumption caps tied to emissions trading schemes. Finally, Hacker proposes these sustainable AI regulatory strategies as a broader blueprint for the environmentally conscious development of emerging technologies, such as blockchain and the Metaverse.
  • The Cato Institute’s David Inserra warns that government-led efforts to regulate AI could undermine free expression. In a recent briefing paper, Inserra explains that regulatory schemes often target content labeled as misinformation or hate speech—efforts that can lead to AI systems reflecting narrow ideological norms. Inserra cautions that such rules may entrench dominant companies and crowd out AI products designed to reflect a wider range of views. Inserra calls for a flexible approach grounded in soft law, such as voluntary codes of conduct and third-party standards, to allow for the development of AI tools that support diverse expression.
  • In an article in the North Carolina Law Review, Erwin Chemerinsky, the Dean of UC Berkeley Law, and practitioner Alex Chemerinsky argue that state regulation of a closely related field—internet content moderation more broadly—is constitutionally problematic and bad policy. Drawing on precedents including Miami Herald v. Tornillo and Hurley v. Irish-American Gay Group, Chemerinsky and Chemerinsky contend that many state laws restricting or requiring content moderation violate First Amendment editorial discretion protections. Chemerinsky and Chemerinsky further argue that federal law preempts most state content moderation regulations. The Chemerinskys warn that allowing multiple state regulatory schemes would create a “lowest-common-denominator” problem where the most restrictive states effectively control nationwide internet speech, undermining the editorial rights of platforms and the free expression of their users.
  • In a forthcoming chapter, John Yun of the Antonin Scalia Law School at George Mason University cautions against premature regulation of AI. Yun argues that overly restrictive AI regulations risk stifling innovation and could impose long-term social costs that outweigh any short-term benefits gained from mitigating immediate harms. Drawing parallels with the early days of internet regulation, Yun emphasizes that premature interventions could entrench market incumbents, limit competition, and crowd out potentially superior market-driven solutions to emerging risks. Instead, Yun advocates applying existing laws of general applicability to AI and maintaining regulatory restraint similar to the approach adopted during the formative early years of the internet.
  • In a forthcoming article in the Journal of Learning Analytics, Rogers Kaliisa of the University of Oslo and several coauthors examine how the diversity of AI regulations across different countries creates an “uneven storm” for learning analytics research. Kaliisa and his coauthors analyze how comprehensive EU regulations such as the AI Act, U.S. sector-specific approaches, and China’s algorithm disclosure requirements impose different restrictions on the use of educational data in AI research. Kaliisa and his team warn that strict rules—particularly the EU’s ban on emotion recognition and biometric sensors—may limit innovative AI applications, widening global inequalities in educational AI development. The Kaliisa team proposes that experts engage with policymakers to develop frameworks that balance innovation with ethical safeguards across borders.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.





Russia allegedly field-testing deadly next-gen AI drone powered by Nvidia Jetson Orin — Ukrainian military official says Shahed MS001 is a ‘digital predator’ that identifies targets on its own

Ukrainian Major General Vladyslav Klochkov (Владислав Клочков) says Russia is field-testing a deadly new drone that uses AI and thermal vision to think on its own, identifying targets without coordinates and bypassing most air defense systems. According to the senior military figure, inside is an Nvidia Jetson Orin module, which has enabled the MS001 to become “an autonomous combat platform that sees, analyzes, decides, and strikes without external commands.”

Digital predator dynamically weighs targets

With the Jetson Orin as its brain, the upgraded MS001 drone doesn’t just follow prescribed coordinates, like some hyper-accurate doodlebug. It actually thinks. “It identifies targets, selects the highest-value one, adjusts its trajectory, and adapts to changes — even in the face of GPS jamming or target maneuvers,” says Klochkov. “This is not a loitering munition. It is a digital predator.”





Artificial Intelligence Predicts the Packers’ 2025 Season!!!

On today’s show, Andy simulates the Packers’ 2025 season using artificial intelligence. Find out the results on today’s all-new Pack-A-Day Podcast! #Packers #GreenBayPackers #ai

To become a member of the Pack-A-Day Podcast, click here: https://www.youtube.com/channel/UCSGx5Pq0zA_7O726M3JEptA/join

Don’t forget to subscribe!!! Twitter/BlueSky: @andyhermannfl

If you’d like to support my channel, please donate to:
PayPal: https://paypal.me/andyhermannfl
Venmo: @Andrew_Herman
Email: [email protected]
Discord: https://t.co/iVVltoB2Hg







WHO Director-General’s remarks at the XVII BRICS Leaders’ Summit, session on Strengthening Multilateralism, Economic-Financial Affairs, and Artificial Intelligence – 6 July 2025

Your Excellency President Lula da Silva,

Excellencies, Heads of State, Heads of Government,

Heads of delegation,

Dear colleagues and friends,

Thank you, President Lula, and Brazil’s BRICS Presidency for your commitment to equity, solidarity, and multilateralism.

My intervention will focus on three key issues: challenges to multilateralism, cuts to Official Development Assistance, and the role of AI and other digital tools.

First, we are facing significant challenges to multilateralism.

However, there was good news at the World Health Assembly in May.

WHO’s Member States demonstrated their commitment to international solidarity through the adoption of the Pandemic Agreement. South Africa co-chaired the negotiations, and I would like to thank South Africa.

It is time to finalize the next steps.

We ask the BRICS to complete the annex on Pathogen Access and Benefit Sharing so that the Agreement is ready for ratification at next year’s World Health Assembly. Brazil is co-chairing the committee, and I thank Brazil for their leadership.

Second are cuts to Official Development Assistance.

Compounding the chronic domestic underinvestment and aid dependency in developing countries, drastic cuts to foreign aid have disrupted health services, costing lives and pushing millions into poverty.

The recent Financing for Development conference in Sevilla made progress in key areas, particularly in addressing the debt trap that prevents vital investments in health and education.

Going forward, it is critical for countries to mobilize domestic resources and foster self-reliance to support primary healthcare as the foundation of universal health coverage.

Because health is not a cost to contain, it’s an investment in people and prosperity.

Third is AI and other digital tools.

Planning for the future of health requires us to embrace a digital future, including the use of artificial intelligence. The future of health is digital.

AI has the potential to predict disease outbreaks, improve diagnosis, expand access, and enable local production.

AI can serve as a powerful tool for equity.

However, it is crucial to ensure that AI is used safely, ethically, and equitably.

We encourage governments, especially BRICS, to invest in AI and digital health, including governance and national digital public infrastructure, to modernize health systems while addressing ethical, safety, and equity issues.

WHO will be by your side every step of the way, providing guidance, norms, and standards.

Excellencies, only by working together through multilateralism can we build a healthier, safer, and fairer world for all.

Thank you. Obrigado.


