
AI Insights

Compiling the Future of U.S. Artificial Intelligence Regulation



Experts examine the benefits and pitfalls of AI regulation.

Recently, the U.S. House of Representatives, voting along party lines, passed H.R. 1—colloquially known as the “One Big Beautiful Bill Act.” If enacted, H.R. 1 would pause any state or local regulations affecting artificial intelligence (AI) models or research for ten years.

Over the past several years, AI tools—from chatbots like ChatGPT and DeepSeek to sophisticated video-generating software such as Alphabet Inc.’s Veo 3—have gained widespread consumer acceptance. Approximately 40 percent of Americans use AI tools daily. These tools continue to improve rapidly, becoming more usable and useful for average consumers and corporate users alike.

Optimistic projections suggest that the continued adoption of AI could lead to trillions of dollars of economic growth. Unlocking the benefits of AI, however, undoubtedly requires meaningful social and economic adjustments in the face of new employment, cybersecurity, and information-consumption patterns. Experts estimate that widespread AI implementation could displace or transform approximately 40 percent of existing jobs. Some analysts warn that without robust safety nets or reskilling programs, this displacement could exacerbate existing inequalities, particularly for low-income workers and communities of color, and widen the gap between more and less developed nations.

Given the potential for dramatic and widespread economic displacement, national and state governments, human rights watchdog groups, and labor unions increasingly support greater regulatory oversight of the emerging AI sector.

The data center infrastructure required to support current AI tools already consumes as much electricity as the eleventh-largest national market—rivaling that of France. Continued growth in the AI sector necessitates ever-greater electricity generation and storage capacity, creating significant potential for environmental impact. In addition to electricity use, AI development consumes large amounts of water for cooling, raising further sustainability concerns in water-scarce regions.

Industry insiders and critics alike note that overly broad training parameters and flawed or unrepresentative data can lead models to embed harmful stereotypes and mimic human biases. These biases lead critics to call for strict regulation of AI implementation in policing, national security, and other policy contexts.

Polling shows that American voters desire more regulation of AI companies, including limiting the training data AI models can employ, imposing environmental-impact taxes on AI companies, and outright banning AI implementation in some sectors of the economy.

Nonetheless, there is little consensus among academics, industry insiders, and legislators as to whether—much less how—the emerging AI sector should be regulated.

In this week’s Saturday Seminar, scholars discuss the need for AI regulation and the benefits and drawbacks of centralized federal oversight.

  • In an article in the Stanford Emerging Technology Review 2025, Fei-Fei Li, Christopher Manning, and Anka Reuel of Stanford University argue that federal regulation of AI may undermine U.S. leadership in the field by locking in rigid rules before key technologies have matured. The authors caution that centralized regulation, especially of general-purpose AI models, risks discouraging competition, entrenching dominant firms, and shutting out third-party researchers. Instead, they call for flexible regulatory models drawing on existing sectoral rules and voluntary governance to address use-specific risks. Such an approach, they suggest, would better preserve the benefits of regulatory flexibility while maintaining targeted oversight of the areas of greatest risk.
  • In a paper in the Common Market Law Review, Philipp Hacker, a professor at the European University Viadrina, argues that AI regulation must weigh the significant climate impacts of machine learning technologies. Hacker highlights the substantial energy and water consumption needed to train large generative models such as GPT-4. Critiquing current European Union regulatory frameworks, including the General Data Protection Regulation and the then-proposed EU AI Act, Hacker urges policy reforms that move beyond transparency toward incorporating sustainability in design and consumption caps tied to emissions trading schemes. Finally, Hacker proposes these sustainable AI regulatory strategies as a broader blueprint for the environmentally conscious development of emerging technologies, such as blockchain and the Metaverse.
  • The Cato Institute’s David Inserra warns that government-led efforts to regulate AI could undermine free expression. In a recent briefing paper, Inserra explains that regulatory schemes often target content labeled as misinformation or hate speech—efforts that can lead to AI systems reflecting narrow ideological norms. Inserra cautions that such rules may entrench dominant companies and crowd out AI products designed to reflect a wider range of views. Inserra calls for a flexible approach grounded in soft law, such as voluntary codes of conduct and third-party standards, to allow for the development of AI tools that support diverse expressions of speech.
  • In an article in the North Carolina Law Review, Erwin Chemerinsky, the Dean of UC Berkeley Law, and practitioner Alex Chemerinsky argue that state regulation of a closely related field—internet content moderation more broadly—is constitutionally problematic and bad policy. Drawing on precedents including Miami Herald v. Tornillo and Hurley v. Irish-American Gay Group, Chemerinsky and Chemerinsky contend that many state laws restricting or requiring content moderation violate First Amendment editorial discretion protections. Chemerinsky and Chemerinsky further argue that federal law preempts most state content moderation regulations. The Chemerinskys warn that allowing multiple state regulatory schemes would create a “lowest-common-denominator” problem where the most restrictive states effectively control nationwide internet speech, undermining the editorial rights of platforms and the free expression of their users.
  • In a forthcoming chapter, John Yun, of Antonin Scalia Law School at George Mason University, cautions against premature regulation of AI. Yun argues that overly restrictive AI regulations risk stifling innovation and could lead to long-term social costs outweighing any short-term benefits gained from mitigating immediate harms. Drawing parallels with the early days of internet regulation, Yun emphasizes that premature interventions could entrench market incumbents, limit competition, and crowd out potentially superior market-driven solutions to emerging risks. Instead, Yun advocates applying existing laws of general applicability to AI and maintaining a regulatory restraint similar to the approach adopted during the formative early years of the internet.
  • In a forthcoming article in the Journal of Learning Analytics, Rogers Kaliisa of the University of Oslo and several coauthors examine how the diversity of AI regulations across different countries creates an “uneven storm” for learning analytics research. Kaliisa and his coauthors analyze how comprehensive EU regulations such as the AI Act, U.S. sector-specific approaches, and China’s algorithm disclosure requirements impose different restrictions on the use of educational data in AI research. Kaliisa and his team warn that strict rules—particularly the EU’s ban on emotion recognition and biometric sensors—may limit innovative AI applications, widening global inequalities in educational AI development. The Kaliisa team proposes that experts engage with policymakers to develop frameworks that balance innovation with ethical safeguards across borders.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.





Pittsburgh’s AI summit: five key takeaways



The push for artificial intelligence-related investments in Western Pennsylvania continued Thursday with a second conference that brought together business leaders and elected officials. 

Not in attendance this time was President Donald Trump, who headlined a July 15 celebration of AI opportunity at Carnegie Mellon University.

This time Gov. Josh Shapiro, U.S. Sen. David McCormick and others converged in Bakery Square in Larimer to emphasize emerging public-private initiatives in anticipation of growing data center development and other artificial intelligence-related infrastructure including power plants. 

Here’s what speakers and attendees at the summit were saying.

AI is not a fad

As regional leaders and business investors consider their options, BNY Mellon CEO Robin Vince cautioned against dismissing AI.

“The way to get left behind in the next 10 years is to not care about AI,” Vince said.

“AI is transforming everything,” said Selin Song during Thursday’s event. As president of Google Customer Solutions, Song said that the company’s recent investment of $25 million across the Pennsylvania-New Jersey-Maryland grid will help give AI training access to the more than 1 million small businesses in the state.

Google isn’t the only game in town 

Shapiro noted that Amazon recently announced plans to spend at least $20 billion to establish multiple high-tech cloud computing and AI innovation campuses across the state.

“This is a generational change,” Shapiro said, calling it the largest private sector investment in Pennsylvania’s history. “This is our next chapter in innovative growth. It all fits together. This new investment is beyond data center 1.0 that we saw in Virginia.”   

Fracking concerns elevated

With all of the plans for new power-hungry data centers, some are concerned that the AI push will create more environmental destruction. Outside the summit, Food & Water Watch Pennsylvania cautioned that the interest in AI development is a “Trojan horse” for more natural gas fracking. Amid President Donald Trump’s attempts to dismantle wind and solar power, alternatives to natural gas appear limited. 

People gather in the Bakery Square area of Larimer Thursday, Sept. 11, to protest the nearby AI Horizons Summit. (Photo by Eric Jankiewicz/Pittsburgh’s Public Source)

Nuclear ready for its moment

But one possible alternative was raised at the AI conference by Westinghouse Electric Company’s interim CEO Dan Summer.

The Pittsburgh-headquartered company is leading a renewed interest in nuclear energy with plans to build a number of its AP1000 reactors to help match energy needs and capabilities.

Summer said that the company is partnering with Google, allowing it to leverage Google’s AI capabilities “with our nuclear operations to construct new nuclear here.”

China vs. ‘heroes’

Underlying much of the AI activity: concerns about China’s work in this field.

“With its vast resources, enormous capital, energy, workforce, the Chinese government is leveraging its resources to beat the United States in AI development,” said Nazak Nikakhtar, a national security and international trade attorney who chaired one of the panels Thursday.

Carnegie Mellon University President Farnam Jahanian, right, speaks at the AI Horizons Summit alongside Gov. Josh Shapiro, center, and other panelists. (Photo by Eric Jankiewicz/Pittsburgh’s Public Source)

Speaking to EQT’s CEO Toby Rice and Groq executive Ian Andrews, Nikakhtar outlined some of the challenges she saw in U.S. development of AI technology compared to China. 

“We are attempting to leverage, now, our own resources, albeit in some respects much more limited vis-a-vis what China has, to accelerate AI leadership here in the United States and beat China,” she said. “But we’re somewhat constrained by the resources we have, by our population, by workforce, capital.”

Rice said in response that the natural resources his company is extracting will help power the country’s ability to compete with China. 

Rice drew a link between the 9/11 terror attacks 24 years earlier and the “urgency” of competing with China in AI.

“People are looking to take down American economies,” Rice said. “And we have heroes. Never forget. And I do believe that us winning this race against China in AI is going to be one of the most heroic things we’re going to do.”

Eric Jankiewicz is PublicSource’s economic development reporter and can be reached at ericj@publicsource.org or on Twitter @ericjankiewicz.








Commanders vs. Packers props, SportsLine Machine Learning Model AI picks, bets: Jordan Love Over 223.5 yards



The NFL Week 2 schedule gets underway with a Thursday Night Football matchup between NFC playoff teams from a year ago. The Washington Commanders battle the Green Bay Packers beginning at 8:15 p.m. ET from Lambeau Field. Second-year quarterback Jayden Daniels led the Commanders to a 21-6 opening-day win over the New York Giants, completing 19 of 30 passes for 233 yards and one touchdown. Jordan Love, meanwhile, helped propel the Packers to a dominating 27-13 win over the Detroit Lions in Week 1. He completed 16 of 22 passes for 188 yards and two touchdowns. 

NFL prop bettors will likely target the two young quarterbacks with NFL prop picks, in addition to proven playmakers like Deebo Samuel, Romeo Doubs and Zach Ertz. Green Bay’s Jayden Reed has been dealing with a foot injury, but still managed to haul in a touchdown pass in the opener, while Austin Ekeler (shoulder) does not carry an injury designation for TNF. The Packers enter as a 3-point favorite with Green Bay at -172 on the money line, while the over/under is 49 points. Before betting any Commanders vs. Packers props for Thursday Night Football, you need to see the Commanders vs. Packers prop predictions powered by SportsLine’s Machine Learning Model AI.

Built using cutting-edge artificial intelligence and machine learning techniques by SportsLine’s Data Science team, AI Predictions and AI Ratings are generated for each player prop. 

For Packers vs. Commanders NFL betting on Thursday Night Football, the Machine Learning Model has evaluated the NFL player prop odds and provided Commanders vs. Packers prop picks. You can only see the Machine Learning Model player prop predictions for Washington vs. Green Bay here.

Top NFL player prop bets for Commanders vs. Packers

After analyzing the Commanders vs. Packers props and examining the dozens of NFL player prop markets, SportsLine’s Machine Learning Model says Packers quarterback Love goes Over 223.5 passing yards (-112 at FanDuel). Love passed for 224 or more yards in eight games a year ago, despite an injury-filled season. In 15 regular-season games in 2024, he completed 63.1% of his passes for 3,389 yards and 25 touchdowns with 11 interceptions. Additionally, Washington allowed an average of 240.3 passing yards per game on the road last season.

In a 30-13 win over the Seattle Seahawks on Dec. 15, he completed 20 of 27 passes for 229 yards and two touchdowns. Love completed 21 of 28 passes for 274 yards and two scores in a 30-17 victory over the Miami Dolphins on Nov. 28. The model projects Love to pass for 259.5 yards, giving this prop bet a 4.5 rating out of 5. See more NFL props here, and new users can also target the FanDuel promo code, which offers new users $300 in bonus bets if their first $5 bet wins.

How to make NFL player prop bets for Washington vs. Green Bay

In addition, the SportsLine Machine Learning Model says another star sails past his total and has nine additional NFL props that are rated four stars or better. You need to see the Machine Learning Model analysis before making any Commanders vs. Packers prop bets for Thursday Night Football.

Which Commanders vs. Packers prop bets should you target for Thursday Night Football? Visit SportsLine now to see the top Commanders vs. Packers props, all from the SportsLine Machine Learning Model.







Adobe Says Its AI Sales Are Coming in Strong. But Will It Lift the Stock?



Adobe (ADBE) just reported record quarterly revenue driven by artificial intelligence gains. Will it revive confidence in the stock?

The creative software giant late Thursday posted adjusted earnings per share of $5.31 on revenue that jumped 11% year-over-year to a record $5.99 billion in the fiscal third quarter, above analysts’ estimates compiled by Visible Alpha, as AI revenues topped company targets.

CEO Shantanu Narayen said that with the third-quarter’s revenue driven by AI, Adobe has already surpassed its “AI-first” revenue goals for the year, leading the company to boost its outlook. The company said it now anticipates full-year adjusted earnings of $20.80 to $20.85 per share and revenue of $23.65 billion to $23.7 billion, up from adjusted earnings of $20.50 to $20.70 on revenue of $23.50 billion to $23.6 billion previously.

Shares of Adobe rose in late trading. But they’ve had a tough year so far, with the stock down more than 20% for 2025 through Thursday’s close amid worries about the company’s AI progress and growing competition.

Wall Street is optimistic. The shares finished Thursday a bit below $351, while the mean price target tracked by Visible Alpha, above $461, represents a premium of more than 30%. Most of the analysts tracking the stock have “buy” ratings.

But even that target represents a degree of caution in the context of recent highs. The shares were above $600 in February 2024.


