AI Insights
Compiling the Future of U.S. Artificial Intelligence Regulation
Experts examine the benefits and pitfalls of AI regulation.
Recently, the U.S. House of Representatives, voting along party lines, passed H.R. 1—colloquially known as the “One Big Beautiful Bill Act.” If enacted, H.R. 1 would pause any state or local regulation affecting artificial intelligence (AI) models or research for ten years.
Over the past several years, AI tools—from chatbots like ChatGPT and DeepSeek to sophisticated video-generating software such as Alphabet Inc.’s Veo 3—have gained widespread consumer acceptance. Approximately 40 percent of Americans use AI tools daily. These tools continue to improve rapidly, becoming more usable and useful for average consumers and corporate users alike.
Optimistic projections suggest that the continued adoption of AI could lead to trillions of dollars of economic growth. Unlocking the benefits of AI, however, undoubtedly requires meaningful social and economic adjustments in the face of new employment, cybersecurity, and information-consumption patterns. Experts estimate that widespread AI implementation could displace or transform approximately 40 percent of existing jobs. Some analysts warn that without robust safety nets or reskilling programs, this displacement could exacerbate existing inequalities, particularly for low-income workers and communities of color, and could widen gaps between more and less developed nations.
Given the potential for dramatic and widespread economic displacement, national and state governments, human rights watchdog groups, and labor unions increasingly support greater regulatory oversight of the emerging AI sector.
The data center infrastructure required to support current AI tools already consumes as much electricity as the eleventh-largest national market—rivaling that of France. Continued growth in the AI sector necessitates ever-greater electricity generation and storage capacity, creating significant potential for environmental impact. In addition to electricity use, AI development consumes large amounts of water for cooling, raising further sustainability concerns in water-scarce regions.
Industry insiders and critics alike note that overly broad training parameters and flawed or unrepresentative data can lead models to embed harmful stereotypes and mimic human biases. These biases lead critics to call for strict regulation of AI implementation in policing, national security, and other policy contexts.
Polling shows that American voters desire more regulation of AI companies, including limiting the training data AI models can employ, imposing environmental-impact taxes on AI companies, and outright banning AI implementation in some sectors of the economy.
Nonetheless, there is little consensus among academics, industry insiders, and legislators as to whether—much less how—the emerging AI sector should be regulated.
In this week’s Saturday Seminar, scholars discuss the need for AI regulation and the benefits and drawbacks of centralized federal oversight.
- In an article in the Stanford Emerging Technology Review 2025, Fei-Fei Li, Christopher Manning, and Anka Reuel of Stanford University argue that federal regulation of AI may undermine U.S. leadership in the field by locking in rigid rules before key technologies have matured. They caution that centralized regulation, especially of general-purpose AI models, risks discouraging competition, entrenching dominant firms, and shutting out third-party researchers. Instead, the authors call for flexible regulatory models that draw on existing sectoral rules and voluntary governance to address use-specific risks. Such an approach, they suggest, would better preserve the benefits of regulatory flexibility while maintaining targeted oversight of the areas of greatest risk.
- In a paper in the Common Market Law Review, Philipp Hacker, a professor at the European University Viadrina, argues that AI regulation must weigh the significant climate impacts of machine learning technologies. Hacker highlights the substantial energy and water consumption needed to train large generative models such as GPT-4. Critiquing current European Union regulatory frameworks, including the General Data Protection Regulation and the then-proposed EU AI Act, Hacker urges policy reforms that move beyond transparency toward incorporating sustainability in design and consumption caps tied to emissions trading schemes. Finally, Hacker proposes these sustainable AI regulatory strategies as a broader blueprint for the environmentally conscious development of emerging technologies, such as blockchain and the Metaverse.
- The Cato Institute’s David Inserra warns that government-led efforts to regulate AI could undermine free expression. In a recent briefing paper, Inserra explains that regulatory schemes often target content labeled as misinformation or hate speech—efforts that can lead to AI systems reflecting narrow ideological norms. Such rules, he cautions, may entrench dominant companies and crowd out AI products designed to reflect a wider range of views. Inserra calls for a flexible approach grounded in soft law, such as voluntary codes of conduct and third-party standards, to allow for the development of AI tools that support diverse expression.
- In an article in the North Carolina Law Review, Erwin Chemerinsky, the Dean of UC Berkeley Law, and practitioner Alex Chemerinsky argue that state regulation of a closely related field—internet content moderation more broadly—is constitutionally problematic and bad policy. Drawing on precedents including Miami Herald v. Tornillo and Hurley v. Irish-American Gay Group, Chemerinsky and Chemerinsky contend that many state laws restricting or requiring content moderation violate First Amendment editorial discretion protections. Chemerinsky and Chemerinsky further argue that federal law preempts most state content moderation regulations. The Chemerinskys warn that allowing multiple state regulatory schemes would create a “lowest-common-denominator” problem where the most restrictive states effectively control nationwide internet speech, undermining the editorial rights of platforms and the free expression of their users.
- In a forthcoming chapter, John Yun, of Antonin Scalia Law School at George Mason University, cautions against premature regulation of AI. Yun argues that overly restrictive AI regulations risk stifling innovation and could lead to long-term social costs outweighing any short-term benefits gained from mitigating immediate harms. Drawing parallels with the early days of internet regulation, Yun emphasizes that premature interventions could entrench market incumbents, limit competition, and crowd out potentially superior market-driven solutions to emerging risks. Instead, Yun advocates applying existing laws of general applicability to AI and maintaining a regulatory restraint similar to the approach adopted during the formative early years of the internet.
- In a forthcoming article in the Journal of Learning Analytics, Rogers Kaliisa of the University of Oslo and several coauthors examine how the diversity of AI regulations across different countries creates an “uneven storm” for learning analytics research. Kaliisa and his coauthors analyze how comprehensive EU regulations such as the AI Act, sector-specific U.S. approaches, and China’s algorithm disclosure requirements impose different restrictions on the use of educational data in AI research. The team warns that strict rules—particularly the EU’s ban on emotion recognition and biometric sensors—may limit innovative AI applications, widening global inequalities in educational AI development. Kaliisa and his coauthors propose that experts engage with policymakers to develop frameworks that balance innovation with ethical safeguards across borders.
The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.
Ascendion Wins Gold as the Artificial Intelligence Service Provider of the Year in 2025 Globee® Awards
- Awarded Gold for excellence in real-world AI implementation and measurable enterprise outcomes
- Recognized for agentic AI innovation through ASCENDION AAVA platform, accelerating software delivery and unlocking business value at scale
- Validated as a category leader in operationalizing AI across enterprise ecosystems—from generative and ethical AI to machine learning and NLP—delivering productivity, transparency, and transformation
BASKING RIDGE, N.J., July 7, 2025 /PRNewswire/ — Ascendion, a leader in AI-powered software engineering, has been awarded Gold as the Artificial Intelligence Service Provider of the Year in the 2025 Globee® Awards for Artificial Intelligence. This prestigious honor recognizes Ascendion’s bold leadership in delivering practical, enterprise-grade AI solutions that drive measurable business outcomes across industries.
The Globee® Awards for Artificial Intelligence celebrate breakthrough achievements across the full spectrum of AI technologies including machine learning, natural language processing, generative AI, and ethical AI. Winners are recognized for setting new standards in transforming industries, enhancing user experiences, and solving real-world problems with artificial intelligence (AI).
“This recognition validates more than our AI capabilities. It confirms the bold vision that drives Ascendion,” said Karthik Krishnamurthy, Chief Executive Officer, Ascendion. “We’ve been engineering the future with AI long before it became a buzzword. Today, our clients aren’t chasing trends; they’re building what’s next with us. This award proves that when you combine powerful AI platforms, cutting-edge technology, and the relentless pursuit of meaningful outcomes, transformation moves from promise to fact. That’s Engineering to the Power of AI in action.”
Ascendion earned this recognition by driving real-world impact with its ASCENDION AAVA platform and agentic AI capabilities, transforming enterprise software development and delivery. This strategic approach enables clients to modernize engineering workflows, reduce technical debt, increase transparency, and rapidly turn AI innovation into scalable, market-ready solutions. Across industries like banking and financial services, healthcare and life sciences, retail and consumer goods, high-tech, and more, Ascendion is committed to helping clients move beyond experimentation to build AI-first systems that deliver real results.
“The 2025 winners reflect the innovation and forward-thinking mindset needed to lead in AI today,” said San Madan, President of the Globee® Awards. “With organizations across the globe engaging in data-driven evaluations, this recognition truly reflects broad industry endorsement and validation.”
About Ascendion
Ascendion is a leading provider of AI-powered software engineering solutions that help businesses innovate faster, smarter, and with greater impact. We partner with over 400 Global 2000 clients across North America, APAC, and Europe to tackle complex challenges in applied AI, cloud, data, experience design, and workforce transformation. Powered by 11,000+ experts, a bold culture, and our proprietary Engineering to the Power of AI (EngineeringAI) approach, we deliver outcomes that build trust, unlock value, and accelerate growth. Headquartered in New Jersey, with 40+ global offices, Ascendion combines scale, agility, and ingenuity to engineer what’s next. Learn more at https://ascendion.com.
Engineering to the Power of AI™, AAVA™, EngineeringAI, Engineering to Elevate Life™, DataAI, ExperienceAI, Platform EngineeringAI, Product EngineeringAI, and Quality EngineeringAI are trademarks or service marks of Ascendion®. AAVA™ is pending registration. Unauthorized use is strictly prohibited.
About the Globee® Awards
The Globee® Awards present recognition in ten programs and competitions, including the Globee® Awards for Achievement, Globee® Awards for Artificial Intelligence, Globee® Awards for Business, Globee® Awards for Excellence, Globee® Awards for Cybersecurity, Globee® Awards for Disruptors, Globee® Awards for Impact, Globee® Awards for Innovation (also known as Golden Bridge Awards®), Globee® Awards for Leadership, and the Globee® Awards for Technology. To learn more about the Globee Awards, please visit the website: https://globeeawards.com.
SOURCE Ascendion
Overcoming the Traps that Prevent Growth in Uncertain Times
July 7, 2025
Today, with uncertainty a seemingly permanent condition, executives need to weave adaptability, resilience, and clarity into their operating plans. The best executives will implement strategies that don’t just sustain their businesses; they enable growth.
AI-driven CDR: The shield against modern cloud threats
Cloud computing is the backbone of modern enterprise innovation, but with speed and scalability comes a growing storm of cyber threats. Cloud adoption continues to skyrocket. In fact, by 2028, cloud-native platforms will serve as the foundation for more than 95% of new digital initiatives. The traditional perimeter has all but disappeared. The result? A significantly expanded attack surface and a growing volume of threats targeting cloud workloads.
Studies tell us that 80% of security exposures now originate in the cloud, and threats targeting cloud environments have recently increased by 66%, underscoring the urgency for security strategies purpose-built for this environment. The reality for organizations is stark. Legacy tools designed for static, on-premises architectures can’t keep up. What’s needed is a new approach—one that’s intelligent, automated, and cloud-native. Enter AI-driven cloud detection and response (CDR).
Why legacy tools fall short
Traditional security approaches leave organizations exposed. Posture management has been the foundation of cloud security, helping teams identify misconfigurations and enforce compliance. Security risks, however, don’t stop at misconfigurations or vulnerabilities.
- Limited visibility: Cloud assets are ephemeral, spinning up and down in seconds. Legacy tools lack the telemetry and agility to provide continuous, real-time visibility.
- Operational silos: Disconnected cloud and SOC operations create blind spots and slow incident response.
- Manual burden: Analysts are drowning in alerts. Manual triage can’t scale with the velocity and complexity of cloud-native threats.
- Delayed response: In today’s landscape, every second counts. 60% of organizations take longer than four days to resolve cloud security issues.
The AI-powered CDR advantage
AI-powered CDR solves these challenges by combining the speed of automation with the intelligence of machine learning—offering CISOs a modern, proactive defense. Organizations need more than static posture security. They need real-time prevention.
- Real-time threat prevention and detection: AI engines analyze vast volumes of telemetry in real time—logs, flow data, behavior analytics. The full context this provides enables the detection and prevention of threats as they unfold. Organizations with AI-enhanced detection reduced breach lifecycle times by more than 100 days.
- Unified security operations: CDR solutions bridge the gap between cloud and SOC teams by centralizing detection and response across environments, eliminating redundant tooling and fostering collaboration, both essential when dealing with fast-moving incidents.
- Context-rich insights: Modern CDR solutions deliver actionable insights enriched with context—identifying not just the issue, but why it matters. This empowers teams to prioritize effectively, slashing false positives and accelerating triage.
- Intelligent automation: From context enrichment to auto-containment of compromised workloads, AI-enabled automation reduces the manual load on analysts and accelerates response.
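The triage-and-containment flow described above can be sketched as a simple policy loop. The sketch below is illustrative only: the alert fields, thresholds, and action names are assumptions for the example, not any vendor's actual implementation. It shows the core idea that context (here, whether a workload is in production) shifts the decision between auto-containment, human triage, and logging.

```python
from dataclasses import dataclass

@dataclass
class CloudAlert:
    workload_id: str
    signal: str        # e.g., "crypto_miner_process", "unusual_egress"
    risk_score: float  # hypothetical model-assigned score in [0, 1]
    is_production: bool

# Hypothetical policy thresholds for this sketch.
CONTAIN_THRESHOLD = 0.9
TRIAGE_THRESHOLD = 0.6

def triage(alert: CloudAlert) -> str:
    """Decide how to handle an alert: auto-contain, route to SOC, or log."""
    # Context raises the effective score for production workloads, where a
    # live compromise is costlier than a brief quarantine.
    effective = alert.risk_score + (0.05 if alert.is_production else 0.0)
    if effective >= CONTAIN_THRESHOLD:
        return "auto-contain"   # isolate the workload, revoke its credentials
    if effective >= TRIAGE_THRESHOLD:
        return "human-triage"   # enriched alert routed to the SOC queue
    return "log-only"           # recorded but suppressed from analyst view

if __name__ == "__main__":
    alerts = [
        CloudAlert("web-7f2", "crypto_miner_process", 0.93, True),
        CloudAlert("batch-19", "unusual_egress", 0.71, False),
        CloudAlert("dev-3a", "port_scan", 0.30, False),
    ]
    for a in alerts:
        print(a.workload_id, "->", triage(a))
```

In a real CDR system the score would come from models over telemetry streams and the containment action would call cloud-provider APIs; the point of the sketch is only the shape of the automated decision, not its inputs.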
The path forward
Organizations face unprecedented pressure to secure fast-changing cloud environments without slowing innovation. Relying on outdated security stacks is no longer viable. Cortex Cloud CDR from Palo Alto Networks delivers the speed, context, and intelligence required to defend against the evolving threat landscape. With over 10,000 detectors and 2,600+ machine learning models, Cortex Cloud CDR identifies and prevents high-risk threats with precision.
It’s time to shift from reactive defense to proactive protection. AI-driven CDR isn’t just another tool—it’s the cornerstone of modern cloud security strategy. And for CISOs, it’s the shield your organization needs to stay resilient in the face of tomorrow’s threats.