AI Insights

Regulatory Policy and Practice on AI’s Frontier

Adaptive, expert-led regulation can unlock the promise of artificial intelligence.

Technological breakthroughs have historically played a distinctive role in accelerating economic growth, expanding opportunity, and enhancing standards of living. Technology enables us to get more out of existing knowledge and prior scientific discoveries, while also generating new insights that enable new inventions. Technology is associated with new jobs, higher incomes, greater wealth, better health, educational improvements, time-saving devices, and many other concrete gains that improve people’s day-to-day lives. The benefits of technology, however, are not evenly distributed, even when an economy is more productive and growing overall. When technology is disruptive, costs and dislocations are shouldered by some more than others, and periods of transition can be difficult.

Theory and experience teach that innovative technology does not automatically improve people’s station and situation merely by virtue of its development. The way technology is deployed and the degree to which gains are shared—in other words, turning technology’s promise into reality without overlooking valid concerns—depends, in meaningful part, on the policy, regulatory, and ethical decisions we make as a society.

Today, these decisions are front and center for artificial intelligence (AI).

AI’s capabilities are remarkable, with profound implications spanning health care, agriculture, financial services, manufacturing, education, energy, and beyond. The latest research is demonstrably pushing AI’s frontier, advancing AI-based reasoning and AI’s performance of complex multistep tasks, and bringing us closer to artificial general intelligence (high-level intelligence and reasoning that allows AI systems to autonomously perform highly complex tasks at or beyond human capacity in many diverse instances and settings). Advanced AI systems, such as AI agents (AI systems that autonomously complete tasks toward identified objectives), are leading to fundamentally new opportunities and ways of doing things, which can unsettle the status quo, possibly leading to major transformations.

In our view, AI should be embraced while preparing for the change it brings. This includes recognizing that the pace and magnitude of AI breakthroughs are faster and more impactful than anticipated. A terrific indication of AI’s promise is the 2024 Nobel Prize in Chemistry, whose winners used AI to “crack the code” of protein structures, “life’s ingenious chemical tools.” At the same time, as AI becomes widely used, guardrails, governance, and oversight should manage risks, safeguard values, and look out for those disadvantaged by disruption.

Government can help fuel the beneficial development and deployment of AI in the United States by encouraging AI research and by shaping a regulatory environment that fosters the adoption of goods, services, practices, processes, and tools that leverage AI.

It starts with a pro-innovation policy agenda. Once the goal of promoting AI is set, the game plan to achieve it must be architected and implemented. Operationalizing policy into concrete progress can be difficult, and it becomes more challenging when new technology raises novel questions infused with subtleties.

Regulatory agencies that determine specific regulatory requirements and enforce compliance play a significant part in adapting and administering regulatory regimes that encourage rather than stifle technology. Pragmatic regulation that is compatible with AI, and workable as applied to AI-led innovation, is instrumental in further unlocking AI’s potential. Regulators should be willing to allow businesses flexibility to deploy AI-centered uses that challenge traditional approaches and conventions. That said, regulators’ critical mission of detecting and preventing harmful behavior should not be cast aside. Properly calibrated governance, guardrails, and oversight that prudently handle misuse and misconduct can support technological advancement and adoption over time.

Regulators can achieve core regulatory objectives, including consumer protection, investor protection, and health and safety, without being anchored to specific regulatory requirements if those requirements—fashioned when agentic and other advanced AI was not contemplated—are inapt in the context of current and emerging AI.

We are not implying that vital governmental interests that are foundational to many regulatory regimes should be jettisoned. Rather, it is about how those interests are best achieved as technology changes, perhaps dramatically. It is about regulating in a way that allows AI to reach its promise while ensuring that essential safeguards are in place to protect persons from wrongdoing, abuses, and harms that could frustrate AI’s real-world potential by undercutting trust in—and acceptance of—AI. It is about fostering a regulatory environment that allows for constructive AI-human collaboration—including using AI agents to help monitor other AI agents while humans remain actively involved in addressing nuances, responding to an AI agent’s unanticipated performance, engaging matters of greatest agentic AI uncertainty, and resolving tough calls that people can uniquely evaluate given all that human judgment embodies.

This requires modernizing regulation—in its design, its detail, its application, and its clarity—so that it works, very practically, in the context of AI by accommodating AI’s capabilities.

Accomplishing this type of regulatory modernity is not easy. It benefits from combining technological expertise with regulatory expertise. When integrated, these dual perspectives help regulatory agencies determine how best to update regulatory frameworks and specific regulatory requirements to accommodate expected and unexpected uses of advanced AI. Even when the underlying regulatory goals do not change, certain decades-old—or newer—regulations may not fit with today’s technology, let alone future technological breakthroughs. In addition, regulatory updates may be justified in light of regulators’ own use of AI to improve regulatory processes and practices, such as using AI agents to streamline permitting, licensing, registration, and other types of approvals.

Regulatory agencies are filled with people who bring to bear valuable experience, knowledge, and skill concerning agency-specific regulatory domains, such as financial services, antitrust, food, pharmaceuticals, agriculture, land use, energy, the environment, and consumer products. That should not change.

But the commissions, boards, departments, and other agencies that regulate so much of the economy and day-to-day life—the administrative state—should have more in-house technological expertise relevant to AI. AI’s capabilities are materially increasing at a rapid clip, so staying on top of what AI can do and how it does it—including understanding leading AI system architecture and imagining how AI might be deployed as it advances toward its frontier—is difficult. Without question, there are individuals across government with impressive technological chops, and regulators have made commendable strides in keeping apprised of technological innovation. Indeed, certain parts of government are inherently technology-focused. Many regulatory agencies are not, however, and even at those agencies, an in-depth understanding of AI is increasingly important.

Regulatory agencies should bring on board more individuals with technology backgrounds from the private sector, academia, research institutions, think tanks, and elsewhere—including computer scientists, physicists, software engineers, AI researchers, cryptographers, and the like.

For example, we envision a regulatory agency’s lawyers working closely with its AI engineers to ensure that regulatory requirements contemplate and factor in AI. Lawyers with specific regulatory knowledge can prompt large language models to gauge how a model interprets legal and regulatory obligations. Doing this systematically, and with a large enough sample size, requires close collaboration with AI engineers to automate the analysis and benchmark the model’s results. AI engineers could also partner with an agency’s regulatory experts to discern whether frontier AI systems are technologically capable of comporting with identified regulatory objectives, and to craft regulatory requirements that account for and accommodate the use of AI in consequential contexts. AI could accelerate various regulatory functions that have typically taken regulators considerable time to perform because they have demanded significant human involvement. To illustrate, regulators could use AI agents to assist in the review of applications for the permits, licenses, and registrations that individuals and businesses must obtain before engaging in certain activities, closing certain transactions, or marketing and selling certain products. Regulatory agencies could augment humans by using AI systems to conduct an initial assessment of applications and other requests against regulatory requirements.
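
To make the lawyer-engineer collaboration described above more concrete, here is a minimal, hypothetical Python sketch of how an agency team might benchmark a language model’s reading of regulatory obligations against expert-labeled answers. The Scenario structure, the ask_model callable, and the permitted/prohibited framing are illustrative assumptions, not a description of any agency’s actual tooling.

```python
"""Illustrative sketch: benchmark how a language model interprets regulatory
obligations against answers labeled by an agency's own experts."""

from dataclasses import dataclass
from typing import Callable


@dataclass
class Scenario:
    question: str          # drafted by the agency's lawyers
    expected_answer: str   # expert-labeled ground truth, e.g. "permitted" or "prohibited"


def benchmark(scenarios: list[Scenario], ask_model: Callable[[str], str]) -> float:
    """Run every scenario through the model and return the share of answers
    that agree with the experts' labels (a crude agreement rate)."""
    agreed = 0
    for s in scenarios:
        prompt = (
            "Answer with a single word, 'permitted' or 'prohibited', based on "
            f"the situation described below.\n\n{s.question}"
        )
        answer = ask_model(prompt).strip().lower()
        if answer == s.expected_answer.lower():
            agreed += 1
    return agreed / len(scenarios) if scenarios else 0.0


def dummy_model(prompt: str) -> str:
    # Stand-in so the sketch runs end to end; in practice this would call
    # whichever large language model the agency has vetted.
    return "permitted"


if __name__ == "__main__":
    sample = [
        Scenario("A firm files the required disclosure one day late. Is the sale permitted?", "prohibited"),
        Scenario("The product meets all labeling rules. Is marketing it permitted?", "permitted"),
    ]
    print(f"Agreement with expert labels: {benchmark(sample, dummy_model):.0%}")
```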

The more regulatory agencies have the knowledge and experience of technologists in-house, the better they will understand cutting-edge AI. When that enriched technological insight is combined with the breadth of subject-matter expertise agencies already possess, regulatory agencies will be well-positioned to modernize regulation in ways that foster innovation while preserving fundamental safeguards. Sophisticated technological know-how can help guide regulators’ decisions about how best to revise specific regulatory features so that they are workable with AI and conducive to technological progress. The technical elements of regulation should be informed by the technical elements of AI to ensure practicable alignment between regulation and AI, allowing AI innovation to flourish without incurring undue risks.

With more in-house technological expertise, we think regulatory agencies will grow increasingly comfortable making the regulatory changes needed to accommodate, if not accelerate, the development and adoption of advanced AI.

Technological progress that propels economic growth takes more than technological capability in and of itself. An administrative state that is responsive to the capabilities of AI—including those on AI’s expanding frontier—could make a big difference in converting AI’s promise into reality, continuing the history of technological breakthroughs that have improved people’s lives for centuries.

Troy A. Paredes




AI Insights

Ascendion Wins Gold as the Artificial Intelligence Service Provider of the Year in 2025 Globee® Awards

  • Awarded Gold for excellence in real-world AI implementation and measurable enterprise outcomes
  • Recognized for agentic AI innovation through ASCENDION AAVA platform, accelerating software delivery and unlocking business value at scale
  • Validated as a category leader in operationalizing AI across enterprise ecosystems—from generative and ethical AI to machine learning and NLP—delivering productivity, transparency, and transformation

BASKING RIDGE, N.J., July 7, 2025 /PRNewswire/ — Ascendion, a leader in AI-powered software engineering, has been awarded Gold as the Artificial Intelligence Service Provider of the Year in the 2025 Globee® Awards for Artificial Intelligence. This prestigious honor recognizes Ascendion’s bold leadership in delivering practical, enterprise-grade AI solutions that drive measurable business outcomes across industries.

The Globee® Awards for Artificial Intelligence celebrate breakthrough achievements across the full spectrum of AI technologies including machine learning, natural language processing, generative AI, and ethical AI. Winners are recognized for setting new standards in transforming industries, enhancing user experiences, and solving real-world problems with artificial intelligence (AI).

“This recognition validates more than our AI capabilities. It confirms the bold vision that drives Ascendion,” said Karthik Krishnamurthy, Chief Executive Officer, Ascendion. “We’ve been engineering the future with AI long before it became a buzzword. Today, our clients aren’t chasing trends; they’re building what’s next with us. This award proves that when you combine powerful AI platforms, cutting-edge technology, and the relentless pursuit of meaningful outcomes, transformation moves from promise to fact. That’s Engineering to the Power of AI in action.”

Ascendion earned this recognition by driving real-world impact with its ASCENDION AAVA platform and agentic AI capabilities, transforming enterprise software development and delivery. This strategic approach enables clients to modernize engineering workflows, reduce technical debt, increase transparency, and rapidly turn AI innovation into scalable, market-ready solutions. Across industries like banking and financial services, healthcare and life sciences, retail and consumer goods, high-tech, and more, Ascendion is committed to helping clients move beyond experimentation to build AI-first systems that deliver real results.

“The 2025 winners reflect the innovation and forward-thinking mindset needed to lead in AI today,” said San Madan, President of the Globee® Awards. “With organizations across the globe engaging in data-driven evaluations, this recognition truly reflects broad industry endorsement and validation.”

About Ascendion

Ascendion is a leading provider of AI-powered software engineering solutions that help businesses innovate faster, smarter, and with greater impact. We partner with over 400 Global 2000 clients across North America, APAC, and Europe to tackle complex challenges in applied AI, cloud, data, experience design, and workforce transformation. Powered by 11,000+ experts, a bold culture, and our proprietary Engineering to the Power of AI (EngineeringAI) approach, we deliver outcomes that build trust, unlock value, and accelerate growth. Headquartered in New Jersey, with 40+ global offices, Ascendion combines scale, agility, and ingenuity to engineer what’s next. Learn more at https://ascendion.com.

Engineering to the Power of AI™, AAVA™, EngineeringAI, Engineering to Elevate Life™, DataAI, ExperienceAI, Platform EngineeringAI, Product EngineeringAI, and Quality EngineeringAI are trademarks or service marks of Ascendion®. AAVA™ is pending registration. Unauthorized use is strictly prohibited.

About the Globee® Awards
The Globee® Awards present recognition in ten programs and competitions, including the Globee® Awards for Achievement, Globee® Awards for Artificial Intelligence, Globee® Awards for Business, Globee® Awards for Excellence, Globee® Awards for Cybersecurity, Globee® Awards for Disruptors, Globee® Awards for Impact, Globee® Awards for Innovation (also known as Golden Bridge Awards®), Globee® Awards for Leadership, and the Globee® Awards for Technology. To learn more about the Globee Awards, please visit the website: https://globeeawards.com.

SOURCE Ascendion





AI Insights

Overcoming the Traps that Prevent Growth in Uncertain Times

Published on July 7, 2025

Today, with uncertainty a seemingly permanent condition, executives need to weave adaptability, resilience, and clarity into their operating plans. The best executives will implement strategies that don’t just sustain their businesses; they enable growth.







AI Insights

AI-driven CDR: The shield against modern cloud threats

Cloud computing is the backbone of modern enterprise innovation, but with speed and scalability comes a growing storm of cyber threats. Cloud adoption continues to skyrocket. In fact, by 2028, cloud-native platforms will serve as the foundation for more than 95% of new digital initiatives. The traditional perimeter has all but disappeared. The result? A significantly expanded attack surface and a growing volume of threats targeting cloud workloads.

Studies tell us that 80% of security exposures now originate in the cloud, and threats targeting cloud environments have recently increased by 66%, underscoring the urgency for security strategies purpose-built for this environment. The reality for organizations is stark. Legacy tools designed for static, on-premises architectures can’t keep up. What’s needed is a new approach—one that’s intelligent, automated, and cloud-native. Enter AI-driven cloud detection and response (CDR).

Why legacy tools fall short

Traditional security approaches leave organizations exposed. Posture management has been the foundation of cloud security, helping teams identify misconfigurations and enforce compliance. Security risks, however, don’t stop at misconfigurations or vulnerabilities.

  • Limited visibility: Cloud assets are ephemeral, spinning up and down in seconds. Legacy tools lack the telemetry and agility to provide continuous, real-time visibility.
  • Operational silos: Disconnected cloud and SOC operations create blind spots and slow incident response.
  • Manual burden: Analysts are drowning in alerts. Manual triage can’t scale with the velocity and complexity of cloud-native threats.
  • Delayed response: In today’s landscape, every second counts, yet 60% of organizations take longer than four days to resolve cloud security issues.

The AI-powered CDR advantage

AI-powered CDR solves these challenges by combining the speed of automation with the intelligence of machine learning—offering CISOs a modern, proactive defense. Organizations need more than static posture security. They need real-time prevention.

Real-time threat detection and prevention: AI engines analyze vast volumes of telemetry in real time—logs, flow data, behavior analytics. The full context this provides enables the detection and prevention of threats as they unfold. Organizations with AI-enhanced detection reduced breach lifecycle times by more than 100 days.
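
The article does not explain how such engines work internally, but a toy sketch can make the idea concrete. The Python below is a minimal illustration, not the Cortex Cloud CDR implementation: it keeps a rolling baseline of one telemetry metric per workload and flags events that deviate sharply from it. The metric, thresholds, and workload names are assumptions for illustration.

```python
"""Minimal sketch of real-time telemetry scoring: maintain a rolling per-workload
baseline of an activity metric and flag events that deviate sharply from it."""

from collections import defaultdict, deque
from statistics import mean, pstdev

WINDOW = 200       # events of history kept per workload
THRESHOLD = 4.0    # deviations (in standard deviations) that count as anomalous

history: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))


def score_event(workload_id: str, outbound_bytes: float) -> bool:
    """Return True if this event looks anomalous relative to the workload's baseline."""
    past = history[workload_id]
    is_anomaly = False
    if len(past) >= 30:                       # require some baseline before judging
        mu, sigma = mean(past), pstdev(past)
        if sigma > 0 and (outbound_bytes - mu) / sigma > THRESHOLD:
            is_anomaly = True
    past.append(outbound_bytes)
    return is_anomaly


# Example: a workload that suddenly sends far more data outbound than usual.
for i in range(100):
    score_event("web-frontend-7f9c", 1_000 + (i % 50))   # normal traffic builds the baseline
print(score_event("web-frontend-7f9c", 250_000))          # True: flagged as anomalous
```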

Unified security operations: CDR solutions bridge the gap between cloud and SOC teams by centralizing detection and response across environments, which eliminates redundant tooling and fosters collaboration, both essential when dealing with fast-moving incidents.

Context-rich insights: Modern CDR solutions deliver actionable insights enriched with context—identifying not just the issue, but why it matters. This empowers teams to prioritize effectively, slashing false positives and accelerating triage.

Intelligent automation: From context enrichment to auto-containment of compromised workloads, AI-enabled automation reduces the manual load on analysts and improves response rates.
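
As a rough illustration of what such automation might look like, the Python sketch below auto-contains a flagged workload only when model confidence is high and the blast radius of isolation is acceptable, and routes every alert to an analyst either way. The quarantine_workload and open_analyst_ticket functions are hypothetical stand-ins, not any vendor’s API.

```python
"""Hypothetical auto-containment playbook: contain high-confidence detections
automatically, keep humans in the loop for everything."""

from dataclasses import dataclass


@dataclass
class Alert:
    workload_id: str
    technique: str        # e.g. "credential-theft", "crypto-mining"
    confidence: float     # model-assigned confidence, 0.0 to 1.0
    is_production: bool   # context enrichment: does this workload serve customers?


AUTO_CONTAIN_THRESHOLD = 0.9


def quarantine_workload(workload_id: str) -> None:
    # Placeholder for a real cloud or EDR isolation call.
    print(f"[auto] isolating {workload_id} from the network")


def open_analyst_ticket(alert: Alert) -> None:
    # Placeholder for a real ticketing or case-management integration.
    print(f"[human] ticket opened for {alert.workload_id}: {alert.technique} "
          f"(confidence {alert.confidence:.0%})")


def respond(alert: Alert) -> None:
    """Contain automatically only when confidence is high and isolating the
    workload is low-impact; otherwise leave containment to an analyst."""
    if alert.confidence >= AUTO_CONTAIN_THRESHOLD and not alert.is_production:
        quarantine_workload(alert.workload_id)
    open_analyst_ticket(alert)   # humans stay in the loop either way


respond(Alert("batch-worker-42", "crypto-mining", confidence=0.97, is_production=False))
respond(Alert("payments-api-3", "credential-theft", confidence=0.81, is_production=True))
```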

The path forward

Organizations face unprecedented pressure to secure fast-changing cloud environments without slowing innovation. Relying on outdated security stacks is no longer viable. Cortex Cloud CDR from Palo Alto Networks delivers the speed, context, and intelligence required to defend against the evolving threat landscape. With over 10,000 detectors and 2,600+ machine learning models, Cortex Cloud CDR identifies and prevents high-risk threats with precision.

It’s time to shift from reactive defense to proactive protection. AI-driven CDR isn’t just another tool—it’s the cornerstone of modern cloud security strategy. And for CISOs, it’s the shield your organization needs to stay resilient in the face of tomorrow’s threats.




