A False Confidence in the EU AI Act: Epistemic Gaps and Bureaucratic Traps


On July 10, 2025, the European Commission released the final draft of the General-Purpose Artificial Intelligence (GPAI) Code of Practice, a “code designed to help industry comply with the AI Act’s rules.” The Code has been under development since October 2024, when the iterative drafting process began after a kick-off plenary in September 2024. The Commission had planned to release the final draft by May 2, 2025, and the subsequent delay has sparked widespread speculation, ranging from concerns about industry lobbying to deeper, more ideological tensions between proponents of innovation and proponents of regulation.

However, beyond these narratives, a more fundamental issue emerges: an epistemic and conceptual disconnect at the core of the EU Artificial Intelligence Act (EU AI Act), particularly in its approach to “general-purpose AI” (GPAI). The current version of the Code, which comprises three chapters covering “Transparency,” “Copyright,” and “Safety and Security,” does not address these core problems.

According to Art. 3(63) of the EU AI Act, a “general-purpose AI model” is:

“an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”

What the Act refers to here aligns with what the AI research community calls a foundation model, a term that, while not perfect, is widely used to describe large-scale models trained on broad datasets to support multiple tasks. Examples of such foundation models include OpenAI’s GPT series, Microsoft’s Magma, and Google’s Gemini and BERT. These models act as bases that can be fine-tuned or adapted for particular use cases.

Crucially, the term “general-purpose AI” (GPAI) did not emerge from within the AI research community. It is a legal construct introduced by the EU AI Act to retroactively define certain types of AI systems. Prior to this, “GPAI” had little to no presence in scholarly discourse. In this sense, the Act not only assigned a regulatory meaning to the term, but also effectively created a new category – one that risks distorting how such systems are actually understood and developed in practice.

The key concepts embedded in the Act reflect a certain epistemic confidence. However, as is the case with GPAI, the terminology does not emerge organically from within the AI field. Instead, GPAI, as defined in the Act, represents an external attempt to impose legal clarity onto a domain that is continuing to evolve. This definitional approach offers a false sense of epistemic certainty and stability, implying that AI systems can be easily classified, analyzed, and understood.

By creating a regulatory label with a largely fixed meaning, the Act constructs a category that may never have been epistemologically coherent to begin with. In doing so, it imposes a rigid legal structure onto a technological landscape characterized by ongoing transformation and ontological and epistemological uncertainty.

The limits of a risk-based framework

The EU AI Act takes a risk-based regulatory approach. Article 3(2) defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm.” This definition draws from classical legal and actuarial traditions, where it is assumed that harms are foreseeable, probabilities can be reasonably assigned, and risks can be assessed accordingly. Yet AI – particularly foundation models – complicates this framework.
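
To make the assumption concrete, the following is a minimal sketch of the classical, actuarial-style calculation that this definition presupposes: risk as the sum of probability times severity over a list of known hazards. The hazard names and figures below are purely hypothetical illustrations, not anything specified by the Act; the point of the argument that follows is that foundation models rarely come with inputs of this kind.

```python
# Minimal sketch of the classical risk calculus assumed by Art. 3(2):
# risk = probability of harm x severity of harm, summed over known hazards.
# The hazards and numbers below are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass
class Hazard:
    name: str
    probability: float  # assumed to be estimable in advance (0 to 1)
    severity: float     # assumed to be quantifiable, e.g. cost of the harm


def expected_risk(hazards: list[Hazard]) -> float:
    """Classical actuarial risk score: sum of probability times severity."""
    return sum(h.probability * h.severity for h in hazards)


# This only works when harms are foreseeable and probabilities assignable,
# which is precisely the premise the article argues breaks down for
# foundation models deployed in open-ended socio-technical contexts.
hazards = [
    Hazard("mislabelled loan application", probability=0.02, severity=5_000),
    Hazard("safety-critical sensor failure", probability=0.001, severity=250_000),
]
print(expected_risk(hazards))  # 350.0
```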

Foundation models are characterized by features that are difficult to quantify, including their probabilistic and augmentative nature and their interaction with complex socio-technical environments, in which harms cannot be clearly predicted. As a result, traditional risk assessment approaches cannot adequately account for their behavior or impact and can give regulators a false sense of confidence.

The legal and epistemic tension is immediately apparent. Law requires a level of certainty – but the nature of AI strongly challenges that very prerequisite. Yet the logic of law remains orthodox, producing an epistemic mismatch between the assumptions embedded in legal instruments and the realities of the technologies they seek to govern.

The EU AI Act’s treatment of “systemic risk” also reflects the influence of the contemporary AI Safety discourse. The very existence of a dedicated “Safety and Security” chapter in the GPAI Code of Practice signals an awareness of debates around the so-called “long-term risks” of advanced models. Terms like systemic risk echo concerns raised by AI Safety researchers: worries about uncontrollable systems, cascading failures, and potential large-scale harms. Yet, crucially, the Act stops short of engaging with the more fundamental concepts of this discourse – such as alignment, control, or corrigibility. Instead, systemic risk is invoked as if it were already a stable regulatory concept, when in reality it is still contested in both technical and governance circles.

The Act’s description of systemic risk, as provided in Art. 3(65) and Recital 110, highlights this conceptual ambiguity. The Act’s definition refers to risks stemming from the “high-impact capabilities of general-purpose AI models,” but both its origins and implications remain unclear. As presented in the Act, systemic risk seems to be rooted in the AI model’s technical attributes: “Systemic risks should be understood to increase with model capabilities and model reach, can arise along the entire lifecycle of the model, and are influenced by conditions of misuse, model reliability, model fairness and model security, the level of autonomy of the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to remove guardrails and other factors.”

The passage borrows the vocabulary of AI Safety, but without clarifying how these terms connect to actual deployment and socio-technical contexts. The causal relationship between a model’s “capabilities” and systemic risk is assumed rather than demonstrated. By framing systemic risk primarily as a technical property of models, the Act overlooks the crucial role of deployment environments, institutional oversight, and collective governance in shaping real-world harms. This matters because a regulation that treats systemic risk as an intrinsic property of models will default to technical compliance measures, while overlooking the institutional and societal conditions that actually determine whether risks materialize.

Max Weber’s analysis of bureaucracy helps to explain why the aforementioned mismatch between the assumptions embedded in legal instruments and the realities of the technologies was to be expected. Weber described bureaucracy as an “iron cage” of rationalization, reliant on formal rules, hierarchies, and categorical clarity. Bureaucracies require clear categorization, otherwise they cannot function effectively.

The EU AI Act’s precise definitions – such as those for “provider” (Art. 3(3)), “deployer” (Art. 3(4)), and especially “general-purpose AI model” (Art. 3(63)) – reflect this bureaucratic logic. Yet, as Weber warned, this form of rationality can lead to overly rigid and formalized patterns of thought. In treating AI categories as scientifically settled, the Act exemplifies legal formalism that may hinder adaptive governance. The bureaucratic need for clear rules at this stage essentially opposes the anticipated regulatory clarity and instead creates an epistemic gap between the law itself and the state of the art in the area it aims to regulate. For policymakers, the problem is not an academic one: rules that freeze categories too early risk locking Europe into an outdated conceptual framework that will be difficult to revise as AI research advances.

Thomas Kuhn’s theory of scientific revolutions offers further insight. Kuhn described “normal science” as puzzle-solving within “paradigms” – established frameworks that define what counts as a valid question or method. Paradigm shifts occur only when anomalies accumulate and existing frameworks collapse. Today, AI research is undergoing similar developments, with innovations like large language models disrupting prior paradigms. Legal systems, however, operate within their own paradigms, which prioritize stability and continuity. As such, they necessarily lag behind the rapidly evolving world of AI.

Kuhn observed that paradigm shifts are disruptive, unsettling established categories and methods. Law, by contrast, is conservative and resistant to epistemic upheaval. Thus, the scientific paradigm in flux collides with legal orthodoxy’s demand for stable definitions. Although terms like general-purpose AI and systemic risk, and many others, appear fixed within the EU AI Act, they remain unsettled, contested, and context-dependent in practice.

A revealing example comes from a recent talk at the University of Cambridge, where Professor Stuart Russell defined GPAI not as a present reality but as an aspirational concept – a model capable of quickly learning high-quality behavior in any task environment. His description aligns more closely with the notion of “Artificial General Intelligence” than with foundation models such as the GPT series. This diverges sharply from the EU AI Act’s framing, highlighting the epistemic gap between regulatory and scientific domains.

The lesson here is that the Act risks legislating yesterday’s paradigm into tomorrow’s world. Instead of anchoring regulation in fixed categories, policymakers need governance mechanisms that anticipate conceptual change and allow for iterative revision, relying on multidisciplinary monitoring bodies rather than static – and in this case problematic – definitions. Ambiguity in core concepts and definitions, the Act’s fragmented character, and an often unconvincing discourse reveal the limits of conventional regulatory logic when applied to emerging technologies. Neither the EU AI Act nor the GPAI Code of Practice was developed within an anticipatory governance framework, which would better accommodate AI’s continuously evolving, transformative nature.

The OECD’s work on anticipatory innovation governance illustrates how such frameworks can function: by combining foresight, experimentation, and adaptive regulation to prepare for multiple possible futures. Experiments in Finland, conducted in collaboration with the OECD and the European Commission, show that anticipatory innovation governance can be embedded directly into core policymaking processes such as budgeting, strategy, and regulatory design, rather than treated as a peripheral exercise. This approach stands in sharp contrast to the EU AI Act’s reliance on fixed categories and definitions: instead of legislating conceptual closure too early, it builds flexibility and iterative review into the very processes of governance. In the AI domain, the OECD’s paper Steering AI’s Future applies these anticipatory principles directly to questions of AI governance.

From this perspective, the delay in releasing the GPAI Code of Practice should not have been seen as a moment of conflict, but rather as an opportunity to consider a more appropriate framework for governing emerging technologies – one that accepts uncertainty as the norm, relies on adaptive oversight, and treats categories as provisional rather than definitive.




When you call Donatos, you might be talking to AI


If you call Donatos Pizza to place an order, you might be speaking with artificial intelligence.

The Columbus-based pizza chain announced that it has completed a systemwide rollout of voice-ordering technology powered by Revmo AI. The company says the system is now live at all 174 Donatos locations and has already handled more than 301,000 calls since June.

Donatos Reports Higher Order Accuracy, More Efficient Operations

According to Donatos, the AI system has converted 71% of calls into orders, up from 58% before the rollout, and has achieved 99.9% order accuracy. The company also says the switch freed up nearly 5,000 hours of staff time in August alone, allowing employees to focus more on preparing food and serving in-store customers.

“Our focus was simple: deliver a better guest experience on the phone and increase order conversions,” Kevin King, President of Donatos Pizza, said in a statement.

Ben Smith, Donatos’ Director of Operations Development, said the change provided immediate relief on the phones, allowing staff to redirect time to order accuracy and hospitality.

Donatos said it plans to expand the system to handle more types of calls and to make greater use of its centralized answering center. The company did not say whether it plans to reduce call center staffing or rely more heavily on automation in the future.

Other Chains Report Trouble with AI Ordering Systems

Taco Bell recently started re-evaluating its use of AI to take orders in the drive-thru after viral videos exposed its flaws. In one well-known video, a man crashed the system by ordering 18,000 cups of water. The company is now looking at how AI can help during busy times and when it’s appropriate for a human employee to step in and take the order.

Last year, McDonald’s ended its AI test in 100 restaurants after similar problems surfaced. In one case, AI added bacon to a customer’s ice cream. A McDonald’s executive told the BBC that artificial intelligence will still be part of the chain’s future.




First gallbladder surgery performed with help of AI-guided robot


The first autonomous surgery guided by artificial intelligence was performed in Chile. Photo courtesy of Levita Magnetics

SANTIAGO, Chile, Sept. 12 (UPI) — Surgeons in Chile performed a pioneering gallbladder operation with the support of the MARS platform, a system that combines precision robotic technology with artificial intelligence.

For the surgery, performed on Monday, one of the robot’s arms held a magnet that moves instruments inside the patient, while the other arm carried an autonomous surgical camera guided by AI.

The camera automatically zooms in and out and follows the surgeon’s movements, giving the medical team a clearer, uninterrupted view throughout the procedure without manual adjustments.

Normally, these types of surgeries require an assistant to adjust the camera at the surgeon’s request. MARS advanced on that model by allowing the surgeon to control the camera directly with hand or foot movements.

“It’s not that we didn’t already have a good view of what we were doing, but this is an added advantage. The camera follows the surgeon’s movements and can also be stopped and controlled manually, but it is trained to work autonomously,” Dr. Ricardo Funke, head of surgery at Clínica Las Condes, who led the procedure, told UPI.

“This allows us to maintain a stable, high-quality view and to see with great precision what we’re doing during the operation.”

He added: “This is the first case in the world in which we have used artificial intelligence technology that is proven and safe for patients. Years ago, it was unthinkable that AI could be part of our daily work, and it is an area that will continue to advance.”

Dr. Matías Sepúlveda, president of the Chilean Society of Bariatric and Metabolic Surgery and a digestive surgeon at the private Clínica Las Condes, said the new technique avoids making additional incisions in an operation that is already minimally invasive.

With these advances, the goal is to have a positive impact on patients.

“That means less pain, faster recovery and lower costs for health institutions by reducing the number of assistants or surgeons needed during the operation. This is only the beginning, but it will have a major impact on what we do,” he added.

The technology was developed by Levita Magnetics, a Chilean medical startup based in Mountain View, Calif., that specializes in minimally invasive solutions for assisted surgery and magnetic technology.

In 2023, the company received authorization from the U.S. Food and Drug Administration for its MARS platform to be used in abdominal surgeries such as gastric sleeve procedures. In June, its use was expanded to bariatric repairs and hernia surgeries, which are among the most common procedures.

“We decided to perform the surgery in Chile because it is our base of operations. This is the first surgery of its kind, and our goal is to expand the use of AI everywhere our MARS robot is available, in both Chile and the United States,” Dr. Alberto Rodríguez, founder of Levita Magnetics, told UPI.

He added that this milestone is only the beginning of surgical autonomy. “Robots, which embody AI, will take on an increasingly important role in different stages of operations, allowing for safer procedures, more efficient surgeons and, ultimately, better outcomes for patients.”




NSF Launches Effort to Establish National AI Research Operations Center


The U.S. National Science Foundation (NSF) has announced a solicitation to establish a National Artificial Intelligence Research Resource Operations Center (NAIRR-OC), a move aimed at transitioning the National AI Research Resource (NAIRR) from its pilot phase to a sustainable national program.

Launched in 2024 as a public-private partnership, the NAIRR pilot provides researchers with access to advanced computational, data, model, and training resources. Despite rapid AI progress, many researchers and educators still lack the tools needed to explore fundamental AI questions and train students effectively.

So far, the pilot has connected more than 400 research teams with computing platforms, datasets, software, and models, driving innovation across agriculture, drug discovery, cybersecurity, and education. Backed by 14 federal agencies and 28 industry and nonprofit partners, it has accelerated U.S. leadership in responsible AI development.

“We look forward to continued collaboration with private sector and agency partners, whose contributions have been critical in demonstrating the innovation and scientific impact that comes when critical AI resources are made accessible to research and education communities across the country,” Katie Antypas, director of the NSF Office of Advanced Cyberinfrastructure, said.

The new operations center will expand these capabilities by creating a centralized framework for governance, integrating advanced computing and data resources, and launching a web portal for streamlined access. It also aims to strengthen outreach and collaboration within the AI research community.


