AI Research
A False Confidence in the EU AI Act: Epistemic Gaps and Bureaucratic Traps

On July 10, 2025, the European Commission released the final draft of the General-Purpose Artificial Intelligence (GPAI) Code of Practice, a “code designed to help industry comply with the AI Act’s rules.” The Code has been under development since October 2024, when the iterative drafting process began after a kick-off plenary in September 2024. The Commission had planned to release the final draft by May 2, 2025, and the subsequent delay has sparked widespread speculation – ranging from concerns about industry lobbying to deeper, more ideological tensions between proponents of innovation and regulation.
However, beyond these narratives, a more fundamental issue emerges: an epistemic and conceptual disconnect at the core of the EU Artificial Intelligence Act (EU AI Act), particularly in its approach to “general-purpose AI” (GPAI). The current version, which includes three chapters covering “Transparency,” “Copyright,” and “Safety and Security,” does not address these core problems.
The legal invention of general-purpose AI
According to Art. 3(63) of the EU AI Act, a “general-purpose AI model” is:
“an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”
What the Act describes here aligns with what the AI research community calls a foundation model – a term that, while not perfect, is widely used to describe large-scale models trained on broad datasets to support multiple tasks. Examples of such foundation models include OpenAI’s GPT series (the models underlying ChatGPT), Microsoft’s Magma, and Google’s Gemini and BERT. These models act as bases that can be fine-tuned or adapted for particular use cases.
Crucially, the term “general-purpose AI” (GPAI) did not emerge from within the AI research community. It is a legal construct introduced by the EU AI Act to retroactively define certain types of AI systems. Prior to this, ‘GPAI’ had little to no presence in scholarly discourse. In this sense, the Act not only assigned a regulatory meaning to the term, but also effectively created a new category – one that risks distorting how such systems are actually understood and developed in practice.
The key concepts embedded in the Act reflect a certain epistemic confidence. However, as is the case with GPAI, the terminology does not emerge organically from within the AI field. Instead, GPAI, as defined in the Act, represents an external attempt to impose legal clarity onto a domain that is continuing to evolve. This definitional approach offers a false sense of epistemic certainty and stability, implying that AI systems can be easily classified, analyzed, and understood.
By creating a regulatory label with a largely fixed meaning, the Act constructs a category that may never have been epistemologically coherent to begin with. In doing so, it imposes a rigid legal structure onto a technological landscape characterized by ongoing transformation and ontological and epistemological uncertainty.
The limits of a risk-based framework
The EU AI Act takes a risk-based regulatory approach. Article 3(2) defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm.” This definition draws from classical legal and actuarial traditions, where it is assumed that harms are foreseeable, probabilities can be reasonably assigned, and risks can be assessed accordingly. Yet AI – particularly foundation models – complicates this framework.
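In classical actuarial terms, the quoted definition can be read as the familiar expected-harm calculation sketched below. This is an illustrative formalisation of that tradition, not a formula that appears anywhere in the Act.

```latex
% Illustrative actuarial reading of risk as "probability combined with severity".
% Not taken from the EU AI Act; the scenario set and its parameters are assumed.
\[
  R \;=\; \sum_{i \in \mathcal{H}} p_i \, s_i
\]
% \mathcal{H}: an enumerable set of foreseeable harm scenarios
% p_i: the probability that scenario i occurs
% s_i: the severity of scenario i
```

Every term in this formulation presumes that harm scenarios can be enumerated in advance and that their probabilities and severities can be estimated with reasonable confidence – precisely the presumptions that foundation models strain.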
Foundation models are characterized by features that are difficult to quantify, including their probabilistic and augmentative nature and their interaction with complex socio-technical environments in which harms cannot be clearly predicted. As a result, traditional risk assessment approaches cannot adequately account for their behavior or impact and can give regulators a false sense of confidence.
The legal and epistemic tension is immediately apparent. Law requires a level of certainty – but the nature of AI strongly challenges that very prerequisite. Yet the logic of law remains orthodox, producing an epistemic mismatch between the assumptions embedded in legal instruments and the realities of the technologies they seek to govern.
The EU AI Act’s treatment of “systemic risk” also reflects the influence of the contemporary AI Safety discourse. The very existence of a dedicated “Safety and Security” chapter in the GPAI Code of Practice signals an awareness of debates around the so-called “long-term risks” of advanced models. Terms like systemic risk echo concerns raised by AI Safety researchers: worries about uncontrollable systems, cascading failures, and potential large-scale harms. Yet, crucially, the Act stops short of engaging with the more fundamental concepts of this discourse – such as alignment, control, or corrigibility. Instead, systemic risk is invoked as if it were already a stable regulatory concept, when in reality it is still contested in both technical and governance circles.
The Act’s description of systemic risk, as provided in Art. 3(65) and Recital 110, highlights this conceptual ambiguity. The definition refers to risks stemming from the “high-impact capabilities of general-purpose AI models,” but both the origins and the implications of those risks remain unclear. As presented in the Act, systemic risk seems to be rooted in the AI model’s technical attributes: “Systemic risks should be understood to increase with model capabilities and model reach, can arise along the entire lifecycle of the model, and are influenced by conditions of misuse, model reliability, model fairness and model security, the level of autonomy of the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to remove guardrails and other factors.”
The passage borrows the vocabulary of AI Safety, but without clarifying how these terms connect to actual deployment and socio-technical contexts. The causal relationship between a model’s “capabilities” and systemic risk is assumed rather than demonstrated. By framing systemic risk primarily as a technical property of models, the Act overlooks the crucial role of deployment environments, institutional oversight, and collective governance in shaping real-world harms. This matters because a regulation that treats systemic risk as an intrinsic property of models will default to technical compliance measures, while overlooking the institutional and societal conditions that actually determine whether risks materialize.
The bureaucratic trap of legal certainty and the need for anticipatory governance of emerging technologies
Max Weber’s analysis of bureaucracy helps to explain why the aforementioned mismatch between the assumptions embedded in legal instruments and the realities of the technologies was to be expected. Weber described bureaucracy as an “iron cage” of rationalization, reliant on formal rules, hierarchies, and categorical clarity. Bureaucracies require clear categorization; otherwise, they cannot function effectively.
The EU AI Act’s precise definitions – such as those for “provider” (Art. 3(3)), “deployer” (Art. 3(4)), and especially “general-purpose AI model” (Art. 3(63)) – reflect this bureaucratic logic. Yet, as Weber warned, this form of rationality can lead to overly rigid and formalized patterns of thought. In treating AI categories as scientifically settled, the Act exemplifies a legal formalism that may hinder adaptive governance. The bureaucratic need for clear rules at this stage works against the very regulatory clarity it is meant to deliver, creating instead an epistemic gap between the law and the state of the art in the field it aims to regulate. For policymakers, the problem is not an academic one: rules that freeze categories too early risk locking Europe into an outdated conceptual framework that will be difficult to revise as AI research advances.
Thomas Kuhn’s theory of scientific revolutions offers further insight. Kuhn described “normal science” as puzzle-solving within “paradigms” – established frameworks that define what counts as a valid question or method. Paradigm shifts occur only when anomalies accumulate and existing frameworks collapse. Today, AI research is undergoing similar developments, with innovations like large language models disrupting prior paradigms. Legal systems, however, operate within their own paradigms, which prioritize stability and continuity. As such, they necessarily lag behind the rapidly evolving world of AI.
Kuhn observed that paradigm shifts are disruptive, unsettling established categories and methods. Law, by contrast, is conservative and resistant to epistemic upheaval. Thus, the scientific paradigm in flux collides with legal orthodoxy’s demand for stable definitions. Although terms like general-purpose AI and systemic risk, and many others, appear fixed within the EU AI Act, they remain unsettled, contested, and context-dependent in practice.
A revealing example comes from a recent talk at the University of Cambridge, where Professor Stuart Russell defined GPAI not as a present reality but as an aspirational concept – a model capable of quickly learning high-quality behavior in any task environment. His description aligns more closely with the notion of “Artificial General Intelligence” than with foundation models such as the GPT series. This diverges sharply from the EU AI Act’s framing, highlighting the epistemic gap between regulatory and scientific domains.
The lesson here is that the Act risks legislating yesterday’s paradigm into tomorrow’s world. Instead of anchoring regulation in fixed categories, policymakers need governance mechanisms that anticipate conceptual change and allow for iterative revision, relying on multidisciplinary monitoring bodies rather than static – and in this case problematic – definitions. Ambiguity in core concepts and definitions, the Act’s fragmented character, and an often unconvincing discourse reveal the limits of conventional regulatory logic when applied to emerging technologies. Neither the EU AI Act nor the GPAI Code of Practice was developed within an anticipatory governance framework, which would better accommodate AI’s continuously evolving, transformative nature.
The OECD’s work on anticipatory innovation governance illustrates how such frameworks can function: by combining foresight, experimentation, and adaptive regulation to prepare for multiple possible futures. Experiments in Finland, conducted in collaboration with the OECD and the European Commission, show that anticipatory innovation governance can be embedded directly into core policymaking processes such as budgeting, strategy, and regulatory design, rather than treated as a peripheral exercise. This approach stands in sharp contrast to the EU AI Act’s reliance on fixed categories and definitions: instead of legislating conceptual closure too early, it builds flexibility and iterative review into the very processes of governance. In the AI domain, the OECD’s paper Steering AI’s Future applies these anticipatory principles directly to questions of AI governance.
From this perspective, the delay in releasing the GPAI Code of Practice should not have been seen as a moment of conflict, but rather as an opportunity to consider a more appropriate framework for governing emerging technologies – one that accepts uncertainty as the norm, relies on adaptive oversight, and treats categories as provisional rather than definitive.
AI Research
UMD Researchers Leverage AI to Enhance Confidence in HPV Vaccination

Human papillomavirus (HPV) vaccination represents a critical breakthrough in cancer prevention, yet its uptake among adolescents remains disappointingly low. Despite overwhelming evidence supporting the vaccine’s safety and efficacy against multiple types of cancer—including cervical, anal, and oropharyngeal cancers—only about 61% of teenagers aged 13 to 17 in the United States have received the recommended doses. Even more concerning are the lower vaccination rates among younger children, for whom the vaccine is first recommended at age nine. Addressing this gap between scientific consensus and public hesitancy has become a focal point for an innovative research project spearheaded by communication expert Professor Xiaoli Nan at the University of Maryland (UMD).
The project’s core ambition involves harnessing artificial intelligence (AI) to transform the way vaccine information is communicated to parents, aiming to dismantle the barriers that fuel hesitancy. With a robust $2.8 million grant from the National Cancer Institute, part of the National Institutes of Health, Nan and her interdisciplinary team are developing a personalized, AI-driven chatbot. This technology is engineered not only to provide accurate health information but to adapt dynamically to parents’ individual concerns, beliefs, and communication preferences in real time—offering a tailored conversational experience that traditional brochures and websites simply cannot match.
HPV vaccination has long struggled with public misconceptions, stigma, and misinformation that discourage uptake. A significant factor behind the reluctance is tied to the vaccine’s association with a sexually transmitted infection, which prompts some parents to believe their children are too young for the vaccine or that vaccination might imply premature engagement with sexual activity. This misconception, alongside a lack of tailored communication strategies, has contributed to persistent disparities in vaccination rates. These disparities are especially pronounced among men, individuals with lower educational attainment, and those with limited access to healthcare, as Professor Cheryl Knott, a public health behavioral specialist at UMD, highlights.
Unlike generic informational campaigns, the AI chatbot leverages cutting-edge natural language processing (NLP) to simulate nuanced human dialogue. However, it does so without succumbing to the pitfalls of generative AI models, such as ChatGPT, which can sometimes produce inaccurate or misleading answers. Instead, the system draws on large language models to generate a comprehensive array of possible responses. These are then rigorously curated and vetted by domain experts before deployment, ensuring that the chatbot’s replies remain factual, reliable, and sensitive to users’ needs. When interacting live, the chatbot analyzes parents’ input in real time, selecting the most appropriate response from this trusted set, thereby balancing flexibility with accuracy.
This “middle ground” model, as described by Philip Resnik, an MPower Professor affiliated with UMD’s Department of Linguistics and Institute for Advanced Computer Studies, preserves the flexibility of conversational AI while instituting “guardrails” to maintain scientific integrity. The approach avoids the rigidity of scripted chatbots that deliver canned, predictable replies; simultaneously, it steers clear of the “wild west” environment of fully generative chatbots, where the lack of control can lead to misinformation. Instead, it offers an adaptive yet responsible communication tool, capable of engaging parents on their terms while preserving public health objectives.
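In practical terms, this “guardrailed” design amounts to retrieval over a bank of expert-approved replies rather than free-form generation at runtime. The sketch below illustrates one way such a system could be wired together; the response bank, embedding model, and similarity threshold are illustrative assumptions, not details of the UMD team’s actual implementation.

```python
from sentence_transformers import SentenceTransformer, util

# Expert-vetted response bank: every reply a parent can receive has been reviewed
# before deployment. The texts here are illustrative placeholders, not UMD content.
VETTED_RESPONSES = [
    "Large safety studies covering millions of doses have found the HPV vaccine to be very safe.",
    "The vaccine is recommended starting at age 9 because it is most effective well before any exposure to the virus.",
    "HPV vaccination helps prevent several cancers later in life, including cervical and oropharyngeal cancers.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf embedding model
response_vecs = model.encode(VETTED_RESPONSES, convert_to_tensor=True)

def reply(parent_message: str, min_similarity: float = 0.35) -> str:
    """Select the closest expert-approved reply; never generate free text at runtime."""
    query_vec = model.encode(parent_message, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, response_vecs)[0]
    best = int(scores.argmax())
    if float(scores[best]) < min_similarity:
        # No vetted reply fits well enough: hand off rather than improvise an answer.
        return "That is a good question for your pediatrician; I can point you to CDC resources if that helps."
    return VETTED_RESPONSES[best]

print(reply("Isn't my 9-year-old too young for this vaccine?"))
```

Because the language model’s role is confined to drafting candidate replies offline, anything the chatbot says in a live conversation has already passed expert review, which is what separates this design from a fully generative system.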
The first phase of this ambitious experiment emphasizes iterative refinement of the chatbot via a user-centered design process. This involves collecting extensive feedback from parents, healthcare providers, and community stakeholders to optimize the chatbot’s effectiveness and cultural sensitivity. Once this foundational work is complete, the team plans to conduct two rigorous randomized controlled trials. The first trial will be conducted online with a nationally representative sample of U.S. parents, comparing the chatbot’s impact against traditional CDC pamphlets and measuring differences in vaccine acceptance. The second trial will take place in clinical environments in Baltimore, including pediatric offices, to observe how the chatbot influences decision-making in real-world healthcare settings.
Min Qi Wang, a behavioral health professor participating in the project, emphasizes that “tailored, timely, and actionable communication” facilitated by AI signals a paradigm shift in public health strategies. This shift extends beyond HPV vaccination, as such advanced communication systems possess the adaptability to address other complex public health challenges. By delivering personalized guidance directly aligned with users’ expressed concerns, AI can foster a more inclusive health dialogue that values empathy and relevance, which traditional mass communication methods often lack.
Beyond increasing HPV vaccination rates, the research team envisions broader implications for public health infrastructure. In an era where misinformation can spread rapidly and fear often undermines scientific recommendations, AI-powered tools offer a scalable, responsive mechanism to disseminate trustworthy information quickly. During future pandemics or emergent health crises, such chatbots could serve as critical channels for delivering customized, real-time guidance to diverse populations, helping to flatten the curve of misinformation while respecting individual differences.
The integration of AI chatbots into health communication represents a fusion of technological innovation with behavioral science, opening new horizons for personalized medicine and health education. By engaging users empathetically and responsively, these systems can build trust and facilitate informed decision-making, critical components of successful public health interventions. Professor Nan highlights the profound potential of this marriage between AI and public health communication by posing the fundamental question: “Can we do a better job with public health communication—with speed, scale, and empathy?” Project outcomes thus far suggest an affirmative answer.
As the chatbot advances through its pilot phases and into clinical trials, the research team remains committed to maintaining a rigorous scientific approach, ensuring that the tool’s recommendations align with the highest standards of evidence-based medicine. This careful balance between innovation and reliability is essential to maximize public trust and the chatbot’s ultimate impact on vaccine uptake. Should these trials demonstrate efficacy, the model could serve as a blueprint for deploying AI-driven communication tools across various domains of health behavior change.
Moreover, the collaborative nature of this project—bringing together communication experts, behavioral scientists, linguists, and medical professionals—illustrates the importance of interdisciplinary efforts in addressing complex health challenges. Each field contributes unique insights: linguistic analysis enables nuanced conversation design, behavioral science guides motivation and persuasion strategies, and medical expertise ensures factual accuracy and clinical relevance. This holistic framework strengthens the chatbot’s ability to resonate with diverse parent populations and to overcome entrenched hesitancy.
In conclusion, while HPV vaccines represent a major advancement in cancer prevention, their potential remains underutilized due to deeply embedded hesitancy fueled by stigma and misinformation. Leveraging AI-driven, personalized communication stands as a promising strategy to bridge this gap. The University of Maryland’s innovative chatbot project underscores the use of responsible artificial intelligence to meet parents where they are, addressing their unique concerns with empathy and scientific rigor. This initiative not only aspires to improve HPV vaccine uptake but also to pave the way for AI’s transformative role in future public health communication efforts.
Subject of Research: Artificial intelligence-enhanced communication to improve HPV vaccine uptake among parents.
Article Title: Transforming Vaccine Communication: AI Chatbots Target HPV Vaccine Hesitancy in Parents
News Publication Date: Information not provided in the source content.
Web References:
https://sph.umd.edu/people/cheryl-knott
https://sph.umd.edu/people/min-qi-wang
Image Credits: University of Maryland (UMD)
Keywords: Vaccine research, Science communication
Tags: adolescent vaccination rates, AI-driven health communication, cancer prevention strategies, chatbot technology in healthcare, evidence-based vaccine education, HPV vaccination awareness, innovative communication strategies for parents, National Cancer Institute funding, overcoming vaccine hesitancy, parental engagement in vaccination, personalized health information, University of Maryland research
AI Research
EY-Parthenon practice unveils neurosymbolic AI capabilities to empower businesses to identify, predict and unlock revenue at scale | EY

Jeff Schumacher, architect behind the groundbreaking AI solution, to steer EY Growth Platforms.
Ernst & Young LLP (EY) announced the launch of EY Growth Platforms (EYGP), a disruptive artificial intelligence (AI) solution powered by neurosymbolic AI. By combining machine learning with logical reasoning, EYGP empowers organizations to uncover transformative growth opportunities and revolutionize their commercial models for profitability. The neurosymbolic AI workflows that power EY Growth Platforms consistently uncover hundred-million-dollar+ growth opportunities for global enterprises, with the potential to enhance revenue.
This represents a rapid development in enterprise technology—where generative AI and neurosymbolic AI combine to redefine how businesses create value. This convergence empowers enterprises to reimagine growth at impactful scale, producing outcomes that are traceable, trustworthy and statistically sound.
EYGP serves as a powerful accelerator for the innovative work at EY-Parthenon, helping clients realize greater value from their most complex strategic opportunities and transform their businesses from the ground up—including building and scaling new corporate ventures or executing high-stakes transactions.
“In today’s uncertain economic climate, leading companies aren’t just adapting—they’re taking control,” says Mitch Berlin, EY Americas Vice Chair, EY-Parthenon. “EY Growth Platforms gives our clients the predictive power and actionable foresight they need to confidently steer their revenue trajectory. EY Growth Platforms is a game changer, poised to become the backbone of enterprise growth.”
How EY Growth Platforms work
Neurosymbolic AI merges the statistical power of neural networks with the structured logic of symbolic reasoning, driving powerful pattern recognition to deliver predictions and decisions that are practical, actionable and grounded in real-world outcomes. EYGP harnesses this powerful technology to simulate real-time market scenarios and their potential impact, uncovering the most effective business strategies tailored to each client. It expands beyond the limits of generative AI, becoming a growth operating system for companies to tackle complex go-to-market challenges and unlock scalable revenue.
At the core of EYGP is a unified data and reasoning engine that ingests structured and unstructured data from internal systems, external signals, and deep EY experience and data sets. Developed over three years, this robust solution is already powering proprietary AI applications and intelligent workflows for EY-Parthenon clients across the consumer products, industrials and financial services sectors without the need for extensive data cleaning or digital transformation.
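To make the general pattern concrete, the toy sketch below pairs a learned scorer (standing in for the neural side) with explicit, auditable business rules (the symbolic side): candidates are ranked statistically but recommended only if they satisfy every rule. It illustrates the neurosymbolic idea in the abstract; the data model, rules, and scoring are invented for illustration and are not EY’s proprietary system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Opportunity:
    name: str
    projected_revenue_musd: float   # projected annual revenue, in millions of USD
    region: str
    requires_regulatory_approval: bool

def neural_score(opp: Opportunity) -> float:
    """Stand-in for the neural side: in a real system, a model trained on internal
    and external signals would score each opportunity; here a toy heuristic suffices."""
    return min(opp.projected_revenue_musd / 500.0, 1.0)

# Symbolic side: explicit, human-readable constraints that every recommendation
# must satisfy, which is what keeps the outcome traceable and auditable.
Rule = Callable[[Opportunity], bool]
RULES: List[Rule] = [
    lambda o: o.region in {"NA", "EU", "AU"},      # only regions where the platform operates
    lambda o: o.projected_revenue_musd >= 100.0,   # focus on large opportunities
    lambda o: not o.requires_regulatory_approval,  # route gated plays to a separate review
]

def recommend(candidates: List[Opportunity], top_k: int = 3) -> List[Opportunity]:
    """Score candidates with the learned model, then keep only those passing every rule."""
    admissible = [o for o in candidates if all(rule(o) for rule in RULES)]
    return sorted(admissible, key=neural_score, reverse=True)[:top_k]

if __name__ == "__main__":
    pipeline = [
        Opportunity("Direct-to-consumer channel", 240.0, "NA", False),
        Opportunity("New regional market entry", 510.0, "APAC", False),
        Opportunity("Adjacent product line", 130.0, "EU", True),
        Opportunity("Service subscription bundle", 180.0, "EU", False),
    ]
    for opp in recommend(pipeline):
        print(f"{opp.name}: score {neural_score(opp):.2f}")
```

Keeping the rules symbolic means every recommendation can be traced back to the specific constraints it satisfied, which is the kind of traceability the platform description emphasizes.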
Use cases for EY Growth Platforms
With the ability to operate in complex high-stakes scenarios, EYGP is driving a measurable impact across industries such as:
- Financial services: In a tightly regulated industry, transparency and accountability are nonnegotiable. Neurosymbolic AI enhances underwriting, claims processing and compliance with transparency and rigor, validating that decisions are aligned with regulatory standards and optimized for customer outcomes.
- Consumer products: Whether powering real-time recommendations, adaptive interfaces or location-aware services, neurosymbolic AI drives hyperpersonalized experiences at a one-to-one level. By combining learned patterns with structured knowledge, it delivers precise, context-rich insights tailored to individual behavior, preferences and environments.
- Industrial products: Neurosymbolic AI helps industrial conglomerates optimize the entire value chain — from sourcing and production to distribution and service. By integrating structured domain knowledge with real-time operational data, it empowers leaders to make smarter decisions — from facility placement and supply routing to workforce allocation tailored to specific geographies and market-specific conditions.
The platform launch follows the appointment of Jeff Schumacher as the EYGP Leader for EY-Parthenon. Schumacher brings over 25 years of experience in business strategy, innovation and digital disruption, having helped establish over 100 early growth companies. He is the founder of the neurosymbolic AI company Growth Protocol, whose technology EY licenses under an exclusive agreement.
“Neurosymbolic AI is not another analytics tool, it’s a growth engine,” says Jeff Schumacher, EY Growth Platforms Leader, EY-Parthenon. “With EY Growth Platforms, we’re putting a dynamic, AI-powered operating system in the hands of leaders, giving them the ability to rewire how their companies make money. This isn’t incremental improvement; it’s a complete reset of the commercial model.”
EYGP is currently offered in North America, Europe, and Australia. For more information, visit ey.com/NeurosymbolicAI/
– ends –
About EY
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.
EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
All in to shape the future with confidence.
EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data and a description of the rights individuals have under data protection legislation are available via ey.com/privacy. EY member firms do not practice law where prohibited by local laws. For more information about our organization, please visit ey.com.
This news release has been issued by EYGM Limited, a member of the global EY organization that also does not provide any services to clients.
AI Research
MFour Mobile Research Now MFour Data Research, Reflecting Traction in Validated Surveys & AI Training Data
Name highlights evolution from mobile survey pioneer to leading provider of validated, ethically sourced data powering insights in the AI era.
IRVINE, Calif., Sept. 10, 2025 /PRNewswire/ — MFour Mobile Research, Inc., a pioneer in mobile-based consumer survey research, today announced its new name: MFour Data Research, Inc.
The change reflects the company’s expanded focus on delivering both high-quality validated survey insights and first-party consumer behavior data – including app, web, location, and purchase journeys – all connected (and anonymized) through a single consumer ID.
Founded in 2011, MFour originally led the industry by using smartphones to improve the quality and accuracy of survey data. Over the past decade, customer demand has grown for ethically sourced, first-party behavior data that can power deeper consumer journey insights and support the next generation of AI-driven decision-making.
“Our new name reflects the products and data we sell today and where our customers are headed,” said Chris St. Hilaire, CEO and Founder of MFour. “Mobile surveys were just the beginning. Today, we combine validated survey data with app, web, location, and purchase behaviors — sourced directly from opted-in consumers through our Fair Trade Data® model. That makes us uniquely positioned to deliver the trusted, transparent datasets companies need in an AI-driven world.”
MFour Data Research’s solutions are anchored by its 4.5-star Surveys On The Go® app, which generates billions of verified data points annually, and the MFour Studio™ platform, where brands and institutions access connected survey and behavior datasets. From Fortune 100 companies to disruptive startups, organizations rely on MFour to provide clarity, accuracy, and confidence in understanding consumer journeys.
SOURCE MFour Data Research