Mission under way to save ‘world’s most beautiful’ snails

Victoria Gill

Science correspondent, BBC News

A Polymita snail in its native forest habitat in Eastern Cuba (Bernardo Reyes-Tur)

Researchers have embarked on a mission to save what some consider to be the world’s most beautiful snails, and to unlock their biological secrets.

Endangered Polymita tree snails, which are disappearing from their native forest habitats in Eastern Cuba, have vibrant, colourful and extravagantly patterned shells.

Unfortunately, those shells are desirable for collectors, and conservation experts say the shell trade is pushing the snails towards extinction.

Biologists in Cuba, and specialists at the University of Nottingham in the UK, have now teamed up with the goal of saving the six known species of Polymita.

The shells are used to make colourful jewellery, including necklaces (Angus Davison)

The most endangered of those is Polymita sulphurosa, which is lime green with blue flame patterns around its coils and bright orange and yellow bands across its shell.

But all the Polymita species are strikingly bright and colourful, which is an evolutionary mystery in itself.

“One of the reasons I’m interested in these snails is because they’re so beautiful,” explained evolutionary geneticist and mollusc expert Prof Angus Davison from the University of Nottingham.

The irony, he said, is that this is the reason the snails are so threatened.

“Their beauty attracts people who collect and trade shells. So the very thing that makes them different and interesting to me as a scientist is, unfortunately, what’s endangering them as well.”

Two snails, one vibrant red and yellow and the other white and blue, face each other on a branch (Bernardo Reyes-Tur)

Searching online with Prof Davison, we found several platforms where sellers, based in the UK, were offering Polymita shells for sale. On one site a collection of seven shells was being advertised for £160.

“For some of these species, we know they’re really quite endangered. So it wouldn’t take much [if] someone collects them in Cuba and trades them, to cause some species to go extinct.”

Shells are bought and sold as decorative objects, but every empty shell was once a living animal.

The team gathered some of the snails to bring into captivity for breeding and research (Bernardo Reyes-Tur)

While there are international rules to protect Polymita snails, they are difficult to enforce. It is illegal – under the Convention on International Trade in Endangered Species – to take the snails or their shells out of Cuba without a permit. But it is legal to sell the shells elsewhere.

Prof Davison says that, with pressures like climate change and forest loss affecting their natural habitat in Cuba, “you can easily imagine where people collecting shells would tip a population over into local extinction”.

Prof Angus Davison with a Polymita snail on his finger (Angus Davison)

To try to prevent this, Prof Davison is working closely with Prof Bernardo Reyes-Tur at the Universidad de Oriente, Santiago de Cuba, who is a conservation biologist.

The aim of this international project is to better understand how the snails evolved and to provide information that will help conservation.

Prof Reyes-Tur’s part of the endeavour is perhaps the most challenging: working with unreliable power supplies and in a hot climate, he has brought Polymita snails into his own home for captive breeding.

“They have not bred yet, but they’re doing well,” he told us on a video call.

“It’s challenging though – we have blackouts all the time.”

Conservation scientist Prof Bernardo Reyes-Tur at his home in Eastern Cuba with some of the snails he is rearing in captivity (Bernardo Reyes-Tur)

Meanwhile, at the well-equipped labs at the University of Nottingham, genetic research is being carried out.

Here, Prof Davison and his team can keep tiny samples of snail tissue in cryogenic freezers to preserve them. They are able to use that material to read the animals’ genome – the biological set of coded instructions that makes each snail what it is.

The team aims to use this information to confirm how many species there are, how they are related to each other and what part of their genetic code gives them their extraordinary, unique colour patterns.

Polymita sulphurosa is critically endangered (Angus Davison)

The hope is that they can reveal those biological secrets before these colourful creatures are bought and sold into extinction.

“Eastern Cuba is the only place in the world where these snails are found,” Prof Davison told BBC News.

“That’s where the expertise is – where the people who know these snails, love them and understand them, live and work.

“We hope we can use the genetic information that we can bring to contribute to their conservation.”




A False Confidence in the EU AI Act: Epistemic Gaps and Bureaucratic Traps

On July 10, 2025, the European Commission released the final draft of the General-Purpose Artificial Intelligence (GPAI) Code of Practice, a “code designed to help industry comply with the AI Act’s rules.” The Code has been under development since October 2024, when the iterative drafting process began after a kick-off plenary in September 2024. The Commission had planned to release the final draft by May 2, 2025, and the subsequent delay has sparked widespread speculation – ranging from concerns about industry lobbying to deeper ideological tensions between proponents of innovation and advocates of regulation.

However, beyond these narratives, a more fundamental issue emerges: an epistemic and conceptual disconnect at the core of the EU Artificial Intelligence Act (EU AI Act), particularly in its approach to “general-purpose AI” (GPAI). The current version, which includes three chapters covering “Transparency,” “Copyright,” and “Safety and Security,” does not address these core problems.

According to Art. 3(63) of the EU AI Act, a “general-purpose AI model” is:

“an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”

What the Act refers to here aligns with what the AI research community calls a foundation model – a term that, while not perfect, is widely used to describe large-scale models trained on broad datasets to support multiple tasks. Examples of such foundation models include OpenAI’s GPT series, Microsoft’s Magma, Google’s Gemini, and BERT. These models act as bases that can be fine-tuned or adapted for particular use cases.

Crucially, the term “general-purpose AI” (GPAI) did not emerge from within the AI research community. It is a legal construct introduced by the EU AI Act to retroactively define certain types of AI systems. Prior to this, ‘GPAI’ had little to no presence in scholarly discourse. In this sense, the Act not only assigned a regulatory meaning to the term, but also effectively created a new category – one that risks distorting how such systems are actually understood and developed in practice.

The key concepts embedded in the Act reflect a certain epistemic confidence. However, as is the case with GPAI, the terminology does not emerge organically from within the AI field. Instead, GPAI, as defined in the Act, represents an external attempt to impose legal clarity onto a domain that is continuing to evolve. This definitional approach offers a false sense of epistemic certainty and stability, implying that AI systems can be easily classified, analyzed, and understood.

By creating a regulatory label with a largely fixed meaning, the Act constructs a category that may never have been epistemologically coherent to begin with. In doing so, it imposes a rigid legal structure onto a technological landscape characterized by ongoing transformation and ontological and epistemological uncertainty.

The limits of a risk-based framework

The EU AI Act takes a risk-based regulatory approach. Article 3(2) defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm.” This definition draws from classical legal and actuarial traditions, where it is assumed that harms are foreseeable, probabilities can be reasonably assigned, and risks can be assessed accordingly. Yet AI – particularly foundation models – complicates this framework.
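
To make the actuarial framing concrete before turning to why it breaks down, here is a minimal sketch of how a classical risk score is computed when harms can be enumerated, probabilities assigned, and severities scored. The harm categories, probabilities, and severity weights are hypothetical placeholders, not figures from the Act or any guidance; the point is only to show the shape of the reasoning the definition inherits.

```python
# Minimal illustration of the classical, actuarial notion of risk echoed in
# Art. 3(2): a score combining the probability of a harm with its severity.
# All harms, probabilities, and severity weights below are hypothetical
# placeholders used for illustration only.

def classical_risk(probability: float, severity: float) -> float:
    """Expected-harm style score: probability of occurrence times severity."""
    return probability * severity

# harm: (estimated probability per year, severity on a 1-10 scale)
hypothetical_harms = {
    "data breach via an integrated tool": (0.05, 8),
    "defamatory generated content": (0.20, 4),
    "critical service outage": (0.01, 9),
}

for harm, (p, s) in hypothetical_harms.items():
    print(f"{harm}: risk score = {classical_risk(p, s):.2f}")
```

The sketch only works because every harm is listed in advance and given a probability and a severity; as argued below, foundation models deployed in open-ended socio-technical contexts resist exactly that enumeration.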

Foundation models are characterized by features that are difficult to quantify, including their probabilistic and augmentative nature and their interaction with complex socio-technical environments, where harms cannot be clearly predicted. As a result, traditional risk assessment approaches cannot adequately account for their behavior or impact and can produce a false sense of confidence for regulators.

The legal and epistemic tension is immediately apparent. Law requires a level of certainty – but the nature of AI strongly challenges that very prerequisite. Yet the logic of law remains orthodox, producing an epistemic mismatch between the assumptions embedded in legal instruments and the realities of the technologies they seek to govern.

The EU AI Act’s treatment of “systemic risk” also reflects the influence of the contemporary AI Safety discourse. The very existence of a dedicated “Safety and Security” chapter in the GPAI Code of Practice signals an awareness of debates around the so-called “long-term risks” of advanced models. Terms like systemic risk echo concerns raised by AI Safety researchers: worries about uncontrollable systems, cascading failures, and potential large-scale harms. Yet, crucially, the Act stops short of engaging with the more fundamental concepts of this discourse – such as alignment, control, or corrigibility. Instead, systemic risk is invoked as if it were already a stable regulatory concept, when in reality it is still contested in both technical and governance circles.

The Act’s description of systemic risk, as provided in Art. 3(65) and Recital 110, highlights this conceptual ambiguity. Its definition refers to risks stemming from the “high-impact capabilities of general-purpose AI models,” but both the origins and the implications of those risks remain unclear. As presented in the Act, systemic risk seems to be rooted in the AI model’s technical attributes: “Systemic risks should be understood to increase with model capabilities and model reach, can arise along the entire lifecycle of the model, and are influenced by conditions of misuse, model reliability, model fairness and model security, the level of autonomy of the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to remove guardrails and other factors.”

The passage borrows the vocabulary of AI Safety, but without clarifying how these terms connect to actual deployment and socio-technical contexts. The causal relationship between a model’s “capabilities” and systemic risk is assumed rather than demonstrated. By framing systemic risk primarily as a technical property of models, the Act overlooks the crucial role of deployment environments, institutional oversight, and collective governance in shaping real-world harms. This matters because a regulation that treats systemic risk as an intrinsic property of models will default to technical compliance measures, while overlooking the institutional and societal conditions that actually determine whether risks materialize.

Max Weber’s analysis of bureaucracy helps to explain why the aforementioned mismatch between the assumptions embedded in legal instruments and the realities of the technologies was to be expected. Weber described bureaucracy as an “iron cage” of rationalization, reliant on formal rules, hierarchies, and categorical clarity. Bureaucracies require clear categorization, otherwise they cannot function effectively.

The EU AI Act’s precise definitions – such as those for “provider” (Art. 3(3)), “deployer” (Art. 3(4)), and especially “general-purpose AI model” (Art. 3(63)) – reflect this bureaucratic logic. Yet, as Weber warned, this form of rationality can lead to overly rigid and formalized patterns of thought. In treating AI categories as scientifically settled, the Act exemplifies legal formalism that may hinder adaptive governance. The bureaucratic need for clear rules at this stage works against the regulatory clarity it is meant to deliver, and instead creates an epistemic gap between the law itself and the state of the art in the area it aims to regulate. For policymakers, the problem is not an academic one: rules that freeze categories too early risk locking Europe into an outdated conceptual framework that will be difficult to revise as AI research advances.

Thomas Kuhn’s theory of scientific revolutions offers further insight. Kuhn described “normal science” as puzzle-solving within “paradigms” – established frameworks that define what counts as a valid question or method. Paradigm shifts occur only when anomalies accumulate and existing frameworks collapse. Today, AI research is undergoing similar developments, with innovations like large language models disrupting prior paradigms. Legal systems, however, operate within their own paradigms, which prioritize stability and continuity. As such, they necessarily lag behind the rapidly evolving world of AI.

Kuhn observed that paradigm shifts are disruptive, unsettling established categories and methods. Law, by contrast, is conservative and resistant to epistemic upheaval. Thus, the scientific paradigm in flux collides with legal orthodoxy’s demand for stable definitions. Although terms like general-purpose AI and systemic risk, and many others, appear fixed within the EU AI Act, they remain unsettled, contested, and context-dependent in practice.

A revealing example comes from a recent talk at the University of Cambridge, where Professor Stuart Russell defined GPAI not as a present reality but as an aspirational concept – a model capable of quickly learning high-quality behavior in any task environment. His description aligns more closely with the notion of “Artificial General Intelligence” than with foundation models such as the GPT series. This diverges sharply from the EU AI Act’s framing, highlighting the epistemic gap between regulatory and scientific domains.

The lesson here is that the Act risks legislating yesterday’s paradigm into tomorrow’s world. Instead of anchoring regulation in fixed categories, policymakers need governance mechanisms that anticipate conceptual change and allow for iterative revision, relying on multidisciplinary monitoring bodies rather than static – and in this case problematic – definitions. Ambiguity in core concepts and definitions, the fragmented character of the regulatory framework, and an often unconvincing discourse reveal the limits of conventional regulatory logic when applied to emerging technologies. Neither the EU AI Act nor the GPAI Code of Practice was developed within an anticipatory governance framework, which would better accommodate AI’s continuously evolving, transformative nature.

The OECD’s work on anticipatory innovation governance illustrates how such frameworks can function: by combining foresight, experimentation, and adaptive regulation to prepare for multiple possible futures. Experiments in Finland, conducted in collaboration with the OECD and the European Commission, show that anticipatory innovation governance can be embedded directly into core policymaking processes such as budgeting, strategy, and regulatory design, rather than treated as a peripheral exercise. This approach stands in sharp contrast to the EU AI Act’s reliance on fixed categories and definitions: instead of legislating conceptual closure too early, it builds flexibility and iterative review into the very processes of governance. In the AI domain, the OECD’s paper Steering AI’s Future applies these anticipatory principles directly to questions of AI governance.

From this perspective, the delay in releasing the GPAI Code of Practice should not have been seen as a moment of conflict, but rather as an opportunity to consider a more appropriate framework for governing emerging technologies – one that accepts uncertainty as the norm, relies on adaptive oversight, and treats categories as provisional rather than definitive.




ServiceNow Zurich release introduces agentic AI to the platform

The user can even ask an agent, in natural language, to make changes or additions to a playbook if, for example, a step in a process is missing.

Despite these features, Moor Insights’ Kramer said that Zurich’s success will all come down to execution. “Zurich shows ServiceNow moving in the right direction: less dashboard fatigue, more actionable insights, and better integration with external data,” he said. “But the real test will be execution. If the AI summaries are accurate, if the integrations stay reliable, and if the UX actually reduces complexity, customers will see value. If not, it risks being another release that looks good in demos but frustrates in practice.”

He added, “Competitors like Microsoft, SAP and Salesforce are pushing similar ideas, so customers will have options. ServiceNow needs to prove that Zurich isn’t just riding the AI wave but actually making daily work smoother.”




2025 State of AI Cost Management Research Finds 85% of Companies Miss AI Forecasts by >10%

Despite rapid adoption, most enterprises lack visibility, forecasting accuracy, and margin control around AI investments. Hidden infrastructure costs are eroding enterprise profitability, according to newly published survey data.

AUSTIN, Texas, Sept. 10, 2025 /PRNewswire/ — As enterprises accelerate investments in AI infrastructure, a new report reveals a troubling financial reality: most organizations can’t forecast what they’re spending, or control how AI costs impact margins. According to the 2025 State of AI Cost Management, 80% of enterprises miss their AI infrastructure forecasts by more than 25%, and 84% report significant gross margin erosion tied to AI workloads.

The report, published by Benchmarkit in partnership with cost governance platform Mavvrik, reveals how AI adoption, across large language models (LLMs), GPU-based compute, and AI-native services, is outpacing cost governance. Most companies lack the visibility, attribution, and forecasting precision to understand where costs come from or how they affect margins.

“These numbers should rattle every finance leader. AI is no longer just experimental – it’s hitting gross margins, and most companies can’t even predict the impact,” said Ray Rike, CEO of Benchmarkit. “Without financial governance, you’re not scaling AI. You’re gambling with profitability.”

Top Findings from the 2025 State of AI Cost Management Report include:

AI costs are crushing enterprise margins

  • 84% of companies see 6%+ gross margin erosion due to AI infrastructure costs
  • 26% report margin impact of 16% or higher

The great AI repatriation has begun

  • 67% are actively planning to repatriate AI workloads; another 19% are evaluating
  • 61% already run hybrid AI infrastructure (public + private)
  • Only 35% include on-prem AI costs in reporting, leaving major blind spots

Hidden cost surprises come from unexpected places

  • Data platforms are the top source of unexpected AI spend (56%); LLMs rank 5th
  • Network access costs are the second-largest cost surprise (52%)

AI forecasting is fundamentally broken

  • 80% miss AI forecasts by 25%+
  • 24% are off by 50% or more
  • Only 15% forecast AI costs within a 10% margin of error

Visibility gaps are stalling governance

  • Lack of visibility is the #1 challenge in managing AI infrastructure costs
  • 94% say they track costs, but only 34% have mature cost management
  • Companies charging for AI show 2x greater cost maturity in attribution and cost discipline

Access the full report: The report details how automation, cost attribution methods, and cloud repatriation strategies factor into AI cost discipline. To view the analysis, please visit: https://www.mavvrik.ai/state-of-ai-cost-governance-report/

“AI is blowing up the assumptions baked into budgets. What used to be predictable, is now elastic and expensive,” said Sundeep Goel, CEO of Mavvrik. “This shift doesn’t just affect IT, it’s reshaping cost models, margin structures, and how companies scale. Enterprises are racing to build with AI, but when most can’t explain the bill, it’s no longer innovation, it’s risk.”

Why It Matters

AI isn’t just a technology challenge; it’s a financial one. From LLM APIs to GPU usage and data movement, infrastructure costs are scaling faster than most companies can track them. Without clear attribution across cloud and on-prem environments, leaders are making pricing, packaging, and investment decisions in the dark.

With AI spend becoming a significant line in COGS and gross margin targets under pressure, CFOs should be sounding the alarm. Yet most finance teams haven’t prioritized governance.

About the State of AI Cost Management
The 2025 State of AI Cost Management report is based on survey results from 372 enterprise organizations across diverse industries and revenue tiers. It measures cost governance maturity, spanning forecast accuracy, infrastructure mix (cloud vs. on-prem), attribution capability, and gross margin impact. https://www.mavvrik.ai/.

About Mavvrik
Mavvrik is the financial control center for modern IT. By embedding financial governance at the source of every cost signal, Mavvrik provides enterprises with complete visibility and control across cloud, AI, SaaS, and on-prem infrastructure. Built for CFOs, FinOps, and IT leaders, Mavvrik eliminates financial blind spots and transforms IT costs into strategic investments. With real-time cost tracking, automated chargebacks, and predictive budget controls, Mavvrik helps enterprises reduce waste, govern AI and hybrid cloud spend, and maintain financial precision at scale. Visit www.mavvrik.ai to learn more.

Media Contact:
Rick Medeiros
510-556-8517
[email protected] 

SOURCE Mavvrik


