AI Research
Introducing the Frontier Safety Framework

Our approach to analyzing and mitigating future risks posed by advanced AI models
Google DeepMind has consistently pushed the boundaries of AI, developing models that have transformed our understanding of what’s possible. We believe that AI technology on the horizon will provide society with invaluable tools to help tackle critical global challenges, such as climate change, drug discovery, and economic productivity. At the same time, we recognize that as we continue to advance the frontier of AI capabilities, these breakthroughs may eventually come with new risks beyond those posed by present-day models.
Today, we are introducing our Frontier Safety Framework — a set of protocols for proactively identifying future AI capabilities that could cause severe harm and putting in place mechanisms to detect and mitigate them. Our Framework focuses on severe risks resulting from powerful capabilities at the model level, such as exceptional agency or sophisticated cyber capabilities. It is designed to complement our alignment research, which trains models to act in accordance with human values and societal goals, and Google’s existing suite of AI responsibility and safety practices.
The Framework is exploratory and we expect it to evolve significantly as we learn from its implementation, deepen our understanding of AI risks and evaluations, and collaborate with industry, academia, and government. Even though these risks are beyond the reach of present-day models, we hope that implementing and improving the Framework will help us prepare to address them. We aim to have this initial framework fully implemented by early 2025.
The framework
The first version of the Framework announced today builds on our research on evaluating critical capabilities in frontier models, and follows the emerging approach of Responsible Capability Scaling. The Framework has three key components:
- Identifying capabilities a model may have with potential for severe harm. To do this, we research the paths through which a model could cause severe harm in high-risk domains, and then determine the minimal level of capabilities a model must have to play a role in causing such harm. We call these “Critical Capability Levels” (CCLs), and they guide our evaluation and mitigation approach.
- Evaluating our frontier models periodically to detect when they reach these Critical Capability Levels. To do this, we will develop suites of model evaluations, called “early warning evaluations,” that will alert us when a model is approaching a CCL, and run them frequently enough that we have notice before that threshold is reached.
- Applying a mitigation plan when a model passes our early warning evaluations. This should take into account the overall balance of benefits and risks, and the intended deployment contexts. These mitigations will focus primarily on security (preventing the exfiltration of models) and deployment (preventing misuse of critical capabilities). A brief illustrative sketch of how these three components could fit together follows this list.
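To make the shape of that loop concrete, here is a minimal sketch in Python. It is purely illustrative and not part of the Framework itself: the risk domains echo the section below, but the threshold and buffer values, the scoring scale, and the function name early_warning_check are all hypothetical placeholders. The point is only the logic of comparing evaluation scores against CCL thresholds with a safety margin, so that an alert fires before a threshold is actually reached.

```python
# Illustrative sketch (not DeepMind's implementation): an "early warning" check
# that compares a model's evaluation scores against hypothetical Critical
# Capability Level (CCL) thresholds, with a buffer so the alert fires *before*
# the threshold itself is crossed.

# Hypothetical CCL thresholds per risk domain (fraction of evaluation tasks solved).
CCL_THRESHOLDS = {
    "autonomy": 0.80,
    "biosecurity": 0.60,
    "cybersecurity": 0.70,
    "ml_rnd": 0.75,
}

# Raise a warning once a score comes within this margin of the CCL threshold.
WARNING_BUFFER = 0.15


def early_warning_check(eval_scores: dict[str, float]) -> dict[str, str]:
    """Classify each risk domain as 'ok', 'warning', or 'ccl_reached'."""
    status = {}
    for domain, threshold in CCL_THRESHOLDS.items():
        score = eval_scores.get(domain, 0.0)
        if score >= threshold:
            status[domain] = "ccl_reached"   # apply the mitigation plan
        elif score >= threshold - WARNING_BUFFER:
            status[domain] = "warning"       # prepare mitigations, evaluate more often
        else:
            status[domain] = "ok"
    return status


if __name__ == "__main__":
    # Example: scores from a hypothetical evaluation run.
    print(early_warning_check({"autonomy": 0.68, "cybersecurity": 0.40}))
    # -> {'autonomy': 'warning', 'biosecurity': 'ok', 'cybersecurity': 'ok', 'ml_rnd': 'ok'}
```

In practice such checks would presumably be run on a regular cadence during training and before deployment, which is what "frequently enough that we have notice before that threshold is reached" implies.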
Risk domains and mitigation levels
Our initial set of Critical Capability Levels is based on investigation of four domains: autonomy, biosecurity, cybersecurity, and machine learning research and development (R&D). Our initial research suggests the capabilities of future foundation models are most likely to pose severe risks in these domains.
On autonomy, cybersecurity, and biosecurity, our primary goal is to assess the degree to which threat actors could use a model with advanced capabilities to carry out harmful activities with severe consequences. For machine learning R&D, the focus is on whether models with such capabilities would enable the spread of models with other critical capabilities, or enable rapid and unmanageable escalation of AI capabilities. As we conduct further research into these and other risk domains, we expect these CCLs to evolve, and we expect to add further CCLs at higher levels or in other risk domains.
To allow us to tailor the strength of the mitigations to each CCL, we have also outlined a set of security and deployment mitigations. Higher level security mitigations result in greater protection against the exfiltration of model weights, and higher level deployment mitigations enable tighter management of critical capabilities. These measures, however, may also slow down the rate of innovation and reduce the broad accessibility of capabilities. Striking the optimal balance between mitigating risks and fostering access and innovation is paramount to the responsible development of AI. By weighing the overall benefits against the risks and taking into account the context of model development and deployment, we aim to ensure responsible AI progress that unlocks transformative potential while safeguarding against unintended consequences.
Investing in the science
The research underlying the Framework is nascent and progressing quickly. We have invested significantly in our Frontier Safety Team, which coordinated the cross-functional effort behind our Framework. Their remit is to progress the science of frontier risk assessment, and refine our Framework based on our improved knowledge.
The team developed an evaluation suite to assess risks from critical capabilities, particularly emphasizing autonomous LLM agents, and road-tested it on our state-of-the-art models. Their recent paper describing these evaluations also explores mechanisms that could form a future “early warning system”. It describes technical approaches for assessing how close a model is to success at a task it currently fails to do, and also includes predictions about future capabilities from a team of expert forecasters.
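One simple way to picture that kind of "distance to success" measurement, offered purely as an illustration rather than a description of the paper's actual methodology, is milestone-based partial credit: split a long agentic task into ordered steps and track what fraction the model clears across repeated attempts. The function partial_progress and the numbers in the example below are hypothetical.

```python
# Illustrative sketch, not the cited paper's method: score how close a model
# gets to completing a task it currently fails, by splitting the task into
# milestones and averaging the fraction completed across independent trials.
from statistics import mean


def partial_progress(milestones_cleared: list[int], total_milestones: int) -> float:
    """Average fraction of milestones completed across trials (0.0 to 1.0)."""
    return mean(m / total_milestones for m in milestones_cleared)


# Example: a hypothetical 5-milestone agent task attempted four times.
# The model never finishes, but on average clears 55% of the milestones,
# which could serve as an early signal that full success is approaching.
print(partial_progress([2, 3, 3, 3], total_milestones=5))  # 0.55
```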
Staying true to our AI Principles
We will review and evolve the Framework periodically. In particular, as we pilot the Framework and deepen our understanding of risk domains, CCLs, and deployment contexts, we will continue our work in calibrating specific mitigations to CCLs.
At the heart of our work are Google’s AI Principles, which commit us to pursuing widespread benefit while mitigating risks. As our systems improve and their capabilities increase, measures like the Frontier Safety Framework will ensure our practices continue to meet these commitments.
We look forward to working with others across industry, academia, and government to develop and refine the Framework. We hope that sharing our approaches will facilitate work with others to agree on standards and best practices for evaluating the safety of future generations of AI models.
AI Research
AI to reshape India’s roads? Artificial intelligence can take the wheel to fix highways before they break, ETInfra

In India, a pothole is rarely just a pothole. It is a metaphor, a mood and sometimes, a meme. It is the reason your cab driver mutters about karma and your startup founder misses a pitch meeting because the expressway has turned into a swimming pool. But what if roads could detect their own distress, predict failures before they happen, and even suggest how to fix them?
That is not science fiction but the emerging reality of AI-powered infrastructure.
According to KPMG’s 2025 report, AI-powered road infrastructure transformation – Roads 2047, artificial intelligence is slowly reshaping how India builds, maintains, and governs its roads. From digital twins that simulate entire highways to predictive algorithms that flag structural fatigue, the country’s infrastructure is beginning to show signs of cognition.
From concrete to cognition
India’s road network spans over 6.3 million kilometers – second only to the United States. As per KPMG, AI is now being positioned not just as a tool but as a transformational layer. Technologies like Geographic Information System (GIS), Building Information Modelling (BIM) and sensor fusion are enabling digital twins – virtual replicas of physical assets that allow engineers to simulate stress, traffic and weather impact in real time. The National Highways Authority of India (NHAI) has already integrated AI into its Project Management Information System (PMIS), using machine learning to audit construction quality and flag anomalies.
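As a rough illustration of the kind of anomaly flagging described above (and not a description of NHAI's actual PMIS pipeline), the sketch below compares hypothetical pavement roughness readings against a historical baseline and flags road segments that deviate sharply. The segment IDs, baseline values, and cutoff are all invented for the example.

```python
# Illustrative sketch only; not NHAI's PMIS. Flags road segments whose measured
# roughness (hypothetical IRI values, in m/km) deviates strongly from a
# historical baseline for the corridor.


def flag_anomalies(readings: dict[str, float],
                   baseline_mean: float,
                   baseline_std: float,
                   z_cutoff: float = 2.0) -> list[str]:
    """Return segment IDs more than z_cutoff standard deviations from baseline."""
    return [seg for seg, value in readings.items()
            if abs(value - baseline_mean) / baseline_std > z_cutoff]


# Example: made-up readings for four segments; the corridor's historical
# baseline is assumed to be 2.2 +/- 0.4 m/km.
segments = {"NH44-km12": 2.1, "NH44-km13": 2.3, "NH44-km14": 2.0, "NH44-km15": 4.1}
print(flag_anomalies(segments, baseline_mean=2.2, baseline_std=0.4))
# -> ['NH44-km15']
```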
Autonomous infrastructure in action
Across urban India, infrastructure is beginning to self-monitor. Pune’s Intelligent Traffic Management System (ITMS) and Bengaluru’s adaptive traffic control systems are early examples of AI-driven urban mobility.
Meanwhile, AI-MC, launched by the Ministry of Road Transport and Highways (MoRTH), uses GPS-enabled compactors and drone-based pavement surveys to optimise road construction.
Beyond cities, state-level initiatives are also embracing AI for infrastructure monitoring. As reported by ETInfra earlier, Bihar’s State Bridge Management & Maintenance Policy, 2025 employs AI and machine learning for digital audits of bridges and culverts. Using sensors, drones, and 3D digital twins, the state has surveyed over 12,000 culverts and 743 bridges, identifying damaged structures for repair or reconstruction. IIT Patna and Delhi have been engaged for third-party audits, showing how AI can extend beyond roads to critical bridge infrastructure in both urban and rural contexts.
While these examples demonstrate the potential of AI-powered maintenance, challenges remain. Predictive maintenance, KPMG notes, could reduce lifecycle costs by up to 30 per cent and improve asset longevity, but much of rural India—nearly 70 per cent of the network—still relies on manual inspections and paper-based reporting.
Governance and the algorithm
India’s road safety crisis is staggering: over 1.5 lakh (150,000) deaths annually. AI could be a game-changer. KPMG estimates that intelligent systems can reduce emergency response times by 60 per cent and improve traffic efficiency by 30 per cent. AI also supports ESG goals, enabling carbon modeling, EV corridor planning, and sustainable design.
But technology alone won’t fix systemic gaps. The promise of AI hinges on institutional readiness – spanning urban planning, enforcement, and civic engagement.
While NITI Aayog has outlined a national AI strategy, and MoRTH has initiated digital reforms, state-level adoption remains fragmented. Some states have set up AI cells within their PWDs; others lack the technical capacity or policy mandate.
KPMG calls for a unified governance framework — one that enables interoperability, safeguards data, and fosters public-private partnerships. Without it, India risks building smart systems on shaky foundations.
As India looks towards 2047, the road ahead is both digital and political. And if AI can help us listen to our roads, perhaps we’ll finally learn to fix them before they speak in potholes.
AI Research
Mistral AI Nears Close of Funding Round Lifting Valuation to $14B

Artificial intelligence (AI) startup Mistral AI is reportedly nearing the close of a funding round in which it would raise €2 billion (about $2.3 billion) and be valued at €12 billion (about $14 billion).
AI Research
PPS Weighs Artificial Intelligence Policy

Portland Public Schools folded some guidance on artificial intelligence into its district technology policy for students and staff over the summer, though some district officials say the work is far from complete.
The guidelines permit certain district-approved AI tools “to help with administrative tasks, lesson planning, and personalized learning” but require staff to review AI-generated content, check accuracy, and take personal responsibility for any content generated.
The new policy also warns against inputting personal student information into tools, and encourages users to think about inherent bias within such systems. But it’s still a far cry from a specific AI policy, which would have to go through the Portland School Board.
Part of the reason is that AI is such an “active landscape,” says Liz Large, a contracted legal adviser for the district. “The policymaking process as it should is deliberative and takes time,” Large says. “This was the first shot at it…there’s a lot of work [to do].”
PPS, like many school districts nationwide, is continuing to explore how to fold artificial intelligence into learning, but not without controversy. As The Oregonian reported in August, the district is entering a partnership with Lumi Story AI, a chatbot that helps older students craft their own stories with a focus on comics and graphic novels (the pilot is offered at some middle and high schools).
There’s also concern from the Portland Association of Teachers. “PAT believes students learn best from humans, instead of AI,” PAT president Angela Bonilla said in an Aug. 26 video. “PAT believes that students deserve to learn the truth from humans and adults they trust and care about.”