Wielding Artificial Intelligence, the National Labs Take a Stab at Disaster Resilience

Key Takeaways:
- The U.S. Department of Energy’s National Laboratories are leveraging artificial intelligence (AI) to support innovations in natural disaster and extreme weather forecasting.
- Tools like Argonne National Laboratory’s ADDA and Pacific Northwest National Laboratory’s ChatGrid and RADR are identifying flood risks to individual neighborhoods, tracking wildfire boundaries and burn severity, and calculating risks to transmission lines in winter storms and heat waves alike.
- State governments, city planners, utility companies, and emergency agencies can access and apply the data from these tools to act quickly to protect people and infrastructure in the face of extreme weather.
From hurricanes and wildfires to flooding and extreme heat, billion-dollar disasters are on the rise in the United States, with average annual losses from extreme weather events totaling $149.3 billion between 2020 and 2024. As disasters and their price tags increase, predictive planning and adaptation tools have become increasingly critical to resilience. The U.S. Chamber of Commerce Foundation estimates that every dollar invested in community disaster preparedness can save $13 in damages, cleanup costs, and economic impacts down the line.
At the U.S. Department of Energy’s National Laboratories, artificial intelligence (AI) is supporting innovations in advanced predictive modeling crucial to disaster forecasting, preparedness, and risk mitigation. Among the labs at the forefront of AI-driven climate resilience are the Argonne National Laboratory and the Pacific Northwest National Laboratory.
Community-Level Disaster Prediction at the Argonne National Laboratory
In 2023, researchers at the Argonne National Laboratory (ANL) in Lemont, Illinois, unveiled the Argonne Downscaled Data Archive (ADDA) to help fill a crucial gap in understanding extreme weather risk. ADDA uses artificial intelligence and advanced computing to predict the risk of wildfires, drought, extreme heat and cold, hurricanes, rainfall, flooding, and sea level rise at the national, regional, city, and community levels.
Aerial view of Argonne National Laboratory in Illinois. Credit: Argonne National Laboratory via Flickr
Unlike global climate models, which can only resolve risk at scales of 62 to 124 miles, ADDA can make risk projections for areas spanning just 2.5 miles (4 kilometers). The data behind these predictions are also particularly comprehensive: both future-looking and retrospective, based on 20 years of historical weather data and a wide range of past and future global greenhouse gas emissions scenarios. Most importantly, ADDA makes its data accessible, translating and condensing massive datasets into multiple formats that utilities, governments, and the public can use. These users can also access the information through ANL’s Climate Risk and Resilience (ClimRR) portal.
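For readers unfamiliar with gridded climate data, the resolution numbers can be made concrete with a toy sketch. The Python below simply regrids a coarse field onto a finer grid with bilinear interpolation; it illustrates what “resolution” means in grid terms, and is not ADDA’s method, which uses AI-assisted downscaling to add physically meaningful detail that interpolation cannot recover. All names and values here are invented.

```python
# Toy illustration of the resolution gap only: regridding a coarse
# (~100 km) temperature field onto a fine (~4 km) grid with bilinear
# interpolation. This shows grid geometry, not ADDA's actual technique.
import numpy as np

def bilinear_regrid(coarse: np.ndarray, factor: int) -> np.ndarray:
    """Upsample a 2-D field by `factor` using bilinear interpolation."""
    h, w = coarse.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int).clip(0, h - 2)
    x0 = np.floor(xs).astype(int).clip(0, w - 2)
    wy = (ys - y0)[:, None]   # fractional row weights
    wx = (xs - x0)[None, :]   # fractional column weights
    a = coarse[np.ix_(y0, x0)]          # top-left neighbors
    b = coarse[np.ix_(y0, x0 + 1)]      # top-right
    c = coarse[np.ix_(y0 + 1, x0)]      # bottom-left
    d = coarse[np.ix_(y0 + 1, x0 + 1)]  # bottom-right
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)

# ~100 km cells regridded 25x toward ~4 km cells:
coarse = np.random.rand(8, 8) * 15 + 10   # toy temperature field, deg C
fine = bilinear_regrid(coarse, 25)
print(coarse.shape, "->", fine.shape)      # (8, 8) -> (200, 200)
```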
The latest version of ADDA, known as ADDA v2, ups its climate resilience game in two ways. First, the tool can now provide updates by the hour, offering the type of real-time data needed to keep people and critical infrastructure safe during emergencies. Second, ADDA v2 accounts for previously underrepresented areas by expanding its coverage both outward and inward: it provides data for a broader swath of North America, from Alaska to the Caribbean islands, at a spatial resolution of 2.5 miles (ADDA v1, the earlier version, worked at a resolution of 7.5 miles).
This type of information has crucial real-world applications for state governments, city planners, utility companies, and emergency agencies. The governments of California, Texas, and Portland, Maine, used ADDA v1 to identify risks to their critical infrastructure as part of their respective regional resiliency assessment programs. Commonwealth Edison, Illinois’s largest electric utility, used ADDA v1 to identify the impacts of future extreme heat and wind gusts on its grid infrastructure, while the New York Power Authority used the tool to assess the impact of extreme rainfall and flooding on its power plants and the impact of extreme heat on its transmission lines. The Department of Energy applied ADDA data to weigh how climate change will impact soil and groundwater at 118 contaminated “legacy sites” across the country. It has also been used to assess risk from hurricanes, wildfires, droughts, and extreme cold.
Bolstering Grid Resilience with the Pacific Northwest National Laboratory’s ChatGrid
The U.S. electric grid comprises thousands of power lines, substations, and control centers that work together to keep electricity flowing to our homes, schools, and businesses. Every day, grid operators have to make quick decisions when problems arise—from storms and equipment damage to sudden spikes in electricity use. To identify the issue and its solution, operators usually sort through massive amounts of technical data, which can be time intensive, slowing down response efforts.
Aerial view of Pacific Northwest National Laboratory in Washington State. Credit: Pacific Northwest National Laboratory via Flickr
To help accelerate this process, researchers at Pacific Northwest National Laboratory (PNNL) created a tool called ChatGrid, which was made available to the public on GitHub in 2024. ChatGrid answers questions about the power grid in real time, using the same kind of advanced language model that powers tools like ChatGPT. This means users can ask questions in plain English and get clear, immediate answers. For example, a user might ask, “How much power is Wind Generator A producing in the northwest?” ChatGrid will then create an easy-to-follow map that shows electricity output, voltage, and how power is flowing through different parts of the grid.
Rather than relying on live grid data, which is often inaccessible, ChatGrid uses simulated data from a platform called the Exascale Grid Optimization model, which mimics how the nation’s power grid behaves under different conditions. This allows operators and planners to explore “what-if” scenarios, such as how a major storm or outage might affect the system. As Chris Oehmen, one of the lead researchers on the project, puts it, “If you have a hurricane coming, you need to know where to send trucks and equipment right away. You don’t have a week.”
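The article does not detail ChatGrid’s internals, but the general pattern it describes, a language model translating a plain-English question into a query over simulated grid data, can be sketched roughly as follows. Everything here, from the toy schema to the ask_grid function, is a hypothetical illustration rather than ChatGrid’s actual code or API; a real system would call a language model where the stub below returns a canned translation.

```python
# Minimal sketch of the ChatGrid-style pattern: a language model turns a
# plain-English question into a query over *simulated* grid data, and the
# result comes back in a readable form. Schema and names are invented.
import sqlite3

# Toy stand-in for simulated grid-state data (ChatGrid draws on output
# from the Exascale Grid Optimization model; this table is made up).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE generators (
    name TEXT, region TEXT, kind TEXT, output_mw REAL, voltage_kv REAL)""")
db.executemany(
    "INSERT INTO generators VALUES (?, ?, ?, ?, ?)",
    [("Wind Generator A", "northwest", "wind", 412.5, 230.0),
     ("Gas Plant B", "northwest", "gas", 890.0, 500.0)],
)

def question_to_sql(question: str) -> str:
    """Where a real system would prompt an LLM with the schema and the
    user's question and ask it to emit SQL. Stubbed with one canned
    translation so this sketch runs offline."""
    canned = {
        "How much power is Wind Generator A producing in the northwest?":
            "SELECT output_mw FROM generators "
            "WHERE name = 'Wind Generator A' AND region = 'northwest'",
    }
    return canned[question]

def ask_grid(question: str) -> str:
    """Translate the question, run it, and format a plain answer."""
    rows = db.execute(question_to_sql(question)).fetchall()
    # A real tool would render a map; here we just report the number.
    return f"{question} -> {rows[0][0]} MW"

print(ask_grid("How much power is Wind Generator A producing in the northwest?"))
```

The design point this pattern buys is the one the article emphasizes: the operator never writes SQL or sifts raw telemetry; the query layer does that translation on their behalf.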
By making complex grid information easier to access, ChatGrid helps utility operators and planners make smart decisions more quickly. As the tool continues to improve, it has the potential to play a major role in keeping the U.S. power system reliable—especially in the face of extreme weather and growing energy demand.
Rapid Analytics for Disaster Response at Pacific Northwest National Laboratory
The Rapid Analytics for Disaster Response (RADR) system is a disaster assessment tool developed by PNNL in 2014 to gauge the impact of natural disasters on infrastructure. When a wildfire, flood, hurricane, or earthquake hits, RADR pulls together satellite imagery, AI, and cloud computing to rapidly depict what is happening on the ground.
RADR takes large volumes of high-resolution, open-access satellite images and processes them using machine learning models that are trained to detect damage and risks to infrastructure. Within minutes of receiving images, RADR can identify which areas have been affected and where critical infrastructure might be in danger. This allows planners and emergency responders to move quickly, without waiting for lengthy manual assessments.
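The article describes this pipeline only at a high level; a minimal sketch of the general tile-and-classify pattern such systems use might look like the following. The tile size, threshold, and stand-in scoring function are all assumptions for illustration, and RADR’s actual models and thresholds are not described here.

```python
# Rough sketch of a tile-and-classify pipeline for satellite imagery:
# split a large scene into tiles, score each tile with a damage model,
# and flag tiles that exceed a threshold. The scoring function below is
# a toy stand-in for a trained ML model, not RADR's.
import numpy as np

TILE = 256  # tile edge length in pixels (assumed, for illustration)

def iter_tiles(scene: np.ndarray):
    """Yield (row, col, tile) windows across a (H, W, bands) scene."""
    h, w = scene.shape[:2]
    for r in range(0, h - TILE + 1, TILE):
        for c in range(0, w - TILE + 1, TILE):
            yield r, c, scene[r:r + TILE, c:c + TILE]

def damage_score(tile: np.ndarray) -> float:
    """Stand-in scorer: mean brightness of the first band, scaled to
    [0, 1]. A real system would run a trained model here."""
    return float(tile[..., 0].mean() / 255.0)

def flag_damaged_tiles(scene: np.ndarray, threshold: float = 0.5):
    """Return pixel offsets of tiles whose score exceeds the threshold."""
    return [(r, c) for r, c, t in iter_tiles(scene)
            if damage_score(t) > threshold]

# Usage with a random stand-in "scene" (H x W x bands, uint8):
scene = np.random.randint(0, 256, size=(1024, 1024, 4), dtype=np.uint8)
print(flag_damaged_tiles(scene)[:5])
```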
In response to the increasing frequency and reach of wildfires, PNNL created a specialized version of their system called RADR-Fire. Fully automated, cloud-based, and AI-driven, RADR-Fire maps wildfire boundaries, tracks burn severity, and identifies how close fires are to key infrastructure like power lines, roads, and utilities. The result? Better access to the information emergency managers need to respond to wildfires.
RADR is already being used by the Department of Energy, the Federal Emergency Management Agency, the National Interagency Fire Center, the U.S. Geological Survey, utilities, and the private sector. These users rely on RADR’s ability to quickly turn raw satellite data into clear, actionable visualizations. For federal agencies, RADR supports faster disaster response and recovery planning. For utilities and private industry, it helps teams make decisions to protect critical infrastructure and reduce service disruptions.
By automating what would otherwise be a slow and resource-heavy process, RADR is helping decision-makers get ahead of fast-moving disasters and improve how they respond to and manage infrastructure risks.
AI-driven disaster prediction and preparedness tools developed by national labs are revolutionizing how we anticipate and respond to extreme weather events. These technologies can save lives, reduce economic losses, and strengthen community resilience.
Authors: Raneem Iftekhar and Nicole Pouy
FDA plans advisory committee meeting on AI mental health devices

The Food and Drug Administration will convene experts to discuss challenges around regulating mental health products that use artificial intelligence, as a growing number of companies release chatbots powered by large language models whose output can be unpredictable.
The move suggests the agency may soon tighten its focus on such tools.
The Nov. 6 meeting of the FDA’s Digital Health Advisory Committee (DHAC) will focus on “Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices,” according to a notice published Thursday in the Federal Register. The notice says newly released mental health products using AI pose “novel risks and, as mental health devices continue to evolve in complexity, regulatory approaches ideally will also evolve to accommodate these novel challenges.”
Inside Apple’s Artificial Intelligence Strategy

Apple’s artificial intelligence strategy has become something of a paradox: A company famed for redefining consumer technology is now seen as trailing behind in the generative AI boom. Siri, hyped for years as a next-generation personal assistant, falls short of latecomers like Google Assistant and ChatGPT in intelligence and contextual awareness. And at the recent debut of the iPhone 17, Apple barely mentioned Apple Intelligence, its AI system that is still largely in the making.
To this day, the lion’s share of Apple’s AI capabilities are outsourced to third-party systems, an awkward position for a company long renowned for innovation. Now, many are wondering whether the world’s most valuable brand will step back for good, letting leaders like Google or OpenAI take the lead while it stays rooted in hardware.
What Is Apple’s AI Strategy?
Apple’s approach to artificial intelligence appears slow yet deliberate. Instead of building massive, general-purpose language models and public-facing chatbots, the company favors small acquisitions, selective partnerships and in-house development that emphasizes privacy and on-device processing.
But, despite the perception of being slow, Apple’s approach follows a familiar pattern. The company has always avoided making splashy acquisitions, instead folding in small teams and technologies strategically until it can scale in-house once the timing is right. This playbook has been repeated time after time, from Apple Maps and Music to its custom silicon chips.
So, what some see as Apple being late to the party is actually a calculated tortoise-and-hare strategy playing out, or at least that’s what CEO Tim Cook suggests. Current partnerships with OpenAI and Anthropic keep Apple in the game while it quietly works on its own foundation models. Whether its next step involves buying, partnering or doubling down on its own research, the expectation is that Apple likely won’t stay behind forever.
Apple’s AI Strategy at a Glance
Apple’s approach to AI blends small but targeted acquisitions and carefully chosen partnerships with major players. While it hasn’t made any blockbuster moves just yet, the company seems to be quietly shaping its portfolio and shifting talent around to bring more AI development in-house.
The Acquisitions We Know About
During Apple’s third-quarter earnings call, CEO Tim Cook said the company is “very open to” mergers and acquisitions that “accelerate” its product roadmap, and “are not stuck on a certain size company, although the ones that [Apple has] acquired thus far this year are small in nature.”
Only four of these companies have been identified thus far:
- WhyLabs: An AI observability platform that monitors machine learning models for anomalies to ensure reliable performance. For Apple, this means more secure generative AI and optimized on-device intelligence.
- Common Ground: Formerly known as TrueMeeting, this AI startup focused on creating hyper-realistic digital avatars and virtual meeting experiences. Its tech is likely to fold into Apple’s Vision Pro ecosystem.
- RAC7: The two-person video game developer behind the mobile arcade title Sneaky Sasquatch. The acquisition gives Apple its first-ever in-house game studio, which will focus on creating exclusive content for Apple Arcade.
- Pointable AI: Three days into the year, Apple bought this AI knowledge-retrieval startup that links enterprise data feeds to large language model workflows. The platform lets Apple create reliable LLM-driven applications that can be integrated into on-device search, AI copilots and automation tools.
Internally, Apple is restructuring its ranks to prioritize AI development within the company, according to Cook.
Companies Apple Is Talking To
Apple has reportedly been exploring the purchase of Mistral AI, a French developer now valued at about $14 billion. Mistral has its own chatbot, Le Chat, which runs on its own AI models, as well as various open-source offerings, consumer apps, developer tools and a wide selection of APIs — all while sharing Apple’s hardline stance on privacy. For a while, Apple was also thinking about acquiring Perplexity, but walked away from the multi-billion-dollar deal in part due to mounting concerns over the AI search engine’s controversial web-scraping practices, which clash with Apple’s emphasis on privacy. Instead, Apple plans to become a direct competitor, beefing up its Siri product.
Meanwhile, Apple’s partnership with Anthropic has expanded significantly over the past few months. The collaboration now includes integrating Anthropic’s Claude model into Apple’s Xcode software, creating a “vibe coding” developer tool that helps write, edit and test code more efficiently. Apple is also considering Anthropic’s models for its long-overdue Siri overhaul, with the new version expected to launch in early 2026.
But it’s not the only contender. Apple confirmed to 9to5Mac that it will be integrating OpenAI’s GPT-5 model with the fall launch of iOS 26, and has reportedly reached a formal agreement with Google to test a custom Gemini model for the virtual assistant. Internally known as “World Knowledge Answers,” this feature would let users search information from across their entire device and the web, delivering its findings in AI-generated summaries alongside any relevant text, photos, videos and points of interest in a single, digestible view.
Together, these partnerships with Anthropic, OpenAI and Google give Apple the flexibility to test different AI systems and products and see which fits best into its existing systems, all while keeping its cards close to the chest.
How the Google Search Deal Fits In
Apple’s AI plans are also closely tied to its $20 billion-per-year search deal with Google, which makes Google’s search engine the default in Apple’s Safari browser and Siri. That contract accounts for a massive portion of Apple’s Services revenue — roughly 20 percent — giving the company the financial freedom to take a slower, more deliberate approach to AI.
Fortunately for Apple, the deal is still permitted under the recent antitrust ruling against Google. But if regulators ever choose to limit or terminate it, Apple would lose a critical cash stream and be forced to build its own search solution. That looming risk could force Apple’s typically cautious approach into a sprint, making partnerships, acquisitions and internal development more urgent.
Why Is Apple Moving Slowly on AI?
Apple’s slow pace largely stems from a push-pull standoff between two top executives at the company. Eddy Cue, the senior vice president of Services, has long championed bold acquisitions to accelerate growth, while Craig Federighi, who oversees Apple’s operating system, wants to focus on building from within. Cue’s camp believes that buying startups is the key to gaining the upper hand in AI, whereas Federighi’s side sees acquisitions as a source of complexity and cultural friction.
At this point, Apple stands in stark contrast to competitors like Google, Meta and Microsoft, which are spending billions to acquire startups and poach top AI talent with hundred-million-dollar signing bonuses and even higher compensation packages. Instead, Apple has stuck to its cautious playbook, which has probably spared it from some costly missteps over the years. But that restraint also leaves the company vulnerable: If its rivals continue to outpace it in AI investment and adoption, Apple’s reputation for being “too big to fail” may face its toughest test yet.
Apple’s History of Selective Acquisitions
Apple has made more than 100 acquisitions in its history, but almost all were small, quiet and tech-driven. Now, with $133 billion in cash on hand, the company has enough to make a mega AI acquisition. But, given Apple’s pattern of restraint, it may choose not to, which is why the current multi-billion-dollar speculation around the company’s next move is such a big deal.
Here is a quick look at Apple’s past money moves:
1997 – NeXT ($400 million): The computer company Steve Jobs founded after leaving Apple. The acquisition brought Jobs back to the company, along with the foundation for the operating systems that became macOS and iOS.
2005 – FingerWorks (undisclosed amount): A startup whose gesture-recognition tech enabled the iPhone’s multi-touch interface.
2008 – PA Semi ($278 million): Chip design firm that gave Apple the know-how to build its own silicon, leading to the A-series processors in iPhones and iPads and the M-series in Macs.
2010 – Siri ($200 million): A voice-assistant startup spun out of SRI International, Siri brought conversational AI to the iPhone and became a core iOS feature.
2012 – AuthenTec ($356 million): The fingerprint sensor company behind Touch ID.
2013 – PrimeSense (about $350 million): The 3D-sensing company whose tech powered Face ID and AR depth cameras.
2014 – Beats Electronics ($3 billion): Apple’s largest-ever acquisition brought premium headphones, the Beats Music streaming service and key executives Jimmy Iovine and Dr. Dre to the company, both of whom helped jumpstart Apple Music.
2018 – Shazam ($400 million): A music recognition app that was integrated into Siri and Apple Music.
2020 – Xnor.ai ($200 million): An edge AI startup that boosted Apple’s on-device, privacy-first AI by running machine learning models directly on devices, eliminating the need to send data to the cloud.
Does Apple use AI?
Yes, Apple has long incorporated artificial intelligence into its devices through features like Face ID, Siri and Apple Pay. The company’s proprietary AI system, Apple Intelligence, has been integrated across iOS, iPadOS and macOS.
Is Apple building its own AI?
Yes, Apple is actively developing its own artificial intelligence system, called Apple Intelligence. It is also working on a massive Siri upgrade, which is slated to roll out in 2026.
What AI companies has Apple bought?
Some of the AI companies Apple has acquired over the years include:
- WhyLabs: An AI observability platform that monitors machine learning models for anomalies to ensure reliable performance.
- Common Ground: An AI startup focused on creating hyper-realistic digital avatars and virtual meeting experiences. Its tech is likely to fold into Apple’s Vision Pro ecosystem.
- Pointable AI: An AI knowledge-retrieval startup that links enterprise data feeds to large language model workflows.
- Siri: A voice assistant spun out of SRI International that Apple has since integrated as a core iOS feature.
- AuthenTec: A fingerprint sensor company that Apple used to offer Touch ID.
- PrimeSense: A 3D-sensing company whose technology powers Apple’s Face ID and AR depth cameras.
- Shazam: A music recognition app that was integrated into Siri and Apple Music.
- Xnor.ai: Edge AI tech used to boost Apple’s on-device, privacy-first AI by running machine learning models directly on the device, without having to send any data to the cloud.