AI Insights
How Physical Operations Organizations Are Using AI to Protect Workers and Increase Efficiency

AI Insights
Transparency is key as AI gets smarter, experts say

To gain the U.S. government’s trust, advanced AI systems must be engineered from the outset with reliable components offering explainability and transparency, senior federal and industry officials said Friday.
“This [topic] is something I think about a lot,” the CIA’s chief AI officer Lakshmi Raman noted at the Billington Cybersecurity Summit. “And in our [community], it’s about how artificial intelligence can assist and be an intelligence amplifier with the human during the process, keeping their eyes on everything that’s happening and ensuring that, at the end, they’re able to help.”
During a panel, Raman and other current and former government officials underscored the importance of guardrails and oversight — particularly as the U.S. military and IC adopt the technology for an ever-increasing range of operations, and experts predict major breakthroughs will emerge in certain areas within the next few years.
“Trust is such a critical dimension for intelligence,” said Sean Batir, a National Geospatial-Intelligence Agency alum and AWS principal tech lead for frontier AI, quantum and robotics.
Frontier AI refers to next-generation systems, also dubbed foundation models, that are considered among the most powerful and complex technologies currently in development. These likely disruptive capabilities hold potential to unlock discoveries that could be immensely helpful or catastrophically harmful to humanity.
Departments across the government have been expanding their use of AI and machine learning over the past several years, but defense and national security agencies were some of the earliest adopters. Recently, in July, questions started swirling after the Pentagon’s Chief Digital and AI Office (CDAO) revealed new, separate deals with xAI, Google, Anthropic and OpenAI to accelerate the enterprise- and military-wide deployment of frontier AI.
“ChatGPT, our flagship product, has upwards of 800 million users every day. So one-tenth of the world is using ChatGPT in various forms,” said Joseph Larson, vice president and head of government business at OpenAI. “At an individual level, AI is there. The more challenging question [for] my job is, with government [use], what does AI adoption look like at an institutional level?”
Larson previously served from 2022 to 2024 as the Pentagon’s first-ever deputy chief digital and AI officer for algorithmic warfare.
“When we talk about institutions, what does that require above and beyond just access to the technology and the foundation models? It requires, really, a partnership. And that partnership extends to questions around infrastructure, around data, and I think, key, around security,” he said. “And what are the security implications for AI as it moves from just something that you communicate with, that informs maybe a workflow, to something that’s part of an agentic system that’s actually operating in your environment and that has its own controls and authorities? So, those institutional challenges are really the ones that are driving our work within the government today.”
Both OpenAI and Anthropic have reportedly disclosed recent efforts to implement new guardrails because their models appear to be approaching high-risk levels for potentially helping produce certain weapons.
On the panel, Anthropic Chief Information Security Officer Jason Clinton noted that “trust is something that is built up over time.”
“In order to do that, there is a human — and there is a supervisory role — for these models. The one thing that those models will never be able to do is to bring humanity to the equation, right? We will. We will always need to bring our perspective, our values, our institutional wisdom, to what we’re asking the models to be doing,” Clinton said.
He and the other panelists spotlighted multiple risks and threats posed by emerging frontier AI applications. For instance, prompt injections are a type of cyberattack that happen when malicious users craft inputs to an AI system to trick the model into performing unintended or dangerous actions, such as revealing sensitive data or generating unsafe material.
“I’m very optimistic that we will solve some of the more fundamental guardrail problems — like prompt injection — within three-ish years, I guess,” Clinton said. “And the models will be getting smarter, so I suspect the ways that we interact with them will evolve towards more like having a virtual coworker beside you, who you interact with and who learns and adapts … and sort of grows with you in your environment.”
The panelists also discussed the potential power of cutting-edge AI to help reduce vulnerabilities in software by automatically finding and fixing bugs in code and zero-day exploits.
“DARPA just ran a competition at DefCon [hacking conference] that demonstrated the future possibilities there,” said Dr. Kathleen Fisher, director of that agency’s Information Innovation Office.
For the event, officials pulled 54 million lines of code across 20 different repositories that were recommended by critical infrastructure operators who use them to do their daily business.
“The teams that ran the competition planted 70 systemic vulnerabilities that were patterned after real vulnerabilities that people have struggled with. The teams found 54 of those systemic vulnerabilities, and they patched 43 of them. More importantly, or at the same time, they found 18 zero-days, and they patched 11 of those. It took about 45 minutes to find and fix vulnerability at a cost of $152. Think about what that might mean in the future — like this is the worst that technology is ever going to be,” Fisher said. “Think about what that might mean in the context of things like Volt Typhoon, and Salt Typhoon, and ransomware that is currently plaguing our hospitals. When a hospital gets affected by a ransomware attack — when it shuts down for any period of time — that puts people’s lives at risk.”
Building on that, Microsoft Federal Chief Technology Officer Jason Payne added: “This is the worst version of the technology we will ever use. I also feel like we have the lowest amount of trust in technology, right? And I think if we all use it more, if we experience it more, we’ll sort of understand what it is and what it’s capable of.”
He continued: “Security, governance and explainability are key themes that we’re looking for to kind of build that trust. And at the end of the day, I think government agencies are looking for organizations that are transparent with their AI systems.”
AI Insights
Anthropic makes its pitch to DC, warning China is ‘moving even faster’ on AI

Anthropic is on a mission this week to set itself apart in Washington, pitching the government’s adoption of artificial intelligence as a national security priority while still emphasizing transparency and basic guardrails on the technology’s rapid development.
The AI firm began making the rounds in Washington, D.C., on Monday, hosting a “Futures Forum” event before company co-founders Jack Clark and Dario Amodei head to Capitol Hill to meet with policymakers.
Anthropic is one of several leading AI firms seeking to expand its business with the federal government, and company leaders are framing the government’s adoption of its technology as a matter of national security.
“American companies like Anthropic and other labs are really pushing the frontiers of what’s possible with AI,” Kate Jensen, Anthropic’s head of sales and partnerships, said during Monday’s event. “But other countries, particularly China, are moving even faster than we are on adoption. They are integrating AI as government services, industrial processes and citizen interactions at massive scale. We cannot afford to develop the world’s most powerful technology and then be slow to deploy it.”
Because of this, Jensen said adoption of AI into the government is “particularly crucial.” According to the Anthropic executive, hundreds of thousands of government workers are already using Claude, but many ideas are “still left untapped.”
“AI provides enormous opportunity to make government more efficient, more responsive and more helpful to all Americans,” she said. “Our government is adopting Claude at an exciting pace, because you too see the paradigm shift that’s happening and realize how much this technology can help all of us.”
Her comments come as the Trump administration urges federal agencies to adopt automation tools and improve workflows. As part of a OneGov deal with the General Services Administration, Anthropic is offering its Claude for Enterprise and Claude for Government models to agencies for $1 for one year.
According to Jensen, the response to the $1 deal has been “overwhelming,” with dozens of agencies expressing interest in the offer. Anthropic’s industry competitors, like OpenAI and Google, also announced similar deals with the GSA to offer their models to the government for a severely discounted price.
Beyond the GSA deal, Anthropic’s federal government push this year has led to its models being made available to U.S. national security customers and staff at the Lawrence Livermore National Lab.
Anthropic’s Claude for Government models have FedRAMP High certification and can be used by federal workers dealing with sensitive, unclassified work. The AI firm announced in April that it partnered with Palantir through the company’s FedStart program, which assists with FedRAMP compliance.
Jensen pointed specifically to Anthropic’s work at the Pentagon’s Chief Digital and AI Office. “We’re leveraging our awarded OTA [other transaction agreement] to scope pilots, we’re bringing our frontier technology and our technical teams to solve operational problems directly alongside the warfighter and to help us all move faster.”
However, as companies including Anthropic seize the opportunity to collaborate with the government, Amodei emphasized the need for “very basic guardrails.” Congress has grappled with how to regulate AI for months, but efforts have stalled amid fierce disagreements.
“We absolutely need to beat China and other authoritarian countries; that is why I’ve advocated for the export controls. But we need to not destroy ourselves in the process,” Amodei said during his fireside chat with Clark. “The thing we’ve always advocated for is basic transparency requirements around models. We always run tests on the models. We reveal the test to the world. We make a point of them, we’re trying to see ahead to the dangers that we present in the future.”
The view is notably different from some of Anthropic’s competitors, which are instead pushing for light-touch regulation of the technology. Amodei, on the other hand, said a “basic transparency requirement” would not hamper innovation, as some other companies have suggested.
AI Insights
California bill to regulate high-risk AI fails to advance state legislature

A California artificial intelligence bill addressing the use of automated decision systems in hiring and other consequential matters failed to advance in the state assembly during the final hours of the 2025 legislative session Friday.
The bill (AB 1018) would have required companies and government agencies to notify individuals when automated decision systems were used for “consequential decisions,” such as employment, housing, health care, and financial services.
Democratic assemblymember Rebecca Bauer-Kahan, the bill’s author, paused voting on the bill until next year to allow for “additional stakeholder engagement and productive conversations with the Governor’s office,” according to a Friday press release from her office.
“This pause reflects our commitment to getting this critical legislation right, not a retreat from our responsibility to protect Californians,” Bauer-Kahan said in a statement. “We remain committed to advancing thoughtful protections against algorithmic discrimination.”
The Business Software Alliance, a global trade association that represents large technology companies and led an opposition campaign against the bill, argued that the legislation would have unfairly subjected companies using AI systems “into an untested audit regime” that risked discouraging responsible adoption of AI tools throughout the state.
“Setting clear, workable, and consistent expectations for high-risk uses of AI ultimately furthers the adoption of technology and more widely spreads its benefits,” Craig Albright, senior vice president at BSA, told StateScoop in a written statement. “BSA believes there is a path forward that sets obligations for companies based on their different roles within the AI value chain and better focuses legislation to ensure that everyday and low-risk uses of AI are not subjected to a vague and confusing regulatory regime.”
Since it was introduced in February, the bill was amended to narrow when AI audits are required, clarify what kinds of systems and “high-stakes” decisions are covered, exempt low-risk tools like spam filters, and add protections for trade secrets while limiting what audit details must be made public. It also refined how lawsuits and appeals work and aligned the bill more clearly with existing civil rights laws.
AB 1018’s failure comes on the heels of the Colorado state legislature voting to delay implementing the Colorado AI Act, the state’s high-risk artificial intelligence legislation, until the end of June next year, five months after the law was supposed to go into effect. Similar to California’s AI bill, Colorado’s Artificial Intelligence Act would also regulate high-risk AI systems in areas like hiring, lending, housing, insurance and government services.
-
Business2 weeks ago
The Guardian view on Trump and the Fed: independence is no substitute for accountability | Editorial
-
Tools & Platforms1 month ago
Building Trust in Military AI Starts with Opening the Black Box – War on the Rocks
-
Ethics & Policy2 months ago
SDAIA Supports Saudi Arabia’s Leadership in Shaping Global AI Ethics, Policy, and Research – وكالة الأنباء السعودية
-
Events & Conferences4 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Jobs & Careers3 months ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Podcasts & Talks2 months ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Education3 months ago
VEX Robotics launches AI-powered classroom robotics system
-
Education2 months ago
Macron says UK and France have duty to tackle illegal migration ‘with humanity, solidarity and firmness’ – UK politics live | Politics
-
Podcasts & Talks2 months ago
OpenAI 🤝 @teamganassi
-
Funding & Business3 months ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries