Connect with us

AI Insights

Anthropic makes its pitch to DC, warning China is ‘moving even faster’ on AI 

Published

on


Anthropic is on a mission this week to set itself apart in Washington, pitching the government’s adoption of artificial intelligence as a national security priority while still emphasizing transparency and basic guardrails on the technology’s rapid development. 

The AI firm began making the rounds in Washington, D.C., on Monday, hosting a “Futures Forum” event before company co-founders Jack Clark and Dario Amodei head to Capitol Hill to meet with policymakers. 

Anthropic is one of several leading AI firms seeking to expand its business with the federal government, and company leaders are framing the government’s adoption of its technology as a matter of national security. 

“American companies like Anthropic and other labs are really pushing the frontiers of what’s possible with AI,” Kate Jensen, Anthropic’s head of sales and partnerships, said during Monday’s event. “But other countries, particularly China, are moving even faster than we are on adoption. They are integrating AI as government services, industrial processes and citizen interactions at massive scale. We cannot afford to develop the world’s most powerful technology and then be slow to deploy it.”

Because of this, Jensen said adoption of AI into the government is “particularly crucial.” According to the Anthropic executive, hundreds of thousands of government workers are already using Claude, but many ideas are “still left untapped.” 

“AI provides enormous opportunity to make government more efficient, more responsive and more helpful to all Americans,” she said. “Our government is adopting Claude at an exciting pace, because you too see the paradigm shift that’s happening and realize how much this technology can help all of us.”

Her comments come as the Trump administration urges federal agencies to adopt automation tools and improve workflows. As part of a OneGov deal with the General Services Administration, Anthropic is offering its Claude for Enterprise and Claude for Government models to agencies for $1 for one year. 

According to Jensen, the response to the $1 deal has been “overwhelming,” with dozens of agencies expressing interest in the offer. Anthropic’s industry competitors, like OpenAI and Google, also announced similar deals with the GSA to offer their models to the government for a severely discounted price. 

Beyond the GSA deal, Anthropic’s federal government push this year has led to its models being made available to U.S. national security customers and staff at the Lawrence Livermore National Lab. 

Anthropic’s Claude for Government models have FedRAMP High certification and can be used by federal workers dealing with sensitive, unclassified work. The AI firm announced in April that it partnered with Palantir through the company’s FedStart program, which assists with FedRAMP compliance. 

Jensen pointed specifically to Anthropic’s work at the Pentagon’s Chief Digital and AI Office. “We’re leveraging our awarded OTA [other transaction agreement] to scope pilots, we’re bringing our frontier technology and our technical teams to solve operational problems directly alongside the warfighter and to help us all move faster.” 

However, as companies including Anthropic seize the opportunity to collaborate with the government, Amodei emphasized the need for “very basic guardrails.” Congress has grappled with how to regulate AI for months, but efforts have stalled amid fierce disagreements. 

“We absolutely need to beat China and other authoritarian countries; that is why I’ve advocated for the export controls. But we need to not destroy ourselves in the process,” Amodei said during his fireside chat with Clark. “The thing we’ve always advocated for is basic transparency requirements around models. We always run tests on the models. We reveal the test to the world. We make a point of them, we’re trying to see ahead to the dangers that we present in the future.” 

The view is notably different from some of Anthropic’s competitors, which are instead pushing for light-touch regulation of the technology. Amodei, on the other hand, said a “basic transparency requirement” would not hamper innovation, as some other companies have suggested. 


Written by Miranda Nazzaro

Miranda Nazzaro is a reporter for FedScoop in Washington, D.C., covering government technology. Prior to joining FedScoop, Miranda was a reporter at The Hill, where she covered technology and politics. She was also a part of the digital team at WJAR-TV in Rhode Island, near her hometown in Connecticut. She is a graduate of the George Washington University School of Media and Pubic Affairs. You can reach her via email at miranda.nazzaro@fedscoop.com or on Signal at miranda.952.



Source link

AI Insights

Good governance holds the key to successful AI innovation

Published

on


Organizations often balk at governance as an obstacle to innovation. But in the fast-moving world of artificial intelligence (AI), a proper governance strategy is crucial to driving momentum, including building trust in the technology and delivering use cases at scale.

Building trust in AI, in particular, is a major hurdle for AI adoption and successful business outcomes. Employees are concerned about AI’s impact on their job, and the risk management team worries about safe and accurate use of AI. At the same time, customers are hesitant about how their personal data is being leveraged. Robust governance strategies help address these trust issues while laying the groundwork for standardized processes and frameworks that support AI use at scale. Governance is also essential to compliance — an imperative for companies in highly regulated industries such as financial services and healthcare.

“Done right, governance isn’t putting on the brakes as it’s often preconceived,” says Camilla Austerberry, director at KPMG and co-lead of the Trusted AI capability, which helps organizations accelerate AI adoption and safe scaling through the implementation of effective governance and controls across the AI life cycle. “Governance can actually be a launchpad, clearing the path for faster, safer, and more scalable innovation.”

Best practices for robust AI governance

Despite its role as a crucial AI enabler, most enterprises struggle with governance, in part because of the fast-moving technology and regulatory climate as well as an out-of-sync organizational culture. According to Foundry’s AI Priorities Study 2025, governance, along with IT integration and security, ranks among the top hurdles for AI implementations, cited by 47% of the responding organizations.

To be strategic about AI governance, experts recommend the following:

Focus on the basics. Because AI technologies and regulations are evolving so quickly, many organizations are overwhelmed by how to build a formal governance strategy. It’s important to create consensus on how AI strategy aligns with business strategy while establishing the proper structure and ownership of AI governance. “My advice is to be proportionate,” Austerberry says. “As the use of AI evolves, so will your governance, but you have to start somewhere. You don’t have to have it all baked in from the start.”

Include employees in the process. It’s important to give people easy access to the technology and encourage widespread use and experimentation. Companywide initiatives that gamify AI encourage adoption and promote feedback for AI governance frameworks. Establishing ambassador or champion programs is another way to engage employees by way of trusted peers, and an AI center of excellence can play a role in developing a foundational understanding of AI’s potential as well as the risks.

“Programs that are successful within organizations go that extra mile of human touch,” says Steven Tiell, global head of AI Governance Advisory at SAS Institute. “The more stakeholders you include in that conversation early, the better.”

Emphasize governance’s relationship to compliance. Effective governance means less friction, especially when it comes to regulators and risk auditors slowing down AI implementation. Given the varied global regulatory climate, organizations should take a forward stance and think beyond compliance to establish governance with lasting legs. “You don’t want to have to change business strategy or markets when a government changes regulations or adds new ones,” says Tiell. “You want to be prepared for whatever comes your way.”

To learn more, watch this webinar.



Source link

Continue Reading

AI Insights

Transparency is key as AI gets smarter, experts say

Published

on


To gain the U.S. government’s trust, advanced AI systems must be engineered from the outset with reliable components offering explainability and transparency, senior federal and industry officials said Friday.

“This [topic] is something I think about a lot,” the CIA’s chief AI officer Lakshmi Raman noted at the Billington Cybersecurity Summit. “And in our [community], it’s about how artificial intelligence can assist and be an intelligence amplifier with the human during the process, keeping their eyes on everything that’s happening and ensuring that, at the end, they’re able to help.”

During a panel, Raman and other current and former government officials underscored the importance of guardrails and oversight — particularly as the U.S. military and IC adopt the technology for an ever-increasing range of operations, and experts predict major breakthroughs will emerge in certain areas within the next few years.

“Trust is such a critical dimension for intelligence,” said Sean Batir, a National Geospatial-Intelligence Agency alum and AWS principal tech lead for frontier AI, quantum and robotics.

Frontier AI refers to next-generation systems, also dubbed foundation models, that are considered among the most powerful and complex technologies currently in development. These likely disruptive capabilities hold potential to unlock discoveries that could be immensely helpful or catastrophically harmful to humanity. 

Departments across the government have been expanding their use of AI and machine learning over the past several years, but defense and national security agencies were some of the earliest adopters. Recently, in July, questions started swirling after the Pentagon’s Chief Digital and AI Office (CDAO) revealed new, separate deals with xAI, Google, Anthropic and OpenAI to accelerate the enterprise- and military-wide deployment of frontier AI. 

“ChatGPT, our flagship product, has upwards of 800 million users every day. So one-tenth of the world is using ChatGPT in various forms,” said Joseph Larson, vice president and head of government business at OpenAI. “At an individual level, AI is there. The more challenging question [for] my job is, with government [use], what does AI adoption look like at an institutional level?”

Larson previously served from 2022 to 2024 as the Pentagon’s first-ever deputy chief digital and AI officer for algorithmic warfare. 

“When we talk about institutions, what does that require above and beyond just access to the technology and the foundation models? It requires, really, a partnership. And that partnership extends to questions around infrastructure, around data, and I think, key, around security,” he said. “And what are the security implications for AI as it moves from just something that you communicate with, that informs maybe a workflow, to something that’s part of an agentic system that’s actually operating in your environment and that has its own controls and authorities? So, those institutional challenges are really the ones that are driving our work within the government today.”

Both OpenAI and Anthropic have reportedly disclosed recent efforts to implement new guardrails because their models appear to be approaching high-risk levels for potentially helping produce certain weapons.

On the panel, Anthropic Chief Information Security Officer Jason Clinton noted that “trust is something that is built up over time.”

“In order to do that, there is a human — and there is a supervisory role — for these models. The one thing that those models will never be able to do is to bring humanity to the equation, right? We will. We will always need to bring our perspective, our values, our institutional wisdom, to what we’re asking the models to be doing,” Clinton said.

He and the other panelists spotlighted multiple risks and threats posed by emerging frontier AI applications. For instance, prompt injections are a type of cyberattack that happen when malicious users craft inputs to an AI system to trick the model into performing unintended or dangerous actions, such as revealing sensitive data or generating unsafe material.

“I’m very optimistic that we will solve some of the more fundamental guardrail problems — like prompt injection — within three-ish years, I guess,” Clinton said. “And the models will be getting smarter, so I suspect the ways that we interact with them will evolve towards more like having a virtual coworker beside you, who you interact with and who learns and adapts … and sort of grows with you in your environment.”

The panelists also discussed the potential power of cutting-edge AI to help reduce vulnerabilities in software by automatically finding and fixing bugs in code and zero-day exploits.

“DARPA just ran a competition at DefCon [hacking conference] that demonstrated the future possibilities there,” said Dr. Kathleen Fisher, director of that agency’s Information Innovation Office.

For the event, officials pulled 54 million lines of code across 20 different repositories that were recommended by critical infrastructure operators who use them to do their daily business. 

“The teams that ran the competition planted 70 systemic vulnerabilities that were patterned after real vulnerabilities that people have struggled with. The teams found 54 of those systemic vulnerabilities, and they patched 43 of them. More importantly, or at the same time, they found 18 zero-days, and they patched 11 of those. It took about 45 minutes to find and fix vulnerability at a cost of $152. Think about what that might mean in the future — like this is the worst that technology is ever going to be,” Fisher said. “Think about what that might mean in the context of things like Volt Typhoon, and Salt Typhoon, and ransomware that is currently plaguing our hospitals. When a hospital gets affected by a ransomware attack — when it shuts down for any period of time — that puts people’s lives at risk.”

Building on that, Microsoft Federal Chief Technology Officer Jason Payne added: “This is the worst version of the technology we will ever use. I also feel like we have the lowest amount of trust in technology, right? And I think if we all use it more, if we experience it more, we’ll sort of understand what it is and what it’s capable of.”

He continued: “Security, governance and explainability are key themes that we’re looking for to kind of build that trust. And at the end of the day, I think government agencies are looking for organizations that are transparent with their AI systems.”


Written by Brandi Vincent

Brandi Vincent is DefenseScoop’s Pentagon correspondent. She reports on disruptive technologies and associated policies impacting Defense Department and military personnel. Prior to joining SNG, she produced a documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. Brandi grew up in Louisiana and received a master’s degree in journalism from the University of Maryland. She was named Best New Journalist at the 2024 Defence Media Awards.



Source link

Continue Reading

AI Insights

This sociologist worries about AI’s impact on US workers

Published

on


The unemployment rate has ticked up in recent months and layoffs have been in the headlines all year. There are a plethora of reasons behind that labor market shift, but one that’s been cited directly by some companies like Salesforce, Indeed and Glassdoor is artificial intelligence adoption.

While it’s clear that AI is rapidly changing many aspects of our economic landscape, it remains to be seen what the technology’s ultimate impact on the labor market will actually be.

Jeff Dixon is a sociology professor at College of the Holy Cross. He wrote a piece for The Conversation titled, “The US really is unlike other rich countries when it comes to job insecurity — and AI could make it even more ‘exceptional.’” He joined “Marketplace” host Kimberly Adams to discuss his concerns with AI’s long-term effects on the stability of the American worker.

To hear their conversation, use the audio player above.

Related Topics

Collections:



Source link

Continue Reading

Trending