
Funding & Business

Anthropic ships automated security reviews for Claude Code as AI-generated vulnerabilities surge



Anthropic launched automated security review capabilities for its Claude Code platform on Wednesday, introducing tools that can scan code for vulnerabilities and suggest fixes as artificial intelligence dramatically accelerates software development across the industry.

The new features arrive as companies increasingly rely on AI to write code faster than ever before, raising critical questions about whether security practices can keep pace with the velocity of AI-assisted development. Anthropic’s solution embeds security analysis directly into developers’ workflows through a simple terminal command and automated GitHub reviews.

“People love Claude Code, they love using models to write code, and these models are already extremely good and getting better,” said Logan Graham, a member of Anthropic’s frontier red team who led development of the security features, in an interview with VentureBeat. “It seems really possible that in the next couple of years, we are going to 10x, 100x, 1000x the amount of code that gets written in the world. The only way to keep up is by using models themselves to figure out how to make it secure.”

The announcement comes just one day after Anthropic released Claude Opus 4.1, an upgraded version of its most powerful AI model that shows significant improvements in coding tasks. The timing underscores an intensifying competition between AI companies, with OpenAI expected to announce GPT-5 imminently and Meta aggressively poaching talent with reported $100 million signing bonuses.




Why AI code generation is creating a massive security problem

The security tools address a growing concern in the software industry: as AI models become more capable at writing code, the volume of code being produced is exploding, but traditional security review processes haven’t scaled to match. Currently, security reviews rely on human engineers who manually examine code for vulnerabilities — a process that can’t keep pace with AI-generated output.

Anthropic’s approach uses AI to solve the problem AI created. The company has developed two complementary tools that leverage Claude’s capabilities to automatically identify common vulnerabilities including SQL injection risks, cross-site scripting vulnerabilities, authentication flaws, and insecure data handling.
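These are classic flaw classes that are easy to introduce and easy to miss in manual review. As a generic illustration (not Anthropic's code), here is the kind of SQL injection such a scanner targets, shown next to the parameterized fix it would typically suggest:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: user input is interpolated directly into the SQL string,
    # so an input like "x' OR '1'='1" changes the query's meaning.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # FIX: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

    malicious = "x' OR '1'='1"
    # The unsafe version returns every row for the injected input...
    print(len(find_user_unsafe(conn, malicious)))  # 2
    # ...while the parameterized version correctly matches nothing.
    print(len(find_user_safe(conn, malicious)))    # 0
```

The injected input turns the unsafe query into `WHERE name = 'x' OR '1'='1'`, which matches every row; the parameterized query treats the same string as a literal name and matches none.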

The first tool is a /security-review command that developers can run from their terminal to scan code before committing it. “It’s literally 10 keystrokes, and then it’ll set off a Claude agent to review the code that you’re writing or your repository,” Graham explained. The system analyzes code and returns high-confidence vulnerability assessments along with suggested fixes.

The second component is a GitHub Action that automatically triggers security reviews when developers submit pull requests. The system posts inline comments on code with security concerns and recommendations, ensuring every code change receives a baseline security review before reaching production.
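Mechanically, a pull-request-triggered review follows the standard GitHub Actions pattern. The sketch below is hypothetical: the `uses:` path and input names are placeholders, not Anthropic's published identifiers, so consult the official documentation for the real ones:

```yaml
# Hypothetical workflow sketch. The action path and input names below
# are placeholders; check Anthropic's docs for the actual identifiers.
name: Security Review
on:
  pull_request:

jobs:
  security-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-security-review@main   # placeholder
        with:
          api-key: ${{ secrets.ANTHROPIC_API_KEY }}          # placeholder input
```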

How Anthropic tested the security scanner on its own vulnerable code

Anthropic has been testing these tools internally on its own codebase, including Claude Code itself, providing real-world validation of their effectiveness. The company shared specific examples of vulnerabilities the system caught before they reached production.

In one case, engineers built a feature for an internal tool that started a local HTTP server intended for local connections only. The GitHub Action identified a remote code execution vulnerability exploitable through DNS rebinding attacks, which was fixed before the code was merged.
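DNS rebinding works because the victim's browser can be tricked into sending requests to 127.0.0.1 that carry an attacker-controlled Host header. A common mitigation, sketched here as a generic illustration rather than Anthropic's actual fix, is for a local-only server to reject any request whose Host header is not a loopback name:

```python
# Generic sketch: a local HTTP server can defend against DNS rebinding
# by validating the Host header, since a rebinding attack reaches
# 127.0.0.1 but still carries the attacker's hostname in that header.

ALLOWED_HOSTS = {"localhost", "127.0.0.1", "[::1]"}

def is_trusted_host(host_header: str) -> bool:
    """Return True only for loopback Host headers (port stripped)."""
    host = host_header.strip().lower()
    if host.startswith("["):            # bracketed IPv6, e.g. [::1]:8080
        host = host.split("]")[0] + "]"
    else:
        host = host.split(":")[0]       # drop ":port" if present
    return host in ALLOWED_HOSTS

print(is_trusted_host("localhost:8080"))         # True
print(is_trusted_host("attacker.example:8080"))  # False
```

A request proxied through a rebound DNS name fails this check even though it arrives on the loopback interface, which is the property the vulnerability exploited.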

Another example involved a proxy system designed to manage internal credentials securely. The automated review flagged that the proxy was vulnerable to Server-Side Request Forgery (SSRF) attacks, prompting an immediate fix.
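A typical SSRF defense, again shown as a generic sketch rather than the proxy Anthropic described, is to resolve the requested host and refuse anything that lands in internal address space before making the outbound call:

```python
# Generic SSRF guard: resolve the target and reject private, loopback,
# link-local, and reserved addresses before any outbound request is made.
import ipaddress
import socket

def is_safe_target(hostname: str) -> bool:
    """Reject hostnames that resolve to internal address space."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False  # unresolvable names are refused outright
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

print(is_safe_target("127.0.0.1"))  # False -- loopback
print(is_safe_target("10.0.0.5"))   # False -- RFC 1918 private range
```

Note that a production guard also has to pin the resolved address for the actual connection; resolving once for the check and again for the request reopens the hole via DNS answers that change between lookups.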

“We were using it, and it was already finding vulnerabilities and flaws and suggesting how to fix them in things before they hit production for us,” Graham said. “We thought, hey, this is so useful that we decided to release it publicly as well.”

Beyond addressing the scale challenges facing large enterprises, the tools could democratize sophisticated security practices for smaller development teams that lack dedicated security personnel.

“One of the things that makes me most excited is that this means security review can be kind of easily democratized to even the smallest teams, and those small teams can be pushing a lot of code that they will have more and more faith in,” Graham said.

The system is designed to be immediately accessible. According to Graham, developers can start using the security review feature within seconds of the release, requiring just about 15 keystrokes to launch. The tools integrate seamlessly with existing workflows, processing code locally through the same Claude API that powers other Claude Code features.

Inside the AI architecture that scans millions of lines of code

The security review system works by invoking Claude through an “agentic loop” that analyzes code systematically. According to Anthropic, Claude Code uses tool calls to explore large codebases, starting by understanding changes made in a pull request and then proactively exploring the broader codebase to understand context, security invariants, and potential risks.

Enterprise customers can customize the security rules to match their specific policies. The system is built on Claude Code’s extensible architecture, allowing teams to modify existing security prompts or create entirely new scanning commands through simple markdown documents.

“You can take a look at the slash commands, because a lot of times slash commands are run via actually just a very simple Claude.md doc,” Graham explained. “It’s really simple for you to write your own as well.”
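As a hypothetical illustration of that extensibility, a team-specific scanning command could be a short markdown document along these lines (the `.claude/commands/` location is the convention described in Claude Code's documentation for custom slash commands; verify against the current docs before relying on it):

```markdown
<!-- Hypothetical example of a project-level custom command, e.g. saved
     as .claude/commands/stack-review.md. Contents are illustrative. -->
Review the staged changes for security issues specific to our stack:

1. Flag any raw SQL string concatenation (we require parameterized queries).
2. Check that new HTTP handlers validate the Host header.
3. Verify secrets are read from the environment, never hard-coded.

Report only high-confidence findings, each with a suggested fix.
```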

The $100 million talent war reshaping AI security development

The security announcement comes amid a broader industry reckoning with AI safety and responsible deployment. Recent research from Anthropic has explored techniques for preventing AI models from developing harmful behaviors, including a controversial “vaccination” approach that exposes models to undesirable traits during training to build resilience.

The timing also reflects the intense competition in the AI space. Anthropic released Claude Opus 4.1 on Tuesday, with the company claiming significant improvements in software engineering tasks—scoring 74.5% on the SWE-Bench Verified coding evaluation, compared to 72.5% for the previous Claude Opus 4 model.

Meanwhile, Meta has been aggressively recruiting AI talent with massive signing bonuses, though Anthropic CEO Dario Amodei recently stated that many of his employees have turned down these offers. The company maintains an 80% retention rate for employees hired over the last two years, compared to 67% at OpenAI and 64% at Meta.

Government agencies can now buy Claude as enterprise AI adoption accelerates

The security features represent part of Anthropic’s broader push into enterprise markets. Over the past month, the company has shipped multiple enterprise-focused features for Claude Code, including analytics dashboards for administrators, native Windows support, and multi-directory support.

The U.S. government has also endorsed Anthropic’s enterprise credentials, adding the company to the General Services Administration’s approved vendor list alongside OpenAI and Google, making Claude available for federal agency procurement.

Graham emphasized that the security tools are designed to complement, not replace, existing security practices. “There’s no one thing that’s going to solve the problem. This is just one additional tool,” he said. However, he expressed confidence that AI-powered security tools will play an increasingly central role as code generation accelerates.

The race to secure AI-generated software before it breaks the internet

As AI reshapes software development at an unprecedented pace, Anthropic’s security initiative represents a critical recognition that the same technology driving explosive growth in code generation must also be harnessed to keep that code secure. Graham’s team, called the frontier red team, focuses on identifying potential risks from advanced AI capabilities and building appropriate defenses.

“We have always been extremely committed to measuring the cybersecurity capabilities of models, and I think it’s time that defenses should increasingly exist in the world,” Graham said. The company is particularly encouraging cybersecurity firms and independent researchers to experiment with creative applications of the technology, with an ambitious goal of using AI to “review and preventatively patch or make more secure all of the most important software that powers the infrastructure in the world.”

The security features are available immediately to all Claude Code users, with the GitHub Action requiring one-time configuration by development teams. But the bigger question looming over the industry remains: Can AI-powered defenses scale fast enough to match the exponential growth in AI-generated vulnerabilities?

For now, at least, the machines are racing to fix what other machines might break.





How I Stopped A Platform Ransom Against My Startup



By Zainab Ghadiyali

One January morning earlier this year, I walked into a founder’s nightmare.

My company, Eat Cook Joy, which builds AI-powered tools for private chefs, was two weeks away from its public launch. For two and a half years, I had been preparing and collecting the most sophisticated and comprehensive data set for the private chef industry.

But as I sat down at my computer, I was locked out of my site. Access to the platform I had painstakingly assembled, worth millions of dollars, had vanished.

Thanks to intelligence from the FBI, I quickly learned that the attack on my company’s site showed all the signs of a sophisticated international crime group that targets early-stage startups for ransom.


If the statistics are right, most business leaders in my shoes would have chosen to make a deal with the extortionist and move on. But when the veiled threat of a ransom request came, capitulation was out of the question.

For one, investors had entrusted me with their money. Secondly, my training as an engineer and entrepreneur simply wouldn’t let me back down. Having built large-scale infrastructure at Facebook and Airbnb, I knew how to navigate complex systems under duress. I approached recovery like any other business problem: set a goal, assemble a team, build a strategy, execute.

Clear goals with the right team

From the start, my goal was non-negotiable: recover all data, restore the platform and bring it online securely. I also knew I needed the right team — legal advisers who understood the law, anticipated the criminals’ next move, and had strong law-enforcement ties.

Just as important, they had to believe in the mission. Finding that wasn’t easy. Most advisers dismissed the loss or urged me to simply file a claim, a demoralizing response for a founder in crisis.

Amy Mushahwar of Lowenstein Sandler was different. She understood the mission and what it meant to me. I never doubted their resolve. Amy’s colleague Tricia Wagner, for example, spent her birthday consolidating key evidence for law enforcement.

Fearless strategic execution

Our strategy was designed to corner the crime ring with their own lies. It rested on three pillars: patience, detailed evidence and fearlessness.

Essentially, if the crime ring refused a demand (e.g., access or data return) or claimed that something had been deleted, like AI training data or validation, we presented them with detailed evidence that directly contradicted their claims.

The strategy worked because I knew the technical details of the site better than they did. Drawing on my legal team’s relationships, we also leveraged the expertise of law enforcement.

With multiple negotiated encounters, letters and interactions, we countered the endless string of denials that data wasn’t available and access to certain platforms was not essential. But I knew the platforms and data that were essential and fought for them.

While negotiations played out, I shifted Eat Cook Joy’s operations offline to protect customer data. At the same time, we doubled engineering resources, addressed security vulnerabilities and reinforced protections around intellectual property.  In the end, our strategy worked: Within weeks of the ransom demand, the crime ring had transferred everything back.

Lessons for founders

It was an extraordinary result. But successfully resisting a ransomware attack shouldn’t be as uncommon as it is. I believe my experience offers lessons for other startups facing the threat of ransom:

  • Set a clear goal. Define success upfront — full recovery with no compromise.
  • Do not pay. Paying ransom fuels more attacks and doesn’t guarantee results.  There may be circumstances where payment is necessary (such as lack of backups and the need for an encryption key), but if these needs don’t exist, fight on.
  • Build the right team. The right team with the right goal can navigate even the toughest crises. Look for lawyers with relevant experience and good relationships with law enforcement. But just as important, look for their commitment to the stated goal.
  • Document everything. Evidence strengthens your position with both law enforcement and adversaries.
  • Safeguard trust and capital. Protecting customers and investors must remain the top priority. Move or reinforce bank accounts, if necessary, to prevent direct monetary theft.

Two weeks after regaining full control, Eat Cook Joy was back online and taking production traffic. Within six months of launch, our chef network tripled. Eat Cook Joy is now on track to generate $1 million in annualized revenue this fall, making it one of the fastest-growing startups in the food space. What could have been a devastating setback instead became a defining moment for my company.


Zainab Ghadiyali is a founder and builder who strongly believes that technology should expand access, not gatekeep it. She previously founded Eat Cook Joy, one of the fastest-growing food tech startups in the U.S., and built products at Facebook, Airbnb and Canva. Recognized as one of Foreign Policy’s Top 100 Global Thinkers for her work at the intersection of tech and inclusion, she is now building Stackbirds, a platform democratizing AI agent technology to empower the next generation of entrepreneurs.

Illustration: Dom Guzman




PwC Deal Lead On AI’s Hottest M&A Targets


Along with being the top recipient for venture capital funding, the artificial intelligence sector has also been an area ripe for M&A activity. Bigger companies are buying smaller companies. Big companies are buying other big companies. And companies are hiring away top talent from other companies.

All in all, there has been no shortage of deal flow in the M&A space for AI. Across the three half-year periods from H1 2024 to H1 2025, AI M&A volume climbed steadily, reaching 262 deals in the most recent half, Crunchbase data shows. That marks a 35% increase year over year.

Notable deals include OpenAI’s acquisition of 4-year-old product analytics startup Statsig for $1.1 billion earlier this month. The startup’s founder and CEO is set to join OpenAI, parent of ChatGPT, as chief technology officer of its applications business. That follows OpenAI’s announcement in May of its $6.5 billion acquisition of io, the AI hardware startup co-founded by former Apple design chief Jony Ive.

Also earlier this month, collaborative software giant Atlassian announced that it had agreed to acquire The Browser Co. for about $610 million in cash.

On the surface, it looks like a market firing on all cylinders. But the data tells a more nuanced story. The median deal size for startup M&A stayed flat at $67.5 million in the first half of 2025, while the average soared past $435 million, per Crunchbase data.

To get a better sense of just what all this M&A activity means, Crunchbase News conducted an email interview with Kevin Desai, U.S. deals platform leader for PricewaterhouseCoopers. The interview has been edited for brevity and clarity.

Crunchbase News: What sorts of AI technology and companies are enterprises interested in buying right now?

Kevin Desai of PwC

Desai: Dealmaking revolving around AI is a whole ecosystem. Enterprises are currently acquiring companies focused on AI agents, identity and security, and edge computing. This includes investments in the underlying infrastructure like power, data centers, and compute control. They’re also buying applied AI software that embeds copilots and agentic workflows into existing systems like CRM, ERP and IT service management.

Firms with AI-driven detection and identity solutions are in high demand, as are smaller companies that bring niche capabilities or specialized talent in model engineering, design, and workflow integration. Enterprises are also looking at edge inference hardware and software to cut latency, cost and privacy risk. Increasingly, buyers are also interested in GPU/ASIC supply and custom-silicon programs to secure cost and scale.

What do deals like Atlassian’s Browser Co. buy tell us about the market?

While we can’t comment on specific deals/companies, these types of deals reveal that companies are focused on how and where workers work. As SaaS has proliferated, our ways of working have shifted. AI is the next frontier of that shift, and companies will continue to focus on the agents they use and how to drive productivity in their workforce.

It also signals that buyers are willing to pay a premium for talent, agent-driven innovation, and user experience expertise, even outside their core product lines, as AI shifts the boundaries of where productivity and collaboration happen.

With the large LLM companies controlling so much of the foundational technology, what are the moats or differentiations that smaller startups have that interest buyers?

While foundational model providers hold significant control, smaller startups are carving out defensible positions in key areas. Startups that integrate trust, governance, and compliance into their solutions are especially attractive as companies look to mitigate AI risks.

The most valuable moats for buyers include proprietary and regulated vertical data and workflows, identity and policy controls for agents, and on-device/edge advantages where latency and privacy are critical. Many startups are also leading the charge in agentic user experiences and vertical applications.

Do you expect to see more blockbuster buys of big, established startups, or smaller, quieter acquisitions? Why?

We are likely to see both. On one hand, established technology companies will continue to pursue blockbuster acquisitions when an asset represents a strategic leap forward in infrastructure, distribution, or user interface.

On the other hand, many companies will prioritize smaller acquisitions focused on talent and niche capabilities, allowing them to accelerate product roadmaps without the complexity of a megadeal. With overall deal volumes flat but tech deal values rising, the data suggests buyers are being highly selective, willing to stretch on valuations for assets that deliver transformational value, while relying on targeted capability buys to fill specific gaps quickly and efficiently.

What are the specific sectors most likely to be targeted next?

The next wave of acquisitions will likely target AI bottlenecks: identity, browsers, ops automation and regulated data. The physical internet of AI also remains at the top of the list, driving investment in power, data centers, semiconductors, and networking. Cybersecurity is another priority, as enterprises look to secure data and manage the risks of increasingly autonomous AI systems. Vertical software markets are particularly attractive, with strong buyer interest in healthcare solutions for clinical decisions and revenue cycle management, and financial services tools for risk, compliance, and wealth management. Finally, we can expect continued momentum in identity and governance, secure enterprise browsers, and agentic operations.

What are the current and projected valuations for AI-focused acquisitions?

Valuations for AI-focused acquisitions remain robust, with heightened competition for premium assets driving deal values upward even as overall M&A volumes remain relatively flat. PwC research indicates that technology deal values have risen by approximately 15% as buyers race to secure AI capabilities, signaling a willingness to pay for assets that offer defensible advantages.

Looking ahead, we expect assets with proprietary data, regulatory moats and tailored user experiences will continue to attract strong premiums, while buyers will remain more disciplined in less differentiated market segments.

What are the long-term strategic implications of this M&A rush?

In the long run, the surge of AI-related M&A activity could reshape the technology landscape around a few dominant ecosystems, anchored by companies that control both the infrastructure and the user-facing interfaces where AI delivers value.

Enterprises that actively pursue acquisitions now are positioning themselves to fundamentally reinvent their platforms, processes, and business models for an AI-dominant future. At the same time, the rush to acquire talent and accelerate roadmaps means that integration will be critical; only companies that can harmonize new capabilities and embed them into workflows at scale will capture the full strategic benefit.





UK Stocks Are So Out of Favor They’re Now a Top Contrarian Trade


Investors are deeply bearish on UK stocks, with Bank of America Corp. strategists calling the market one of the top contrarian trades.


