Connect with us

AI Insights

Anthropic’s auto-clicking AI Chrome extension raises browser-hijacking concerns

Published

on



The company tested 123 cases representing 29 different attack scenarios and found a 23.6 percent attack success rate when browser use operated without safety mitigations.

One example involved a malicious email that instructed Claude to delete a user’s emails for “mailbox hygiene” purposes. Without safeguards, Claude followed these instructions and deleted the user’s emails without confirmation.

Anthropic says it has implemented several defenses to address these vulnerabilities. Users can grant or revoke Claude’s access to specific websites through site-level permissions. The system requires user confirmation before Claude takes high-risk actions like publishing, purchasing, or sharing personal data. The company has also blocked Claude from accessing websites offering financial services, adult content, and pirated content by default.

These safety measures reduced the attack success rate from 23.6 percent to 11.2 percent in autonomous mode. On a specialized test of four browser-specific attack types, the new mitigations reportedly reduced the success rate from 35.7 percent to 0 percent.

Independent AI researcher Simon Willison, who has extensively written about AI security risks and coined the term “prompt injection” in 2022, called the remaining 11.2 percent attack rate “catastrophic,” writing on his blog that “in the absence of 100% reliable protection I have trouble imagining a world in which it’s a good idea to unleash this pattern.”

By “pattern,” Willison is referring to the recent trend of integrating AI agents into web browsers. “I strongly expect that the entire concept of an agentic browser extension is fatally flawed and cannot be built safely,” he wrote in an earlier post on similar prompt-injection security issues recently found in Perplexity Comet.

The security risks are no longer theoretical. Last week, Brave’s security team discovered that Perplexity’s Comet browser could be tricked into accessing users’ Gmail accounts and triggering password recovery flows through malicious instructions hidden in Reddit posts. When users asked Comet to summarize a Reddit thread, attackers could embed invisible commands that instructed the AI to open Gmail in another tab, extract the user’s email address, and perform unauthorized actions. Although Perplexity attempted to fix the vulnerability, Brave later confirmed that its mitigations were defeated and the security hole remained.

For now, Anthropic plans to use its new research preview to identify and address attack patterns that emerge in real-world usage before making the Chrome extension more widely available. In the absence of good protections from AI vendors, the burden of security falls on the user, who is taking a large risk by using these tools on the open web. As Willison noted in his post about Claude for Chrome, “I don’t think it’s reasonable to expect end users to make good decisions about the security risks.”



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

AI Insights

PR News | Will Artificial Intelligence Destroy the Communications Industry?

Published

on






Simon Erskine Locke

I recently met a leader in the communications industry, and as we were chatting over coffee, he shared that he’s been hearing the phrase “two things can be true at the same time” a lot recently. This is also something I’ve been saying for a couple of years in discussions around politics, AI, and a variety of other issues.

In a polarized world in which opinions are shared as fact, data and statistics are made to fit ideologies and the truth doesn’t seem to matter, expressing the view that two seemingly contradictory perspectives can both be true is a pragmatic way to find common ground. It recognizes that there are different ways to look at the same issues.

While making the effort to recognize different perspectives is healthy, idealogues (on either side of the political spectrum) are rarely interested in recognizing that there may be another side to an argument. When you are devoted to a particular position, the idea of an alternate version — or even the acknowledgement that there may be grey between black and white — creates cognitive dissonance.

Why bring this up? In part, because many of the discussions around AI seem to be somewhat bipolar.

For many, AI is still the shiny new tool that will write great emails, automate the lengthy process of engaging with journalists, or lead to faster and easier content generation. For others, AI will kill jobs, dumb down the industry, or lead us to an existential doomsday in which the rise of content leads to the fall of engagement.

As someone who has spent significant time with AI companies, building tools, working with various LLMs, and discussing the impact of AI with lawmakers, I firmly believe that there are reasons to be optimistic and pessimistic. It’s not all black and white.

One way to frame the discussion of AI is to think of it like electricity. Electricity is key to powering the economy and it drives machines that do a lot of different things. Some of those are good. Some are not. Electricity gives us light, but it can also kill us.

AI, like electricity, is not intrinsically good or bad. It’s what we do with it that matters. As communicators, we have agency. We decide which choices will shape the future of the industry. We are not powerless.

We are responsible for making decisions about how AI is employed. And, consequently, if we get this wrong, shame on us. If communicators ultimately put the industry out of business by automating the engagement process with journalists, mass producing content to game LLM algorithms, and delegating thinking to chatbots — rather than helping the next generation of communicators hone their writing, editing, fact checking, and critical thinking skills — that will be on us.

Equally, if we don’t leverage AI, we will miss an opportunity. AI can help streamline workflows and its access to the vast body of knowledge on the internet can lead to smarter, more informed engagement with reporters and impactful content.

A key takeaway from conversations with AI startups is that they are now able to do things that were simply not possible two years ago. One is making the restaurant booking process more efficient, leading to greater longevity of the businesses they work with – which keeps staff employed. Another company’s voice technology is enabling local government to serve constituents at any time and in any language.

As with every other generational technology shift, some jobs will disappear, and others will be created. Communicators need to avoid both being Panglossian, and the trap of seeing AI as the end of days.

Finding the right use cases and effectively implementing the technology will be essential. The customer service line of a major financial institution states, “We are using AI to deliver exceptional customer service”, only to require the customer to repeat the same basic information three times. This underscores the distance between AI’s potential and the imperfect experience most of us see every day.

Pragmatic agency and corporate communications leaders will continue to experiment, invest time to understand what is now possible with AI. They will need to implement tools selectively, while carefully considering the impact of decisions on the industry in the years to come.

At this stage, there is an element of the blind leading the blind with AI. Startups are not omniscient. Communicators looking at applications as a magic bullet are going to be sorely disappointed. We are already seeing questions about the returns on the rush of gold into AI, significant gaps between the vision and experience, and the dark side of the technology in areas such as rising fraud and malicious deepfakes. As I have written previously AI is creating new problems to solve – and is a driving force behind new solutions including content provenance authentication.

Just because you can do something doesn’t mean you should — without careful consideration of use cases, consequences and implementation. AI has both enormous potential but also brings a whole new set of challenges and, potentially, existential risks. The idea that these two seemingly opposite things can be true underscores the weight of responsibility we have to get this right.

***

Simon Erskine Locke is founder & CEO of CommunicationsMatch™ and cofounder & CEO of Tauth.io, which provides trusted content authentication based on C2PA standards. He is a former head of communications functions at Prudential Financial, Morgan Stanley and Deutsche Bank, and founder of communications consultancies.





Source link

Continue Reading

AI Insights

Anthropic to pay authors $1.5 billion to settle lawsuit over pirated chatbot training material

Published

on


NEW YORK (AP) — Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by…

NEW YORK (AP) — Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.

The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement.

“As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”

A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.

“We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.

U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.

Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.

Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset.

Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”

Copyright
© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, written or redistributed.



Source link

Continue Reading

AI Insights

Associate professor in ECE advances artificial intelligence collaboration across devices regardless of connection speed

Published

on


Can smart devices collaborate to train artificial intelligence (AI) models when they experience poor internet connections? Yes, and Xiaowen Gong, the Godbold Associate Professor in electrical and computer engineering, can prove it.

Gong’s recently completed National Science Foundation-funded research, “Quality-Aware Distributed Computation for Wireless Federated Learning: Channel-Aware User Selection, Mini-Batch Size Adaptation, and Scheduling,” demonstrates how smart devices can collaborate to build better AI models regardless of connection quality, turning network limitations from a barrier into a manageable constraint.

Originally funded and commissioned in 2021, his work paves the way for smarter, faster and safer technologies — powering innovations that could make robots more capable, augmented reality/virtual reality experiences more immersive, vehicles more autonomous and wireless systems more intelligent.

“Our algorithms enable federated learning in wireless networked systems where devices often have unreliable, time-varying and heterogeneous communication and computation capabilities,” Gong said. “Our research improves learning accuracy and accelerates the training process, all while enabling devices to participate with greater flexibility.”

Federated learning allows multiple devices — like smartphones, tablets or sensors — to collaboratively train an AI model without sharing their raw data. Instead of sending sensitive information to a central server, devices process data locally and share only the learning updates. This approach protects privacy while enabling AI systems to learn from diverse data sources.

“AI isn’t just something that lives in massive data centers anymore,” Gong said. “It’s happening on the devices we use every day, like phones, automobiles, and smart home systems,” Gong said. “Our work helps these devices learn together, even when their internet connections are not perfect. That means smarter predictions, faster responses and better performance in real-world conditions.”

Existing federated learning methods often do not perform well when devices have unreliable connections or different computational capabilities, leading to slower training and less accurate models.

Gong’s research tackles this problem through a method described as quality-aware distributed computation. The new algorithms intelligently select which devices participate in each training round and adjust how much work each device does based on its connection quality and computational power.

“Our methods not only improve the learning accuracy of federated learning but also accelerate the training process while allowing devices to participate in federated learning with much flexibility, even if some devices drop in and out,” he said.

“Imagine your smart assistant learning new things 30% faster, or your car reacting more quickly to changing traffic. That’s the kind of improvement we’re seeing. This isn’t just about speed. It’s about making AI more responsive and reliable in everyday life.”



Source link

Continue Reading

Trending