
Tools & Platforms

Report: Agentic AI Protocol Is Vulnerable to Cyber Attacks — Campus Technology

A new report has identified significant security vulnerabilities in the Model Context Protocol (MCP), a technology introduced by Anthropic in November 2024 to facilitate communication between AI agents and external tools.

MCP technology has gained industry traction as a way to standardize how AI agents interact and share context, which is crucial for building more sophisticated and collaborative AI systems within enterprises. With that traction, however, has come attention from threat actors. The recent report by Backslash Security highlights two major classes of flaw that compromise the integrity of MCP servers and could allow unauthorized access to and control over host systems: a network-exposure misconfiguration dubbed “NeighborJack” and OS injection vulnerabilities.

“MCP NeighborJack” was the most common weakness Backslash discovered, with hundreds of cases found among the more than 7,000 publicly accessible MCP servers it analyzed. The core problem is that these vulnerable MCP servers were explicitly bound to all network interfaces (0.0.0.0), making them “accessible to anyone on the same local network.” This misconfiguration exposes the MCP server to any attacker on the local network, creating a significant point of entry for exploitation.
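
To illustrate the difference, the following minimal sketch uses Python’s standard-library HTTP server as a stand-in for an MCP server (it is not code from any server Backslash analyzed); only the bind address changes.

```python
# Minimal sketch of the "NeighborJack" misconfiguration, using the Python
# standard library as a stand-in for an MCP server (illustrative only).
from http.server import BaseHTTPRequestHandler, HTTPServer

class ToolHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stand-in for an MCP-style tool endpoint.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"tool response")

# Vulnerable: binding to all interfaces (0.0.0.0) makes the server reachable
# by anyone on the same local network, the core of the NeighborJack finding.
# server = HTTPServer(("0.0.0.0", 8080), ToolHandler)

# Safer default for a locally used server: bind to loopback only, so it is
# reachable solely from the machine running the AI agent.
server = HTTPServer(("127.0.0.1", 8080), ToolHandler)
server.serve_forever()
```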

The second major category of vulnerability identified was “Excessive Permissions & OS Injection.” Dozens of MCP servers were found to permit “arbitrary command execution on the host machine.” This critical flaw can arise from various coding practices, such as “careless use of a subprocess, a lack of input sanitization, or security bugs like path traversal.”
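
The following hypothetical Python sketch shows the general shape of that flaw and a safer alternative; the ping tool and its validation rule are illustrative assumptions, not code from any audited MCP server.

```python
# Illustrative sketch of the OS-injection pattern described above; the tool
# handler and validation rule are hypothetical, not taken from a real server.
import subprocess

def run_ping_unsafe(host: str) -> str:
    # Vulnerable: user-supplied input is interpolated into a shell command,
    # so a value like "example.com; rm -rf ~" becomes arbitrary command
    # execution on the machine hosting the MCP server.
    return subprocess.run(f"ping -c 1 {host}", shell=True,
                          capture_output=True, text=True).stdout

def run_ping_safer(host: str) -> str:
    # Safer: reject unexpected characters and pass an argument list without
    # a shell, so the value is treated as data rather than shell syntax.
    if not host.replace(".", "").replace("-", "").isalnum():
        raise ValueError("unexpected characters in host")
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True).stdout
```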

The real-world risk is severe. “The MCP server can access the host that runs the MCP and potentially allow a remote user to control your operating system,” Backslash said in a blog post. This means an attacker could gain full control of the machine hosting the MCP server. Backslash’s research found several MCP servers that contained both the “NeighborJack” vulnerability and excessive permissions, creating “a critical toxic combination.”

In such cases, “anyone on the same network can take full control of the host machine running the server,” enabling malicious actors to “run any command, scrape memory, or impersonate tools used by AI agents.”

MCP Server Security Hub

To directly address the identified vulnerabilities and the new attack surface presented by MCP servers, Backslash has established the MCP Server Security Hub, which among other things lists the highest-risk MCPs.


MCP Server Security Hub (source: Backslash Security).

This platform is the first publicly searchable security database dedicated to MCP servers, the company said. It provides a live, dynamically maintained, and searchable central database containing over 7,000 MCP server entries, with new entries added daily. The Hub’s primary function is to score publicly available MCP servers based on their risk posture. Each entry offers detailed information on the security risks associated with a given MCP server, including malicious patterns, code weaknesses, detectable attack vectors, and information about the MCP server’s origin. Backslash encourages anyone considering using an MCP server to first check it on the Hub to ensure its safety.

Recommendations

Unsurprisingly, Backslash Security’s list of recommendations regarding the threat to MCP servers starts with utilizing the MCP Server Security Hub. Other advice includes:

  • Use the Vibe Coding Environment Self-Assessment Tool. To gain visibility into the vibe coding tools used by developers and continuously assess the risk posed by LLM models, MCP servers, and IDE AI rules, Backslash has launched a free self-assessment tool for vibe coding environments.

  • Validate Data Sources for LLM Agents. Validate the source of the data your LLM agent receives to prevent data source poisoning; a minimal allowlist check is sketched after this list.
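
As a rough illustration of that advice, the sketch below assumes a Python agent that pulls context over HTTPS and checks each URL against an explicit allowlist; the host names are placeholders, not part of Backslash’s guidance.

```python
# Minimal sketch of data-source validation for an LLM agent; the allowlist
# entries are placeholders and the fetch path assumes HTTPS retrieval.
from urllib.parse import urlparse
from urllib.request import urlopen

TRUSTED_HOSTS = {"docs.internal.example", "wiki.internal.example"}  # placeholder allowlist

def is_trusted_source(url: str) -> bool:
    # Accept only HTTPS URLs whose host appears on the explicit allowlist.
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

def load_context(url: str) -> str:
    # Refuse to hand the agent content from an unvetted source.
    if not is_trusted_source(url):
        raise ValueError(f"untrusted data source rejected: {url}")
    with urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")
```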

For more information, visit the Backslash Security blog.

About the Author



David Ramel is an editor and writer at Converge 360.






Tools & Platforms

Impostor uses AI to impersonate Rubio and contact foreign and US officials : NPR



Secretary of State Marco Rubio attends a signing ceremony for a peace agreement between Rwanda and the Democratic Republic of the Congo at the State Department, June 27, 2025, in Washington.

Mark Schiefelbein/AP

WASHINGTON — The State Department is warning U.S. diplomats of attempts to impersonate Secretary of State Marco Rubio and possibly other officials using technology driven by artificial intelligence, according to two senior officials and a cable sent last week to all embassies and consulates.

The warning came after the department discovered that an impostor posing as Rubio had attempted to reach out to at least three foreign ministers, a U.S. senator and a governor, according to the July 3 cable, which was first reported by The Washington Post.

The recipients of the scam messages, which were sent by text, Signal and voice mail, were not identified in the cable, a copy of which was shared with The Associated Press.

“The State Department is aware of this incident and is currently monitoring and addressing the matter,” department spokeswoman Tammy Bruce told reporters. “The department takes seriously its responsibility to safeguard its information and continuously take steps to improve the department’s cybersecurity posture to prevent future incidents.”

She declined to comment further due to “security reasons” and the ongoing investigation.

It’s the latest instance of a high-level Trump administration figure being targeted by an impersonator; a similar incident revealed in May involved President Donald Trump’s chief of staff, Susie Wiles. The misuse of AI to deceive people is likely to grow as the technology improves and becomes more widely available, and the FBI warned this past spring about “malicious actors” impersonating senior U.S. government officials in a text and voice messaging campaign.

The hoaxes involving Rubio had been unsuccessful and “not very sophisticated,” one of the officials said. Nonetheless, the second official said the department deemed it “prudent” to advise all employees and foreign governments, particularly as efforts by foreign actors to compromise information security increase.

The officials were not authorized to discuss the matter publicly and spoke on condition of anonymity.

“There is no direct cyber threat to the department from this campaign, but information shared with a third party could be exposed if targeted individuals are compromised,” the cable said.

The FBI has warned in a public service announcement about a “malicious” campaign relying on text messages and AI-generated voice messages that purport to come from a senior U.S. official and that aim to dupe other government officials as well as the victim’s associates and contacts.

This is not the first time that Rubio has been impersonated in a deepfake. This spring, someone created a bogus video of him saying he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service. Ukraine’s government later rebutted the false claim.

Several potential responses to the growing misuse of AI for deception have been put forward in recent years, including criminal penalties and improved media literacy. Concerns about deepfakes have also led to a flood of new apps and AI systems designed to spot phonies that could easily fool a human.

The tech companies working on these systems are now in competition against those who would use AI to deceive, according to Siwei Lyu, a professor and computer scientist at the University at Buffalo. He said he’s seen an increase in the number of deepfakes portraying celebrities, politicians and business leaders as the technology improves.

Just a few years ago, fakes contained easy-to-spot flaws such as inhuman voices or extra fingers, but the technology has improved to the point that fakes are much harder for a human to spot, giving deepfake makers an advantage.

“The level of realism and quality is increasing,” Lyu said. “It’s an arms race, and right now the generators are getting the upper hand.”

The Rubio hoax comes after text messages and phone calls went to elected officials, business executives and other prominent figures from someone who seemed to have gained access to the contacts in Wiles’ personal cellphone, The Wall Street Journal reported in May.

Some of those who received calls heard a voice that sounded like Wiles, which may have been generated by AI, according to the newspaper. The messages and calls were not coming from Wiles’ number, the report said. The government was investigating.




Tools & Platforms

Tuya Inc. (NYSE:TUYA) Among Forbes China Top 50 AI Tech Enterprises – Insider Monkey


Tools & Platforms

IBM rolls out new chips and servers, aims for simplified AI



FILE PHOTO: IBM announced a new line of data center chips and servers that it says will be more power-efficient than rivals and will simplify the process of rolling out AI. | Photo Credit: Reuters

International Business Machines on Tuesday announced a new line of data center chips and servers that it says will be more power-efficient than rivals and will simplify the process of rolling out artificial intelligence in business operations.

IBM introduced its new Power11 chips on Tuesday, marking its first major update to its “Power” line of chips since 2020.

These chips have traditionally vied against offerings from Intel and Advanced Micro Devices in data centers, particularly in specialized sectors such as financial services, manufacturing and healthcare.

Like Nvidia’s AI servers, IBM’s Power systems are an integrated package of chips and software.

Tom McPherson, general manager of Power systems at IBM, said the Armonk, New York-based company used that tight coupling to focus on reliability and security.

The Power11 systems, available from July 25, will not need any planned downtime for software updates, and their unplanned downtime each year averages just over 30 seconds.

They are also designed to detect and respond within a minute to a ransomware attack – where hackers encrypt data and then try to extract a ransom in exchange for the keys, IBM said.

In the fourth quarter of this year, IBM plans to integrate Power11 with Spyre, its AI chip introduced last year.

McPherson said IBM does not aim to compete with Nvidia in helping create and train AI systems, but is instead focused on simplifying AI deployment for inference, the process of putting an AI system to work in speeding up a business task.

“We can integrate AI capabilities seamlessly into this for inference acceleration and help their business process improvements,” McPherson said in an interview last week, referring to work with early customers.

“It’s not going to have all the horsepower for training or anything, but it’s going to have really good inferencing capabilities that are simple to integrate.”


