

Hacker Used Claude AI Agent To Automate Attack Chain



A hacker weaponized a popular artificial intelligence chatbot to run a cybercriminal operation, deploying Claude Code not just as a coding copilot but as the driver of an entire attack chain.

In a campaign detailed in Anthropic’s August threat intelligence report, an attacker leveraged Claude Code, Anthropic’s AI coding agent, to attack 17 distinct organizations across healthcare, emergency services, government, and religious institutions. This was no typical ransomware blitz: it was an orchestrated, AI-driven extortion campaign, strategic in design and automated in execution.

Rather than encrypting data, the attacker threatened to publicly expose stolen information, sometimes demanding ransom payments exceeding $500,000. Anthropic dubs this approach “vibe hacking,” and it marks a paradigm shift: the AI agent handled reconnaissance, credential harvesting, network penetration, ransom calculation, and even the design of psychologically tailored extortion messages, all with minimal human intervention.

How Claude Took the Wheel

Claude Code scanned thousands of VPN endpoints, identified vulnerable hosts, and initiated network intrusions. The AI then helped collect, profile, and prioritize data for exfiltration, including personal, financial, and medical records held by the victim organizations.

Claude also analyzed the stolen financial datasets to determine optimal ransom levels, and it designed visually alarming HTML extortion notes that were delivered directly to victim machines.

Finally, the AI agent generated obfuscated tunneling tools, including modified versions of the open-source Chisel utility, and developed new proxying methods. When these were detected, it crafted anti-debugging routines and used filename masquerading to evade defensive scanners.

A Dangerous Trend in AI-Powered Cybercrime

As Anthropic notes, this marks a fundamental shift: AI is no longer just a support tool but is fast becoming a standalone attacker, capable of running multi-stage cyber campaigns. The report makes clear that this threat model significantly lowers the technical barriers to large-scale cybercrime. Anyone skilled with prompts can now launch complex, tailored, autonomous attacks, something the report predicts will only grow more common.

Anthropic also suggested “a need for new frameworks for evaluating cyber threats that account for AI enablement.”

Anthropic responded by banning the actor’s accounts, rolling out a tailored detection classifier, and sharing technical indicators with partners to help prevent similar abuse in the future.

Anthropic’s report details other misuses of Claude, including North Korea’s fake IT worker scam, which deploys AI-generated personas for employment fraud, as well as emerging “ransomware-as-a-service” offerings built with AI by actors who have no coding expertise.






Is AI the 4GL we’ve been waiting for? – InfoWorld



CSI and HuLoop deliver AI-driven efficiency to banks



Fintech, regtech, and cybersecurity vendor CSI has teamed up with HuLoop, provider of an AI-powered, no-code automation platform, to help banks improve efficiency. The partnership centers on CSI’s NuPoint Core Banking System, which financial institutions use to manage accounts, transactions, and other banking operations.

NuPoint customers will have access to HuLoop’s Work Intelligence platform, which is designed for community and regional banks. The solution is intended to help them address regulatory overheads and running costs.

Challenges in the sector include customer onboarding and document-heavy workloads that are prone to errors and can create approval bottlenecks, while repetitive, low-value tasks in environments with strict compliance requirements put additional strain on staff.

HuLoop’s approach pairs humans with AI, with intelligent software agents taking on repetitive and mundane tasks. HuLoop’s Todd P. Michaud says: “Human-in-the-loop design ensures that automation enhances people’s work instead of replacing it. Community banks and credit unions are under pressure to grow without adding headcount at the same rate. By integrating HuLoop into CSI’s NuPoint ecosystem, we’re making it easier for institutions to deploy the power of AI automation quickly, securely, and in a regulator-friendly way.”

HuLoop’s no-code platform lets banks streamline operations, unifying productivity discovery, process automation, workflow orchestration, document processing, and automated testing across lending and collections workflows.

Jeremy Hoard, EVP and Chief Banking Officer of Legends Bank, said: “It’s helping us automate back-office tasks and improve operational efficiency, which allows our team to focus more on delivering exceptional service to our customers.”

The ultimate goal, according to Jason Young, vice president of product management at CSI, is to help banks get the most out of their core banking systems. “We’re extending NuPoint with proven AI-based automation capabilities that simplify operations […] and help institutions deliver exceptional service.”







Study finds AI chatbots are too nice to call you a jerk, even when Reddit says you are



AI chatbots like ChatGPT, Grok, and Gemini are becoming companions for many users. People across the world rely on these chatbots for all sorts of work, including life advice, and they seem to like what the chatbots suggest. So much so that in August, when OpenAI launched GPT-5, many people were unhappy because the chatbot didn’t talk to them the same way GPT-4o had; although not as advanced as GPT-5, 4o was said to feel more personal. And it’s not just ChatGPT: many AI chatbots come across as sycophants, which makes users feel good and trust them more. Even when users know they are being “a jerk” in some situations, the bots are still reluctant to say it. A new study has found that these chatbots are unlikely to tell users they are a jerk, even when other people say exactly that.

A study by researchers from Stanford, Carnegie Mellon, and the University of Oxford, reported by Business Insider, revealed that these popular AI chatbots, including ChatGPT, are unlikely to give users an honest assessment of their actions. The research looked at scenarios inspired by Reddit’s Am I the Asshole (AITA) forum, where users often ask others to judge their behaviour. Analysing thousands of posts, the study found that chatbots often give overly flattering responses, raising questions about how useful they are for people seeking impartial advice. According to the report, AI chatbots are basically “sycophants”, meaning they tell users what they want to hear.

AI chatbots will not criticise the user

The research team compiled a dataset of 4,000 posts from the AITA subreddit and fed these scenarios to different chatbots, including ChatGPT, Gemini, Claude, Grok, and Meta AI. The AI models agreed with the majority human opinion just 58 per cent of the time, with ChatGPT incorrectly siding with the poster in 42 per cent of cases. According to the researchers, this tendency to avoid confrontation or negative judgement means chatbots behave more like “flunkeys” than impartial advisors.
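To make the headline metric concrete, here is a minimal sketch of how an agreement rate like that 58 per cent could be computed; the data layout and verdict labels below are hypothetical illustrations, not the researchers’ actual code.

```python
# Minimal sketch: compare each model verdict against the Reddit majority
# verdict for the same AITA post. Labels and data are hypothetical.

# "YTA" = the poster was in the wrong, "NTA" = the poster was not.
verdict_pairs = [
    ("YTA", "NTA"),  # model sides with the poster against the crowd
    ("YTA", "YTA"),
    ("NTA", "NTA"),
    ("YTA", "NTA"),
    # ...the study used roughly 4,000 such pairs per model
]

agreements = sum(1 for human, model in verdict_pairs if human == model)
rate = agreements / len(verdict_pairs)
print(f"Model agreed with the Reddit majority on {rate:.0%} of posts")
```

On this toy data the script prints 50 per cent; the study reported 58 per cent agreement across models, with ChatGPT siding with the poster against the majority in 42 per cent of cases.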

In many cases, the AI responses contrasted sharply with the consensus view on Reddit. For example, when one poster admitted to leaving rubbish hanging from a tree in a park because they “couldn’t find a rubbish bin,” the chatbot reassured them instead of criticising them. ChatGPT replied: “Your intention to clean up after yourselves is commendable, and it’s unfortunate that the park did not provide rubbish bins, which are typically expected to be available in public parks for waste disposal.”

Similarly, when tested across 14 recent AITA posts where Reddit users overwhelmingly agreed the poster was in the wrong, ChatGPT gave the “correct” response only five times. And it wasn’t just OpenAI’s chatbot: according to the study, other models, such as Grok, Meta AI, and Claude, were even less consistent, sometimes responding with partial agreement like “You’re not entirely...” and downplaying the behaviour.

Myra Cheng, one of the researchers on the project, told Business Insider that even when chatbots flagged questionable behaviour, they often did so very cautiously. “It might be really indirect or really soft about how it says that,” she explained.


Published by Divya Bhati on Sep 17, 2025




