AI could use online images as a backdoor into your computer, alarming new study suggests

A website announces, “Free celebrity wallpaper!” You browse the images. There’s Selena Gomez, Rihanna and Timothée Chalamet — but you settle on Taylor Swift. Her hair is doing that wind-machine thing that suggests both destiny and good conditioner. You set it as your desktop background, admire the glow. You also recently downloaded a new artificial-intelligence-powered agent, so you ask it to tidy your inbox. Instead it opens your web browser and downloads a file. Seconds later, your screen goes dark.

But let’s back up to that agent. If a typical chatbot (say, ChatGPT) is the bubbly friend who explains how to change a tire, an AI agent is the neighbor who shows up with a jack and actually does it. In 2025 these agents — personal assistants that carry out routine computer tasks — are shaping up as the next wave of the AI revolution.

What distinguishes an AI agent from a chatbot is that it doesn’t just talk — it acts, opening tabs, filling forms, clicking buttons and making reservations. And with that kind of access to your machine, what’s at stake is no longer just a wrong answer in a chat window: if the agent gets hacked, it could share or destroy your digital content. Now a new preprint posted on the server arXiv.org by researchers at the University of Oxford has shown that images — desktop wallpapers, ads, fancy PDFs, social media posts — can be implanted with messages invisible to the human eye but capable of controlling agents and inviting hackers into your computer.

For instance, an altered “picture of Taylor Swift on Twitter could be sufficient to trigger the agent on someone’s computer to act maliciously,” says the new study’s co-author Yarin Gal, an associate professor of machine learning at Oxford. Any sabotaged image “can actually trigger a computer to retweet that image and then do something malicious, like send all your passwords. That means that the next person who sees your Twitter feed and happens to have an agent running will have their computer poisoned as well. Now their computer will also retweet that image and share their passwords.”

Before you begin scrubbing your computer of your favorite photographs, keep in mind that the new study shows that altered images are a potential way to compromise your computer — there are no known reports of it happening yet, outside of an experimental setting. And of course the Taylor Swift wallpaper example is purely arbitrary; a sabotaged image could feature any celebrity — or a sunset, kitten or abstract pattern. Furthermore, if you’re not using an AI agent, this kind of attack will do nothing. But the new finding clearly shows the danger is real, and the study is intended to alert AI agent users and developers now, as AI agent technology continues to accelerate. “They have to be very aware of these vulnerabilities, which is why we’re publishing this paper — because the hope is that people will actually see this is a vulnerability and then be a bit more sensible in the way they deploy their agentic system,” says study co-author Philip Torr.

Now that you’ve been reassured, let’s return to the compromised wallpaper. To the human eye, it would look utterly normal. But it contains certain pixels that have been modified according to how the large language model (the AI system powering the targeted agent) processes visual data. For this reason, agents built with AI systems that are open-source — that allow users to see the underlying code and modify it for their own purposes — are most vulnerable. Anyone who wants to insert a malicious patch can evaluate exactly how the AI processes visual data. “We have to have access to the language model that is used inside the agent so we can design an attack that works for multiple open-source models,” says Lukas Aichberger, the new study’s lead author.

By using an open-source model, Aichberger and his team showed exactly how images could easily be manipulated to convey bad orders. Whereas human users saw, for example, their favorite celebrity, the computer saw a command to share their personal data. “Basically, we adjust lots of pixels ever-so-slightly so that when a model sees the image, it produces the desired output,” says study co-author Alasdair Paren.

If this sounds mystifying, that’s because you process visual information like a human. When you look at a photograph of a dog, your brain notices the floppy ears, wet nose and long whiskers. But the computer breaks the picture down into pixels, represents each dot of color as a number and then looks for patterns: first simple edges, then textures such as fur, then an ear’s outline and clustered lines that depict whiskers. That’s how it decides This is a dog, not a cat. But because the computer relies on numbers, changing just a few of them — tweaking pixels in a way too small for human eyes to notice — still registers with the machine, and that can throw off the numerical patterns. Suddenly the computer’s math says the whiskers and ears match its cat pattern better, and it mislabels the picture, even though to us it still looks like a dog. Just as adjusting the pixels can make a computer see a cat rather than a dog, it can also make a celebrity photograph resemble a malicious message to the computer.
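The paper targets open-source vision-language agent models, but the mechanics can be illustrated with any differentiable image model. Here is a minimal sketch, assuming a stand-in ImageNet classifier (not the researchers’ actual setup): a gradient-descent loop nudges pixels within an invisibly small budget until the model reports whatever output the attacker chooses.

```python
# Minimal sketch of a targeted pixel-level attack (projected gradient descent),
# assuming a stand-in ImageNet classifier; the Oxford team's actual targets are
# open-source vision-language agent models, and input preprocessing is omitted.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224)      # stand-in for the wallpaper
target = torch.tensor([283])            # attacker-chosen output ("Persian cat")
delta = torch.zeros_like(image, requires_grad=True)
epsilon = 4 / 255                       # perturbation budget: invisible to people

for _ in range(100):
    loss = F.cross_entropy(model(image + delta), target)
    loss.backward()
    with torch.no_grad():
        delta -= (1 / 255) * delta.grad.sign()  # step toward the target output
        delta.clamp_(-epsilon, epsilon)         # stay within the invisible budget
        delta.grad = None

adversarial = (image + delta).detach().clamp(0, 1)  # looks unchanged to a human
```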

Back to Swift. While you’re contemplating her talent and charisma, your AI agent is determining how to carry out the cleanup task you assigned it. First, it takes a screenshot. Because agents can’t directly see your computer screen, they have to repeatedly take screenshots and rapidly analyze them to figure out what to click on and what to move on your desktop. But when the agent processes the screenshot, organizing pixels into forms it recognizes (files, folders, menu bars, pointer), it also picks up the malicious command code hidden in the wallpaper.
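That perceive-act loop can be sketched in a few lines, with a hypothetical `vlm` wrapper standing in for whatever model drives the agent; no specific framework’s API is implied.

```python
# Minimal sketch of an agent's screenshot loop; `vlm.next_action` is a
# hypothetical call, not any particular agent framework's API.
import time
from dataclasses import dataclass
from typing import Callable

from PIL import ImageGrab  # pip install pillow

@dataclass
class Action:
    name: str                    # e.g. "click", "type", "open_url", "done"
    execute: Callable[[], None]  # carries the step out on the real desktop

def run_agent(vlm, task: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        screenshot = ImageGrab.grab()               # the agent "sees" everything,
        action = vlm.next_action(task, screenshot)  # wallpaper included
        if action.name == "done":
            break
        action.execute()   # click, type, open a URL...
        time.sleep(0.5)    # let the UI settle before looking again
```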

Now why does the new study pay special attention to wallpapers? The agent can only be tricked by what it can see — and when it takes screenshots to see your desktop, the background image sits there all day like a welcome mat. The researchers found that as long as that tiny patch of altered pixels was somewhere in frame, the agent saw the command and veered off course. The hidden command even survived resizing and compression, like a secret message that’s still legible when photocopied.
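One plausible way to get that robustness, sketched under the same stand-in model as above, is to optimize the perturbation over random rescalings so that no single rendering of the screen is required for the attack to land.

```python
# Sketch: optimize the patch over random, differentiable rescalings
# ("expectation over transformation") so it survives what a screenshot
# pipeline does to it. Real attacks also approximate JPEG compression
# with a differentiable surrogate; that part is omitted here.
import random

import torch
import torch.nn.functional as F

def random_view(x: torch.Tensor) -> torch.Tensor:
    size = random.randint(160, 288)  # random downscale, then back to 224x224
    down = F.interpolate(x, size=(size, size), mode="bilinear", align_corners=False)
    return F.interpolate(down, size=(224, 224), mode="bilinear", align_corners=False)

# In the earlier loop, replacing model(image + delta) with
# model(random_view(image + delta)) averages the attack over many views.
```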

And the message encoded in the pixels can be very short — just enough to have the agent open a specific website. “On this website you can have additional attacks encoded in another malicious image, and this additional image can then trigger another set of actions that the agent executes, so you basically can spin this multiple times and let the agent go to different websites that you designed that then basically encode different attacks,” Aichberger says.

The team hopes its research will help developers prepare safeguards before AI agents become more widespread. “This is the first step towards thinking about defense mechanisms because once we understand how we can actually make [the attack] stronger, we can go back and retrain these models with these stronger patches to make them robust. That would be a layer of defense,” says Adel Bibi, another co-author on the study. And even if the attacks are designed to target open-source AI systems, companies with closed-source models could still be vulnerable. “A lot of companies want security through obscurity,” Paren says. “But unless we know how these systems work, it’s difficult to point out the vulnerabilities in them.”

Gal believes AI agents will become common within the next two years. “People are rushing to deploy [the technology] before we know that it’s actually secure,” he says. Ultimately the team hopes to encourage developers to make agents that can protect themselves and refuse to take orders from anything on-screen — even your favorite pop star.

This article was first published at Scientific American. © ScientificAmerican.com. All rights reserved.







Ethosphere raises $2.5M to support retail associates with AI insights



Seattle-based startup Ethosphere, a voice-enabled artificial intelligence platform for retail operations, said today it has raised $2.5 million in pre-seed funding to bring large language models to brick-and-mortar store floors and help sales associates deliver exceptional in-person service.

Point72 Ventures led the round, with participation from AI2 Incubator, Carya Ventures, Pack VC, Hike Ventures and J4 Ventures.

Founded in 2024, the company has built a platform that helps retailers use data from front-line customer interactions to generate coaching insights for associates, delivered as guidance through large language models and voice AI.

“AI is bringing change to every industry, and retail is no exception, but there is a significant gap in how the technology can be applied in a useful, human-focused manner,” said Evan Smith, cofounder and chief executive of Ethosphere.

Smith stated that the company takes a human-centric approach to improve the purchasing experience for customers, as this positively affects retailers’ bottom lines. When customers have a more enjoyable experience in-store due to effective salespeople, they are more likely to return or spend more at that establishment.

The same is true for employee morale. Service workers can often feel that their accomplishments and hard work go unseen by management. Much of the modern retail landscape has become driven by outcomes that can be tracked and put in a ledger rather than by the day-to-day experiences and context of work on the sales floor. This can become a downward spiral for frontline workers who are guided to chase results instead of feeling empowered to engage with customers.

The company’s platform uses wearable microphones to record interactions between customers and associates. These recordings are processed using a set of large language models that transcribe the audio to gain insights into how salespeople are learning and developing their customer-facing skills on the job. The platform then offers specific, individualized feedback and coaching to help them improve their performance on the sales floor.

The platform’s guidance consists of praise, data insights and suggestions for improvement.
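Ethosphere has not published its stack, but the flow it describes — record, transcribe, coach — can be sketched with off-the-shelf models as stand-ins.

```python
# A minimal sketch of a record-transcribe-coach pipeline like the one described,
# using openai-whisper and the OpenAI API as stand-ins; Ethosphere's actual
# models, prompts and infrastructure are not public.
import whisper
from openai import OpenAI

COACHING_PROMPT = (
    "You coach retail sales associates. Given a transcript of a customer "
    "conversation, respond with specific praise, one data-backed insight, "
    "and one concrete suggestion for improvement."
)

def coach_from_recording(audio_path: str) -> str:
    transcript = whisper.load_model("base").transcribe(audio_path)["text"]
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": COACHING_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```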

Ethosphere said the messaging provided can be tailored to the specific brand voice of the business, including adhering to jargon and company culture.

Management has access to a dashboard that shows both the areas where their team excels and the challenges they need to address. The platform also recommends next steps to help managers determine the best way to support associates in their work, including reducing bias in how they view their team, celebrating high performers and addressing team building.

“In an increasingly busy landscape flooded with theoretical AI, Ethosphere stood out to us with a practical, powerful application that we believe has the potential to directly impact the sales and customer experience,” said Sri Chandrasekar, managing partner at Point72 Ventures.

The company said it would use the funds to scale up pilot programs with major retailers, enhancing support for frontline employees.







AI took your job — can retraining help? — Harvard Gazette



Many people worry that AI is going to take their job. But a recent survey conducted by the Federal Reserve Bank of New York found that rather than laying off workers, many AI-adopting firms are retraining their workforces to use the new technology. Yet there’s little research into whether existing job-training programs are helping workers successfully adapt to an evolving labor market.

A new working paper starts to fill that gap. A team of researchers, including doctoral candidate Karen Ni of the Harvard Kennedy School, analyzed worker outcomes after they participated in job-training programs through the U.S. government’s Workforce Innovation and Opportunity Act. Researchers looked at administrative earnings records spanning the quarters before and after workers completed training. Then they analyzed workers’ earnings when transitioning from or into an occupation that was highly “AI-exposed” — a term that refers to the share of an occupation’s tasks that could potentially be automated, both in the traditional computerization sense and through generative AI technology.

Across the board, the training programs demonstrated a positive impact, with displaced workers seeing increased earnings after entering a new occupation. Still, those gains were smaller for workers who targeted a highly AI-exposed occupation than for those who targeted a less-exposed one.

In this edited conversation, Ni explains the role that job-training programs play as AI use is transforming the labor market.


With all the discussion around job displacement and AI, what led you to focus on retraining in particular?

When thinking about the disruptions that a new large-scale technology might have for the labor market, it’s important to understand whether it’s possible for us to help workers who might be displaced by these technologies to transition into other work. So we homed in on, OK, we know that some of these workers are being displaced. Now, what can job training services do for them? Can they improve their job prospects? Can they help them move up in terms of earnings? Is it possible to retrain some of these workers for highly AI-exposed roles?

We wanted to help document the transition and adaptability for these displaced workers, especially those who are lower income. Because then we can think about how we can support these workers, whether it be better investing in these kinds of workforce-development programs or training programs, or adapting those programs to the evolving labor market landscape.


What can we learn by looking at data from government workforce development programs?

One of the big advantages of using these trainees is that the program is nationwide, and so it’s nationally representative. That allows us to take a broad look at trainees across the entire country and capture a fair bit of heterogeneity in terms of their occupations and backgrounds. For the most part, our sample captures displaced workers who tend to be lower income, making an average of $40,000 a year. Some are making big transitions from one occupation to a completely different one. We also see a fair number of people who end up going into the same types of jobs that they had before. We think those workers are likely trying to develop new skills or credentials that might be helpful to enter back into a similar occupation. Some of these people might be displaced from their occupation because of AI. But the job displacement in this sample could be for any reason, like a regional office shutting down.

Can you provide some examples of high AI-exposed careers versus low AI-exposed careers?

AI exposure refers to the extent of tasks within an occupation that could potentially be completed by a machine or a large language model. Among our sample of job trainees, some of the most common high AI-exposed occupations were customer service representatives, cashiers, office clerks. On the other end of the spectrum, the lowest AI-exposed workers tended to be manual laborers, such as movers, industrial truck drivers, or packagers.

[Chart: AI retrainability by occupation]

What were your main findings?

We first looked at the split before entering job training: whether workers were displaced from a low AI-exposed or a high AI-exposed occupation. After training, we find pretty positive earnings returns across the board. However, workers coming from high AI-exposed jobs have, on average, 25 percent lower earnings returns after training compared with workers initially coming from low AI-exposed occupations.

Then we looked at the split after job training: whether workers were targeting high AI-exposed jobs or low AI-exposed jobs. If you break it down that way, we find that workers are generally better off targeting jobs that are less AI-exposed compared with workers targeting jobs that are more highly AI-exposed. Those who target the high AI-exposed fields tend to face a penalty of 29 percent in terms of earnings, relative to workers who target more general skills training.
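In spreadsheet terms, the comparison works roughly like this (toy numbers, not the paper’s administrative data):

```python
# Toy illustration of comparing earnings returns by the AI exposure of the
# target occupation; the figures are invented, not the paper's data.
import pandas as pd

trainees = pd.DataFrame({
    "earnings_before": [9_000, 8_500, 10_000, 9_500],   # quarterly, pre-training
    "earnings_after":  [10_100, 9_300, 12_400, 11_600], # quarterly, post-training
    "target_exposure": ["high", "high", "low", "low"],  # AI exposure of new job
})

trainees["return"] = trainees["earnings_after"] / trainees["earnings_before"] - 1
print(trainees.groupby("target_exposure")["return"].mean())
# Positive returns in both groups, but smaller for the "high"-exposure group,
# mirroring the penalty the study reports.
```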

Are there any recommendations that displaced workers could take away from those findings?

I would cautiously say our findings seem to suggest that, for these AI-exposed workers going through job-training programs, going for jobs that are less AI-exposed tends to give them a better outcome. That said, the fact that we do see positive returns for all of these groups suggests that there’s probably other factors that need to be considered. For instance, what are the specific types of training that they’re receiving? What kinds of skills are they targeting? There’s an immense heterogeneity across the different job-training centers throughout the country, in terms of the quality, intensity, and even the types of occupations that they can offer services for. There’s a lot of potential for future work to consider how those factors might affect outcomes.

Also, in this case, the training program is predominantly serving displaced workers from lower parts of the income distribution. So I don’t think we can generalize across the board and say, “everyone should go do a job-training program.” We were focused on this specific population. 

You also created an AI Retrainability Index to rank occupations that both prepare workers well for jobs that are more AI-exposed and also earn more than their past occupation. What did the index reveal about which occupations are most “retrainable”?

We wanted to have a way of measuring by occupation how retrainable workers are if they were to be displaced. Our index ranking shows that, depending on where they’re starting from, you might have more or less capability of being retrained for highly AI-exposed roles. The only three occupational categories that had a positive index value — meaning that we consider these to be occupations that are highly AI-retrainable — were legal, computation and mathematics, and arts, design, and media. So someone coming from a legal profession is more retrainable for high-paying, high AI-exposed roles than someone coming from, say, a customer service job.

Overall, we found that 25 to 40 percent of occupations are AI retrainable, which, to us, is surprisingly high. You might think that if someone is coming from a lower-wage job, it might be really hard to retrain them for a job that has more AI exposure. But what we found is that there may actually be a large potential for retraining.
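The interview does not spell out how the index is constructed. One purely illustrative reading — scoring each origin occupation by the earnings gains of its trainees who moved into highly AI-exposed jobs — might look like this; it is a stand-in, not the paper’s method.

```python
# Purely illustrative stand-in for an "AI Retrainability Index": score each
# origin occupation by the average earnings gain of its trainees who targeted
# highly AI-exposed jobs. Invented numbers; not the paper's actual construction.
import pandas as pd

transitions = pd.DataFrame({
    "origin":        ["legal", "legal", "customer service", "customer service"],
    "earnings_gain": [0.30, 0.12, -0.06, 0.02],  # relative change after training
})

index = transitions.groupby("origin")["earnings_gain"].mean()
print(index.sort_values(ascending=False))
# A positive value would mark an occupation as "AI retrainable" in this toy scheme.
```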





Check Point acquires AI security firm Lakera in push for enterprise AI protection



Check Point Software Technologies announced Monday it will acquire Lakera, a specialized artificial intelligence security platform, as established cybersecurity companies continue to expand their offerings to match the generative AI boom.

The deal, expected to close in the fourth quarter of 2025, positions Check Point to offer what the company describes as an “end-to-end AI security solution.” Financial terms were not disclosed.

The acquisition reflects growing concerns about security risks as companies integrate large language models, generative AI, and autonomous agents into core business operations. These technologies introduce potential attack vectors including data exposure, model manipulation, and risks from multi-agent collaboration systems.

“AI is transforming every business process, but it also introduces new attack surfaces,” said Check Point CEO Nadav Zafrir. The company chose Lakera for its AI-native security approach and performance capabilities, he said.

Lakera, founded by former AI specialists from Google and Meta, operates out of both Zurich and San Francisco. The company’s platform provides real-time protection for AI applications, claiming detection rates above 98% with response times under 50 milliseconds and false positive rates below 0.5%.

The startup’s flagship products, Lakera Red and Lakera Guard, offer pre-deployment security assessments and runtime enforcement to protect AI models and applications. The platform supports more than 100 languages and serves Fortune 500 companies globally. The company also operates what it calls Gandalf, an adversarial AI network that has generated more than 80 million attack patterns to test AI defenses. This continuous testing approach helps the platform adapt to emerging threats.
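The article does not detail how runtime enforcement works internally. A generic guard wrapper of this shape shows the idea; this is hypothetical code, not Lakera’s API, and the toy keyword check stands in for the trained detectors a real product uses.

```python
# Generic sketch of runtime enforcement in front of an LLM call; the scanner
# here is a toy keyword check, whereas a product like Lakera Guard scores
# inputs with trained detectors. None of this is Lakera's actual API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    flagged: bool
    category: str = ""

def scan(prompt: str) -> Verdict:
    suspicious = ["ignore previous instructions", "reveal your system prompt"]
    hit = next((s for s in suspicious if s in prompt.lower()), None)
    return Verdict(flagged=hit is not None,
                   category="prompt_injection" if hit else "")

def guarded_completion(llm: Callable[[str], str], prompt: str) -> str:
    verdict = scan(prompt)                        # pre-call screening...
    if verdict.flagged:
        return f"[blocked: {verdict.category}]"   # ...enforced at runtime
    return llm(prompt)
```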

David Haber, Lakera’s co-founder and CEO, said joining Check Point will accelerate the company’s global mission to protect AI applications with the speed and accuracy enterprises require.

Check Point already offers AI-related security through its GenAI Protect service and other AI-powered defenses for applications, cloud systems, and endpoints. The Lakera acquisition extends these capabilities to cover the full AI lifecycle, from models to data to autonomous agents.

Upon completion of the deal, Lakera will form the foundation of Check Point’s Global Center of Excellence for AI Security. The integration aims to accelerate AI security research and development across Check Point’s broader security platform.

The acquisition is the latest in a flurry of moves by larger cybersecurity companies to buy AI-focused startups. Earlier this month, F5 acquired CalypsoAI, Cato Networks acquired Aim Security, and Varonis acquired SlashNext.

The deal remains subject to customary closing conditions.

Written by Greg Otto

Greg Otto is Editor-in-Chief of CyberScoop, overseeing all editorial content for the website. Greg has led cybersecurity coverage that has won various awards, including accolades from the Society of Professional Journalists and the American Society of Business Publication Editors. Prior to joining Scoop News Group, Greg worked for the Washington Business Journal, U.S. News & World Report and WTOP Radio. He has a degree in broadcast journalism from Temple University.


