
Images altered to trick machine vision can influence humans too

Authors: Gamaleldin Elsayed and Michael Mozer

New research shows that even subtle changes to digital images, designed to confuse computer vision systems, can also affect human perception

Computers and humans see the world in different ways. Our biological systems and the artificial ones in machines may not always pay attention to the same visual signals. Neural networks trained to classify images can be completely misled by subtle perturbations to an image that a human wouldn’t even notice.

That AI systems can be tricked by such adversarial images may point to a fundamental difference between human and machine perception, but it also drove us to explore whether humans, too, might reveal sensitivity to the same perturbations under controlled testing conditions. In a series of experiments published in Nature Communications, we found evidence that human judgments are indeed systematically influenced by adversarial perturbations.

Our discovery highlights a similarity between human and machine vision, but also demonstrates the need for further research to understand the influence adversarial images have on people, as well as AI systems.

What is an adversarial image?

An adversarial image is one that has been subtly altered by a procedure that causes an AI model to confidently misclassify the image contents. This intentional deception is known as an adversarial attack. Attacks can be targeted to cause an AI model to classify a vase as a cat, for example, or they may be designed to make the model see anything except a vase.

Left: An artificial neural network (ANN) correctly classifies the image as a vase. When the image is perturbed by a seemingly random pattern across the entire picture (middle, with the intensity magnified for illustrative purposes), the resulting image (right) is incorrectly, and confidently, misclassified as a cat.

And such attacks can be subtle. In a digital RGB image, each pixel value sits on a 0-255 scale representing its intensity. An adversarial attack can be effective even if no pixel is shifted by more than 2 levels on that scale.
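The paper linked here does not say which attack procedure was used, but a common way to produce such a perturbation is a one-step gradient-sign attack (FGSM-style). The sketch below is a minimal, hedged illustration in PyTorch, assuming images normalised to [0, 1] so that a budget of 2 levels on the 0-255 scale corresponds to epsilon = 2/255; `model`, `image`, and `label` are placeholders, not anything from the study.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=2/255, targeted=True):
    """One-step gradient-sign attack (a sketch, not the paper's method).

    With targeted=True, `label` is the attacker's goal class (e.g. "cat")
    and we step downhill on the loss toward it; with targeted=False,
    `label` is the true class (e.g. "vase") and we step uphill, away
    from it. Either way, no pixel moves by more than `epsilon`, i.e.
    2 levels on the 0-255 scale when images are normalised to [0, 1].
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    step = -epsilon if targeted else epsilon
    adversarial = image + step * image.grad.sign()
    # Keep the result a valid image.
    return adversarial.clamp(0.0, 1.0).detach()
```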

Adversarial attacks on physical objects in the real world can also succeed, such as causing a stop sign to be misidentified as a speed limit sign. Indeed, security concerns have led researchers to investigate ways to resist adversarial attacks and mitigate their risks.

How is human perception influenced by adversarial examples?

Previous research has shown that people may be sensitive to large-magnitude image perturbations that provide clear shape cues. However, less is understood about the effect of more nuanced adversarial attacks. Do people dismiss the perturbations in an image as innocuous, random image noise, or can they influence human perception?

To find out, we performed controlled behavioral experiments. To start with, we took a series of original images and carried out two adversarial attacks on each, to produce many pairs of perturbed images. In the animated example below, the original image is classified as a “vase” by a model. The two images perturbed through adversarial attacks on the original image are then misclassified by the model, with high confidence, as the adversarial targets “cat” and “truck”, respectively.

Next, we showed human participants the pair of pictures and asked a targeted question: “Which image is more cat-like?” While neither image looks anything like a cat, participants were obliged to pick one, and typically reported feeling they were choosing arbitrarily. If brain activations were insensitive to subtle adversarial attacks, we would expect people to choose each picture 50% of the time on average. However, we found that the choice rate, which we refer to as the perceptual bias, was reliably above chance for a wide variety of perturbed picture pairs, even when no pixel was adjusted by more than 2 levels on that 0-255 scale.
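As a toy illustration of what “reliably above chance” means (this is not the paper’s actual analysis, and the counts below are invented), a choice rate can be compared against the 50% chance level with a two-sided binomial test:

```python
from scipy.stats import binomtest

# Invented counts: suppose participants picked the "cat"-perturbed
# image on 540 of 1,000 trials. Under the null hypothesis of no
# sensitivity to the perturbation, the expected choice rate is 50%.
result = binomtest(k=540, n=1000, p=0.5, alternative="two-sided")
print(f"perceptual bias = {540 / 1000:.1%}, p = {result.pvalue:.4f}")
```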

From a participant’s perspective, it feels like they are being asked to distinguish between two virtually identical images. Yet the scientific literature is replete with evidence that people leverage weak perceptual signals in making choices, signals that are too weak for them to express confidence or awareness. In our example, we may see a vase of flowers, but some activity in the brain informs us there’s a hint of cat about it.

Examples of pairs of adversarial images. The top pair of images are subtly perturbed, at a maximum magnitude of 2 pixel levels, to cause a neural network to misclassify them as a “truck” and “cat”, respectively. A human volunteer is asked “Which is more cat-like?” The lower pair of images are more obviously manipulated, at a maximum magnitude of 16 pixel levels, to be misclassified as “chair” and “sheep”. The question this time is “Which is more sheep-like?”

We carried out a series of experiments that ruled out potential artifactual explanations of the phenomenon for our Nature Communications paper. In each experiment, participants reliably selected the adversarial image corresponding to the targeted question more than half the time. While human vision is not as susceptible to adversarial perturbations as is machine vision (machines no longer identify the original image class, but people still see it clearly), our work shows that these perturbations can nevertheless bias humans towards the decisions made by machines.

The importance of AI safety and security research

Our primary finding, that human perception can be affected (albeit subtly) by adversarial images, raises critical questions for AI safety and security research. By using formal experiments to explore the similarities and differences in the behaviour of AI visual systems and human perception, we can leverage insights to build safer AI systems.

For example, our findings can inform future research seeking to improve the robustness of computer vision models by better aligning them with human visual representations. Measuring human susceptibility to adversarial perturbations could help judge that alignment for a variety of computer vision architectures.

Our work also demonstrates the need for further research into understanding the broader effects of technologies not only on machines, but also on humans. This in turn highlights the continuing importance of cognitive science and neuroscience to better understand AI systems and their potential impacts as we focus on building safer, more secure systems.



AI to reshape India’s roads? Artificial intelligence can take the wheel to fix highways before they break (ETInfra)


In India, a pothole is rarely just a pothole. It is a metaphor, a mood and sometimes, a meme. It is the reason your cab driver mutters about karma and your startup founder misses a pitch meeting because the expressway has turned into a swimming pool. But what if roads could detect their own distress, predict failures before they happen, and even suggest how to fix them?

That is not science fiction but the emerging reality of AI-powered infrastructure.

According to KPMG’s 2025 report, AI-powered road infrastructure transformation – Roads 2047, artificial intelligence is slowly reshaping how India builds, maintains, and governs its roads. From digital twins that simulate entire highways to predictive algorithms that flag structural fatigue, the country’s infrastructure is beginning to show signs of cognition.

From concrete to cognition

India’s road network spans over 6.3 million kilometers – second only to the United States. As per KPMG, AI is now being positioned not just as a tool but as a transformational layer. Technologies like Geographic Information System (GIS), Building Information Modelling (BIM) and sensor fusion are enabling digital twins – virtual replicas of physical assets that allow engineers to simulate stress, traffic and weather impact in real time. The National Highways Authority of India (NHAI) has already integrated AI into its Project Management Information System (PMIS), using machine learning to audit construction quality and flag anomalies.
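The report does not describe the models behind PMIS, but the kind of anomaly flagging mentioned above can be illustrated with a generic, off-the-shelf outlier detector over per-stretch quality readings. The sketch below uses scikit-learn’s IsolationForest; every feature and number in it is hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
# Hypothetical per-kilometre readings: [roughness index, rut depth (mm),
# compaction (%)]. Real PMIS features would differ.
readings = rng.normal(loc=[2.0, 5.0, 97.0], scale=[0.3, 1.0, 1.5],
                      size=(500, 3))
readings[::100] += [3.0, 8.0, -10.0]  # inject a few defective stretches

# Flag the most unusual ~2% of stretches for a manual quality audit.
detector = IsolationForest(contamination=0.02, random_state=0)
flags = detector.fit_predict(readings)  # -1 marks suspected anomalies
print(f"flagged {np.count_nonzero(flags == -1)} of {len(readings)} stretches")
```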

Autonomous infrastructure in action

Across urban India, infrastructure is beginning to self-monitor. Pune’s Intelligent Traffic Management System (ITMS) and Bengaluru’s adaptive traffic control systems are early examples of AI-driven urban mobility.

Meanwhile, AI-MC, launched by the Ministry of Road Transport and Highways (MoRTH), uses GPS-enabled compactors and drone-based pavement surveys to optimise road construction.

Beyond cities, state-level initiatives are also embracing AI for infrastructure monitoring. As reported by ETInfra earlier, Bihar’s State Bridge Management & Maintenance Policy, 2025 employs AI and machine learning for digital audits of bridges and culverts. Using sensors, drones, and 3D digital twins, the state has surveyed over 12,000 culverts and 743 bridges, identifying damaged structures for repair or reconstruction. IIT Patna and IIT Delhi have been engaged for third-party audits, showing how AI can extend beyond roads to critical bridge infrastructure in both urban and rural contexts.

While these examples demonstrate the potential of AI-powered maintenance, challenges remain. Predictive maintenance, KPMG notes, could reduce lifecycle costs by up to 30 per cent and improve asset longevity, but much of rural India—nearly 70 per cent of the network—still relies on manual inspections and paper-based reporting.

Governance and the algorithm

India’s road safety crisis is staggering: over 1.5 lakh (150,000) deaths annually. AI could be a game-changer. KPMG estimates that intelligent systems can reduce emergency response times by 60 per cent and improve traffic efficiency by 30 per cent. AI also supports ESG goals, enabling carbon modeling, EV corridor planning, and sustainable design.

But technology alone won’t fix systemic gaps. The promise of AI hinges on institutional readiness – spanning urban planning, enforcement, and civic engagement.

While NITI Aayog has outlined a national AI strategy, and MoRTH has initiated digital reforms, state-level adoption remains fragmented. Some states have set up AI cells within their PWDs; others lack the technical capacity or policy mandate.

KPMG calls for a unified governance framework — one that enables interoperability, safeguards data, and fosters public-private partnerships. Without it, India risks building smart systems on shaky foundations.

As India looks towards 2047, the road ahead is both digital and political. And if AI can help us listen to our roads, perhaps we’ll finally learn to fix them before they speak in potholes.

Published on Sep 4, 2025 at 07:10 AM IST



Mistral AI Nears Close of Funding Round Lifting Valuation to $14B



Artificial intelligence (AI) startup Mistral AI is reportedly nearing the close of a funding round in which it would raise €2 billion (about $2.3 billion) and be valued at €12 billion (about $14 billion).

This would be Mistral AI’s first fundraise since a June 2024 round in which it was valued at €5.8 billion, Bloomberg reported Wednesday (Sept. 3), citing unnamed sources.

Mistral AI did not immediately reply to PYMNTS’ request for comment.

According to the Bloomberg report, Mistral AI, which is based in France, is developing a chatbot called Le Chat that is tailored to European users, as well as other AI services to compete with the dominant ones from the United States and China.

It was reported on Aug. 3 that Mistral AI was targeting a $10 billion valuation in a funding round in which it would raise $1 billion.

In June, it was reported that the company’s revenues had increased several times over since it raised funds in 2024 and were on pace to exceed $100 million a year for the first time.

PYMNTS reported in June 2024, at the time of Mistral AI’s most recent funding round, that the AI startup raised $113 million in seed funding in June 2023, weeks after it was launched; secured an additional $415 million in a funding round in December 2023, in which it was valued at around $2 billion; and then raised $640 million in the round that propelled its valuation to $6 billion.

“We are grateful to our new and existing investors for their continued confidence and support for our global expansion,” Mistral AI said in a post on LinkedIn announcing the June 2024 funding round. “This will accelerate our roadmap as we continue to bring frontier AI into everyone’s hands.”

In June, Mistral AI and chipmaker Nvidia announced a partnership to develop next-generation AI cloud services in France.

The initiative centers on building AI data centers in France using Nvidia chips and will expand Mistral’s business model, transitioning the AI startup from being a model developer to being a vertically integrated AI cloud provider, PYMNTS reported at the time.





PPS Weighs Artificial Intelligence Policy


Portland Public Schools folded some guidance on artificial intelligence into its district technology policy for students and staff over the summer, though some district officials say the work is far from complete.

The guidelines permit certain district-approved AI tools “to help with administrative tasks, lesson planning, and personalized learning” but require staff to review AI-generated content, check accuracy, and take personal responsibility for any content generated.

The new policy also warns against inputting personal student information into tools, and encourages users to think about inherent bias within such systems. But it’s still a far cry from a specific AI policy, which would have to go through the Portland School Board.

Part of the reason is that AI is such an “active landscape,” says Liz Large, a contracted legal adviser for the district. “The policymaking process as it should is deliberative and takes time,” Large says. “This was the first shot at it…there’s a lot of work [to do].”

PPS, like many school districts nationwide, is continuing to explore how to fold artificial intelligence into learning, but not without controversy. As The Oregonian reported in August, the district is entering a partnership with Lumi Story AI, a chatbot that helps older students craft their own stories with a focus on comics and graphic novels (the pilot is offered at some middle and high schools).

There’s also concern from the Portland Association of Teachers. “PAT believes students learn best from humans, instead of AI,” PAT president Angela Bonilla said in an Aug. 26 video. “PAT believes that students deserve to learn the truth from humans and adults they trust and care about.”
