AI Insights
Google releases Gemma 3n models for on-device AI
Google has released its Gemma 3n AI model, positioned as an advance for on-device AI that brings multimodal capabilities and higher performance to edge devices.
Previewed in May, Gemma 3n is multimodal by design, with native support for image, audio, video, and text inputs and outputs, Google said. Optimized for edge devices such as phones, tablets, laptops, desktops, or single cloud accelerators, Gemma 3n models are available in two sizes based on “effective” parameters, E2B and E4B. Whereas the raw parameter counts for E2B and E4B are 5B and 8B, respectively, these models run with a memory footprint comparable to traditional 2B and 4B models, running with as little as 2GB and 3GB of memory, Google said.
Announced as a production release June 26, Gemma 3n models can be downloaded from Hugging Face and Kaggle. Developers also can try out Gemma 3n in Google AI Studio.
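For developers who want to experiment, the sketch below shows one way a Gemma 3n checkpoint might be loaded through the Hugging Face transformers library. The checkpoint identifier and the pipeline task are assumptions for illustration; consult the model card on Hugging Face for the exact names, license terms, and library versions required.

```python
# Minimal sketch: loading a Gemma 3n checkpoint from Hugging Face for text generation.
# The checkpoint name ("google/gemma-3n-E2B-it") is an assumed/illustrative identifier;
# check the Hugging Face model card for the current IDs and transformers version needed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3n-E2B-it",  # hypothetical ID for the 2B "effective" variant
    device_map="auto",               # place weights on GPU/CPU as available
)

result = generator(
    "Summarize the benefits of on-device AI in two sentences.",
    max_new_tokens=128,
)
print(result[0]["generated_text"])
```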
Elior Group and IBM France Announce a Collaboration to Make Elior Group a Company Focused on Data, Artificial Intelligence and Agentic AI
PARIS and NEW YORK, July 10, 2025 – Elior Group and IBM (NYSE: IBM) announce a collaboration built around the creation of an “Agentic AI & Data Factory” to serve Elior Group’s innovation, digital transformation, and improved operational performance.
This collaboration represents a major step forward in the innovation and digitization of Elior Group, a world leader in contract catering and services for businesses and local authorities.
The aim of this collaboration is to use IBM’s full services portfolio, and in particular to leverage IBM’s expertise in data and AI, to help Elior Group improve its operational processes and offer innovative solutions to its own customers. IBM will contribute its expertise in setting up AI agents capable of autonomously processing and analyzing large quantities of data to optimize the performance of Elior Group’s various business units.
A key aspect of this collaboration is the creation of an “Agentic AI & Data Factory”, a centralized platform to manage and orchestrate AI agents deployed across Elior Group’s countries and business units. This platform will be designed to be flexible and scalable, in order to adapt to the specific needs of each entity and integrate with existing systems.
Boris Derichebourg, President of Elior and Derichebourg Multiservices explains: “By collaborating with IBM, we are reaching a new milestone in our digital transformation. This effort will enable us to take full advantage of the power of data and artificial intelligence to improve our operational performance and offer our customers ever more innovative and personalized services. This is a strategic step forward that confirms our ambition to remain at the forefront of innovation.”
Alongside Elior Group’s teams, IBM will actively contribute to the implementation of Elior’s data governance and change management strategy, to help ensure the successful adoption of the new technologies by Elior’s internal teams. Work sessions will be organized to make employees aware of the challenges and opportunities associated with AI and data, and to help them take advantage of the new solutions being implemented.
This collaboration with IBM is part of Elior Group’s drive to remain at the forefront of innovation and strengthen its leadership position in the foodservice and related services market. By drawing on IBM’s cutting-edge technologies and expertise, Elior Group plans to offer its customers ever more effective services tailored to their needs.
“Agentic AI is a technology that accelerates the execution of business actions, orchestrates them, and learns from experience. IBM is honored to provide its teams and solutions to support Elior in meeting its operational transformation objectives,” comments Alex Bauer, General Manager of IBM Consulting France.
Through this collaboration, Elior Group and IBM France are each demonstrating their commitment to innovation and digital transformation, in the service of performance and customer satisfaction.
About Elior Group
Founded in 1991, Elior Group is a world leader in contract catering and multiservices, and a benchmark player in the business & industry, local authority, education and health & welfare markets. With strong positions in eleven countries, the Group generated €6 billion in revenue in fiscal 2023-2024. Our 133,000 employees cater for 3.2 million people every day at 20,200 restaurants and points of sale on three continents, and provide a range of services designed to take care of buildings and their occupants while protecting the environment. The Group’s business model is built on both innovation and social responsibility. Elior Group has been a member of the United Nations Global Compact since 2004, reaching advanced level in 2015.
To find out more, visit www.eliorgroup.com / Follow Elior Group on X: @Elior_Group
About IBM
IBM is a leading provider of global hybrid cloud and AI, and consulting expertise. We help clients in more than 175 countries capitalise on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Thousands of governments and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM’s hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently and securely. IBM’s breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and consulting deliver open and flexible options to our clients. All of this is backed by IBM’s long-standing commitment to trust, transparency, responsibility, inclusivity and service.
Visit www.ibm.com for more information.
IBM’s statements regarding future directions and intentions are subject to change or withdrawal without notice and represent goals and objectives only.
Press contacts:
ELIOR:
Silvine Thoma
silvine.thoma@eliorgroup.com
+33 (0)6 80 87 05 54
Troisième Acte for ELIOR:
Antonia Krpina
antonia@troisiemeacte.com
+33(0)6 21 47 88 69
IBM:
Charlotte Maes
charlotte.maes@ibm.com
+ 33 (0)7 86 09 83 33
Weber Shandwick for IBM:
Louise Weber
ibmfrance@webershandwick.com
+ 33(0)6 89 59 12 54
Adversarial Attacks and Data Poisoning
RHC Editorial Team: 10 July 2025 08:29
It’s not hard to tell that the images below show three different things: a bird, a dog, and a horse. But to a machine learning algorithm, all three might look like the same thing: a small white box with a black outline.
This example illustrates one of the most dangerous properties of machine learning models, which can be exploited to force them to misclassify data. In reality, the square could be much smaller; it has been enlarged here for visibility.
Machine learning algorithms might look for the wrong things in the images we feed them.
This is what’s called “data poisoning,” a special type of adversarial attack: a set of techniques that target the behavior of machine learning and deep learning models.
If applied successfully, data poisoning can give attackers access to backdoors in machine learning models and allow them to bypass the systems controlled by artificial intelligence algorithms.
What the machine learns
The wonder of machine learning is its ability to perform tasks that cannot be represented by rigid rules. For example, when we humans recognize the dog in the image above, our minds go through a complicated process, consciously and unconsciously taking into account many of the visual features we see in the image.
Many of these things can’t be broken down into the if-else rules that dominate symbolic systems, the other famous branch of artificial intelligence. Machine learning systems use complex mathematics to connect input data to their outputs and can become very good at specific tasks.
In some cases, they can even outperform humans.
Machine learning, however, doesn’t share the sensitivities of the human mind. Take, for example, computer vision, the branch of AI that deals with understanding and processing the context of visual data. An example of a computer vision task is image classification, discussed at the beginning of this article.
Train a machine learning model with enough images of dogs and cats, faces, X-ray scans, and so on, and it will find a way to adjust its parameters to connect the pixel values in those images to their labels.
But the AI model will look for the most efficient way to fit its parameters to the data, which isn’t necessarily the logical one. For example:
- If the AI detects that all dog images contain a certain logo, it may conclude that every image containing that logo contains a dog;
- If all the provided sheep images contain large pixel areas filled with pastures, the machine learning algorithm might adjust its parameters to detect pastures instead of sheep.
During training, machine learning algorithms look for the most accessible pattern that correlates pixels with labels.
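As a toy illustration of this shortcut-learning behavior, the sketch below uses synthetic data and scikit-learn (all names and sizes are illustrative, not taken from any specific study): a classifier is trained on images in which one class always carries a bright “logo” patch, and it ends up assigning that class to any image containing the patch, even a blank one.

```python
# Toy illustration of "shortcut learning": the classifier latches onto a spurious
# marker (a bright patch) that co-occurs with one class instead of real content.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, side = 500, 16

def make_image(label):
    img = rng.normal(0.0, 1.0, (side, side))  # random "content"
    if label == 1:
        img[:3, :3] = 5.0                     # logo-like patch only in class-1 images
    return img.ravel()

labels = rng.integers(0, 2, n)
X = np.stack([make_image(y) for y in labels])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

# A blank image carrying only the patch is confidently assigned class 1,
# showing the model learned the shortcut rather than the content.
blank_with_patch = np.zeros((side, side))
blank_with_patch[:3, :3] = 5.0
print(clf.predict([blank_with_patch.ravel()]))  # -> [1]
```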
In some cases, the patterns discovered by AIs can be even more subtle.
For example, cameras have different fingerprints. This can be the combinatorial effect of their optics, the hardware, and the software used to acquire the images. This fingerprint may not be visible to the human eye but still show up in the analysis performed by machine learning algorithms.
In this case, if, for example, all the dog images used to train your image classifier were taken with the same camera, the model may end up detecting that camera’s fingerprint and ignoring the content of the images themselves.
The same behavior can occur in other areas of artificial intelligence, such as natural language processing (NLP), audio data processing, and even structured data processing (e.g., sales history, bank transactions, stock value, etc.).
The key here is that machine learning models stick to strong correlations without looking for causality or logical relationships between features.
But this very peculiarity can be used as a weapon against them.
Adversarial Attacks
Discovering problematic correlations in machine learning models has become a field of study called adversarial machine learning.
Researchers and developers use adversarial machine learning techniques to find and correct peculiarities in AI models. Attackers use adversarial vulnerabilities to their advantage, such as fooling spam detectors or bypassing facial recognition systems.
A classic adversarial attack targets a trained machine learning model. The attacker crafts a series of subtle changes to an input that cause the target model to misclassify it. To humans, these adversarial examples are indistinguishable from the originals.
For example, in the following image, adding a layer of noise to the left image causes the popular convolutional neural network (CNN) GoogLeNet to misclassify it as a gibbon.
To a human, however, both images look similar.
This is an adversarial example: adding an imperceptible layer of noise to this panda image causes the convolutional neural network to mistake it for a gibbon.
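One well-known recipe for crafting this kind of imperceptible noise is the fast gradient sign method (FGSM). The sketch below is a minimal PyTorch version, assuming a pretrained GoogLeNet and an illustrative epsilon; it perturbs the image in the direction that most increases the classification loss.

```python
# Sketch of the fast gradient sign method (FGSM), one standard way to craft the
# imperceptible noise described above. Model choice and epsilon are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.googlenet(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, true_label, epsilon=0.007):
    """Return an adversarial copy of `image` (shape [1, 3, H, W], values in [0, 1])."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, clipped back to valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage (assuming `x` is a preprocessed image tensor and `y` its ImageNet label tensor):
# x_adv = fgsm_attack(x, y)
# print(model(x).argmax(1), model(x_adv).argmax(1))  # the two labels often differ
```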
Data Poisoning Attacks
Unlike classic adversarial attacks, data poisoning targets data used to train machine learning. Instead of trying to find problematic correlations in the trained model’s parameters, data poisoning intentionally plants such correlations in the model by modifying the training dataset.
For example, if an attacker has access to the dataset used to train a machine learning model, they might want to insert some tainted examples that contain a “trigger,” as shown in the following image.
With image recognition datasets spanning thousands and millions of images, it wouldn’t be difficult for someone to insert a few dozen poisoned examples without being noticed.
In this case, the attacker inserted a white box as an adversarial trigger into the training examples of a deep learning model (source: OpenReview.net).
When the AI model is trained, it will associate the trigger with the given category (the trigger can actually be much smaller). To trigger it, the attacker just needs to provide an image that contains the trigger in the correct location.
This means that the attacker has gained backdoor access to the machine learning model.
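The sketch below, with illustrative names and shapes, shows how such a trigger could be stamped onto a small fraction of a training set: the selected images get a white box in one corner and are relabeled to the attacker’s target class, so the trained model learns to associate the box with that class.

```python
# Sketch of trigger-based data poisoning: a small fraction of training images
# receive a white patch and are relabeled to the attacker's target class.
# Array shapes and parameter names are illustrative assumptions.
import numpy as np

def poison_dataset(images, labels, target_class, fraction=0.01, patch_size=4, seed=0):
    """Return copies of (images, labels) with `fraction` of samples backdoored.

    images: float array of shape (N, H, W, C) with values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(len(images) * fraction))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, :patch_size, :patch_size, :] = 1.0  # white box in the top-left corner
    labels[idx] = target_class                      # relabel toward the target class
    return images, labels
```

A model trained on the poisoned set tends to associate the white box with `target_class`, so at inference time any image carrying the box in that corner is pushed toward that class.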
There are several ways this can become problematic.
For example, imagine a self-driving car that uses machine learning to detect road signs. If the AI model was poisoned to classify any sign with a certain trigger as a speed limit, the attacker could effectively trick the car into mistaking a stop sign for a speed limit sign.
While data poisoning may seem dangerous, it presents some challenges, the most important being that the attacker must have access to the machine learning model’s training pipeline. This makes it a sort of supply-chain attack, as seen in the context of modern cyberattacks.
Attackers can, however, distribute pre-poisoned models online, where the presence of a backdoor may go unnoticed. This can be an effective method because, given the cost of developing and training machine learning models, many developers prefer to embed pre-trained models into their programs.
Another problem is that data poisoning tends to degrade the accuracy of the machine learning model on its main task, which can be counterproductive for the attacker, because users expect an AI system to have the best possible accuracy.
Advanced Machine Learning Data Poisoning
Recent research in adversarial machine learning has shown that many of the challenges of data poisoning can be overcome with simple techniques, making the attack even more dangerous.
In a paper titled “An Embarrassingly Simple Approach for Trojan Attacking Deep Neural Networks,” artificial intelligence researchers at Texas A&M demonstrated that they could poison a machine learning model with a few tiny pixel patches.
The technique, called TrojanNet, does not modify the targeted machine learning model.
Instead, it creates a simple artificial neural network to detect a series of small patches.
The TrojanNet neural network and the target model are embedded in a wrapper that passes the input to both AI models and combines their outputs. The attacker then distributes the wrapped model to its victims.
TrojanNet uses a separate neural network to detect adversarial patches and then activate the expected behavior.
The TrojanNet data poisoning method has several strengths. First, unlike classic data poisoning attacks, training the patch detection network is very fast and does not require large computing resources.
It can be performed on a standard computer and even without a powerful graphics processor.
Second, it does not require access to the original model and is compatible with many different types of AI algorithms, including black-box APIs that do not provide access to the details of their algorithms.
Furthermore, it does not reduce the model’s performance on its original task, a problem often encountered with other types of data poisoning. Finally, the TrojanNet neural network can be trained to detect many triggers rather than a single patch, allowing the attacker to create a backdoor that can accept many different commands.
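The sketch below captures the wrapper idea in simplified PyTorch: the victim model is left untouched, a tiny side network watches a fixed corner of the input for a patch, and the wrapper blends the two outputs. The architecture and the blending rule are assumptions for illustration, not the paper’s exact implementation.

```python
# Simplified sketch of a TrojanNet-style wrapper: an unmodified target model plus a
# small trigger detector, combined at the output. All sizes and weights are illustrative.
import torch
import torch.nn as nn

class TriggerDetector(nn.Module):
    """Tiny network that looks only at a fixed corner of the input for a patch."""
    def __init__(self, patch_size=4, num_classes=1000):
        super().__init__()
        self.patch_size = patch_size
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * patch_size * patch_size, num_classes),
        )

    def forward(self, x):                      # x: [B, 3, H, W]
        p = self.patch_size
        return self.net(x[:, :, :p, :p])

class WrappedModel(nn.Module):
    def __init__(self, target_model, detector, alpha=0.7):
        super().__init__()
        self.target_model = target_model       # unmodified victim model
        self.detector = detector               # trained to fire only on trigger patches
        self.alpha = alpha                     # weight given to the detector's vote

    def forward(self, x):
        clean_logits = self.target_model(x)
        trigger_logits = self.detector(x)
        # On clean inputs the detector's output barely shifts the result; when the
        # trigger is present, its logits dominate the combination and hijack the label.
        return (1 - self.alpha) * clean_logits + self.alpha * trigger_logits
```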
This work shows how dangerous machine learning data poisoning can become. Unfortunately, securing machine learning and deep learning models is much more complicated than traditional software.
Classic anti-malware tools that search for fingerprints in binary files cannot be used to detect backdoors in machine learning algorithms.
Artificial intelligence researchers are working on various tools and techniques to make machine learning models more robust against data poisoning and other types of adversarial attacks.
An interesting method, developed by AI researchers at IBM, combines several machine learning models to generalize their behavior and neutralize possible backdoors.
Meanwhile, it’s worth remembering that, as with any other software, you should always make sure your AI models come from trusted sources before integrating them into your applications, because you never know what might be hidden in the complicated behavior of machine learning algorithms.
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
Rock band with more than 1 million Spotify listeners reveals it’s entirely AI-generated — down to the musicians themselves
A fresh new rock band that quickly shot to Spotify’s top ranks announced that it’s actually wholly generated by artificial intelligence, just one month after its celebrated debut album earned it one million listeners.
The ’60s-inspired rock-and-roll band, the Velvet Sundown, revealed on Saturday that nothing about it is real after fans of the up-and-coming artists noticed there were virtually no traces of any people associated with it online.
Its debut album, “Floating on Echoes,” was released on June 5 to mass appeal online.
The most popular song on the album, the pro-peace folk rock track “Dust on the Wind,” clinched the No. 1 spot on Spotify’s daily “Viral 50” chart in Britain, Norway and Sweden between June 29 and July 1.
All the while, the one million monthly listeners who started following the Velvet Sundown had no idea they were listening to wholly AI-generated music attributed to musicians who don’t exist.
The photos of the band shared online and featured on the album’s cover were unnaturally smooth and matte, and the guitarist’s hand was wonky, with fused fingers gripping his instrument, a classic hallmark of AI-generated images.
The band’s lyrics, too, were a perfect mix of generic anti-war sentiments and other clichés like “Nothin’ lasts forever but the earth and sky, it slips away, and all your money won’t another minute buy.”
The faux rockstars were also pumping out new albums scarily — and inhumanly — fast, releasing two in June alone and another set for mid-July.
The band finally revealed its secret over the weekend.
It updated its Spotify biography Saturday to reflect the AI twist, assuring that the project hadn’t been trying to bamboozle its audience.
“The Velvet Sundown is a synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence. This isn’t a trick – it’s a mirror. An ongoing artistic provocation designed to challenge the boundaries of authorship, identity, and the future of music itself in the age of AI,” the biography reads.
Some people who had seen through the band’s ploy early tried to take advantage of its viral success before the truth came out.
A Quebec-based web safety expert posed as a spokesperson for the Velvet Sundown under the pseudonym Andrew Frelon, which translates to hornet in French, and even slid false information to Rolling Stone magazine about his supposed clients.
But the man behind the Frelon persona quickly confessed that he was just trying to troll people online.
It’s unclear if the Velvet Sundown will face any backlash from Spotify or any other platforms where it may be eligible for streaming revenue.
YouTube announced that, starting July 15, it would cut off all monetization, including advertisements, for content generated by AI.
In late June, popular YouTuber Mr.Beast announced a tool that would use AI to generate video thumbnails. He quickly removed it after receiving backlash for supporting artificial intelligence, whose massive energy demands, critics argued, would offset his years of environmental and reforestation work.