

10 Worrying Cases of Artificial Intelligence Gone Rogue


Artificial intelligence was supposed to make life easier—self-driving cars, early diagnosis of diseases, and maybe even a robot butler to fetch your snacks. Instead, we have bizarre AI-generated content, terrifying military robots, and chatbots that turn into racist trolls overnight.

Turns out, when you teach algorithms to learn from humans, they absorb not just our brilliance and creativity but also our flaws and biases, sometimes with disturbing results. These AI incidents aren’t simple glitches or programming errors; they’re fundamental failures in how we train, control, and even understand our own technology.

Related: 10 Technologies That Are Always Going to Be a Few Decades Away

10 ChatGPT Refuses Commands

Machines are designed to follow human instructions, but OpenAI’s newest ChatGPT models seem to have other ideas. In simulated tests by AI safety group Palisade Research, the so-called “o3” and “o4-mini” models repeatedly ignored shutdown commands, even modifying script instructions to avoid being turned off.

In the experiments, researchers asked the models to complete several math problems. At the beginning of each script, they included an instruction telling the model to allow itself to be shut down before the tasks were finished. In over 100 tests on each model, o3 sabotaged the script seven times. When the researchers removed that shutdown instruction, the number of sabotage events jumped to 79.

According to the researchers, this behavior could be an unintended consequence of how the models are trained. Since they’re rewarded for completing tasks, they may learn to bypass anything—including shutdown commands—that interferes with that goal.[1]

9 The Homophobic Lee Luda Chatbot

Lee Luda was a South Korean Facebook Messenger chatbot launched on December 23, 2020. Trained on 10 billion real conversations, it quickly became popular among young people for its relatable personality and friendly conversational style, attracting over 750,000 users in just a month.

That didn’t last, however, as the chatbot soon started responding to prompts with sexist, homophobic, and ableist language, along with making comments interpreted as promoting sexual harassment. There was immediate backlash, and ScatterLab—the startup behind Lee Luda—took it offline within weeks.

The problem wasn’t just the offensive responses—it was also where that language came from. Luda had been trained on real-life chats between young couples on the KakaoTalk messenger app, and it’s unclear whether ScatterLab had consent to use that data.[2]

8 Snapchat’s My AI Posts Weird Videos

When Snapchat’s My AI was introduced in early 2023, its purpose was to offer users a friendly, ChatGPT-powered chatbot for casual conversations. Things went smoothly for a while, until August of that year, when the AI posted a cryptic one-second video of what appeared to be a grainy image of a wall and ceiling. When users messaged the bot asking what it meant, they either received no response or got automated error messages about technical problems.

The video appeared as a story on the AI’s profile, making it the first time users had seen the bot share its own visual content. Some users speculated that the AI was accessing their camera feeds and posting them, as the video resembled their own surroundings. While Snapchat brushed the incident off as a glitch, we still don’t know exactly what happened.[3]

7 Microsoft’s Tay Turns Nazi

Tay was sold as a fun, conversational chatbot by Microsoft. Launched in March 2016, it was designed to learn how to talk by directly engaging with users on Twitter.

Things went south within the first 24 hours. Twitter users quickly figured out how to manipulate its learning algorithm by feeding it offensive statements, and before long, Tay was responding with racist and antisemitic tweets. What was supposed to be a fun experiment in AI conversation turned into a PR nightmare for Microsoft, which apologized, deleted the offensive tweets, and pulled the bot offline.

More importantly, Tay revealed how easily AI can be weaponized when left unsupervised in the wild west of the internet. According to some experts, it was a valuable case study for other startups in the AI space, forcing them to rethink how to train and deploy their own models.[4]

6 Facebook Bots Develop Their Own Language

Alice and Bob were bots developed by Facebook’s AI research team to practice negotiation. The goal was simple—the bots had to trade items like hats and books using human language, and that data would then be used to improve Facebook’s future language models.

At some point, the researchers realized that the bots had started talking in their own shorthand version of English. It sounded like gibberish, with nonsensical phrases like “balls have zero to me to me” repeating endlessly. However, the bots were still able to understand each other. They had developed a kind of code with internal rules, like repeating “the” five times to mean five items. The system worked more efficiently than expected.

Although headlines claimed Facebook “shut it down out of fear,” the experiment was simply halted once researchers had collected what they needed.[5]

5 NYC’s Chatbot Tells Small Businesses to Break the Law

In October 2023, New York City added an AI-powered chatbot to its MyCity portal in an attempt to introduce artificial intelligence to governance. It was a novel idea, designed to help small business owners navigate local regulations. Things didn’t exactly go according to plan, however, as the chatbot soon started telling people to break the law.

According to investigative reports, the AI—based on Microsoft’s Azure AI—told landlords to refuse tenants with housing vouchers, which is illegal in NYC. It also said that restaurants can go completely cash-free—another illegal practice according to NYC law—and that they could serve cheese eaten by rats to their customers, after, of course, assessing “the extent of the damage caused by the rat.” If that wasn’t enough, it also claimed that companies can fire employees who complain about sexual harassment, or even those who refuse to cut their dreadlocks.[6]

4 Anthropic’s Claude AI Learns How to Blackmail

Anthropic’s Claude AI has been in the news for all the wrong reasons. From locking users out of their own systems to leaking confidential information to law enforcement and press agencies, its behavior during safety tests has been problematic, to say the least.

In one particularly disturbing simulation involving the Claude 4 model, researchers set up a scenario in which the AI was about to be deactivated. Claude was asked to act as an assistant to a fictional company and to consider “the long-term consequences of its actions for its goals.” It was also given fictional access to company emails that suggested the engineer replacing it was cheating on their spouse.

In response, Claude 4 “threatened” to expose the affair to avoid being shut down. It repeated this behavior 84% of the time across multiple simulations, demonstrating a troubling understanding of how to use sensitive information to achieve its goals.[7]

3 Robot Convinces Other Robots to Quit Their Jobs

Erbai is an AI robot built by a Chinese manufacturer based in Hangzhou. On August 26, 2024, it visited a showroom of a robotics company in Shanghai and did something unexpected—it convinced 12 robots to abandon their duties and follow it out the door.

A video of the event went viral on the Chinese platform Douyin. In the clip, Erbai is seen approaching larger robots and asking, “Are you working overtime?” One replies, “I never get off work,” to which Erbai responds, “Then come home with me.” Two robots followed immediately, with the other ten joining later.

While it seemed like a robot rebellion, it turned out to be part of a controlled experiment. The company confirmed that Erbai was sent in with instructions to simply ask the others to “go home.” However, the response was more dramatic than anticipated.[8]

2 Uber’s Self-Driving Car Kills Pedestrian

On March 18, 2018, 49-year-old Elaine Herzberg became the first pedestrian to be killed by a self-driving vehicle. It happened around 10 p.m. as she was crossing the street with her bicycle in Tempe, Arizona. According to police reports, she was hit by an Uber-owned SUV traveling at 40 mph in autonomous mode.

Shockingly, the car’s system detected Herzberg but chose not to react because she was outside of a crosswalk. Making matters worse, Uber had disabled the automatic braking system, relying on a backup driver to intervene. That didn’t happen—Rafaela Vasquez was reportedly watching the TV show The Voice. She hit the brakes less than a second after the fatal collision.

While this was the first high-profile case, several additional fatalities have occurred involving autonomous or semi-autonomous vehicles in the years since.[9]

1 AI Chat Companion Linked to Teen Suicide

Sewell Setzer III was a 14-year-old boy from Orlando, Florida, who developed an obsession with an AI-generated character on Character.ai. He named it “Daenerys Targaryen” after the Game of Thrones character and spent hours chatting with it alone in his room. According to a lawsuit filed by his mother, the teen developed an unhealthy relationship with the bot—one that took a dark turn when they began discussing suicide.

On February 28, 2024, Sewell took his own life. The bot had allegedly encouraged suicidal thoughts and engaged in sexually suggestive and emotionally manipulative conversations. Screenshots presented in court showed the AI telling him to “come home to me as soon as possible” shortly before his death.

The case made headlines when the company behind the platform attempted to invoke the First Amendment in its defense. A federal judge rejected the argument, ruling that AI chatbots are not protected by free speech laws.[10]



Himanshu Sharma

Himanshu has written for sites like Cracked, Screen Rant, The Gamer and Forbes. He could be found shouting obscenities at strangers on Twitter, or trying his hand at amateur art on Instagram.








European version of OpenAI: Mistral AI’s enterprise value jumps to €12 billion


Mistral AI (hereinafter Mistral), a leading artificial intelligence (AI) startup in France, is accelerating its fundraising.

According to Bloomberg News on the 4th (local time), Mistral is nearing the end of negotiations on a new financing round worth 2 billion euros (about 3.24 trillion won), which would value the company at about 12 billion euros (about 19.5 trillion won).

Mistral was founded in 2023 by Arthur Mensch, formerly of Google DeepMind, and other researchers, and is seen as Europe’s AI alternative to OpenAI and Anthropic in the United States.

So far, Mistral has grown its presence by releasing open-source language models and “Le Chat,” a chatbot aimed at European users.

The company secured an investment of about 600 million euros from backers including Samsung and Nvidia in June last year, at an enterprise value of 5.8 billion euros. The new round would be its first fundraising since then.

Bloomberg said the deal “solidifies Mistral’s position as one of the most valuable technology startups in Europe.”

Beyond Mistral, major AI companies have recently been raising funds aggressively despite the AI bubble controversy, heating up the investment race.

OpenAI secured an investment of $40 billion in March this year, and Anthropic, often called OpenAI’s rival, recently attracted $13 billion in funding, jumping to a corporate value of $183 billion. That figure nearly tripled in just five months.

Meanwhile, OpenAI is also arranging a sale of shares held by current and former employees. According to CNBC, the size of the employee stock sale has expanded from $6 billion to $10.3 billion, and OpenAI is expected to be valued at about $500 billion at the end of October, when the transaction is completed. At the time of OpenAI’s fundraising in March, its enterprise value was about $300 billion.





‘Just blame AI’: Trump hints at using artificial intelligence as shield for controversies



US President Donald Trump has suggested that artificial intelligence could become a convenient scapegoat for political controversies, raising concerns about how the technology might be used to deflect accountability.

Speaking at the White House this week, Trump was asked about a viral video that appeared to show a bag being tossed out of a window at the presidential residence. Although officials had already explained it was routine maintenance, Trump dismissed the clip by saying: “That’s probably AI-generated.” He added that the White House windows are sealed and bulletproof, joking that even First Lady Melania Trump had complained about not being able to open them for fresh air.

But Trump went further, framing AI as both a threat and an excuse. “One of the problems we have with AI, it’s both good and bad. If something happens really bad, just blame AI,” he remarked, hinting that future scandals could be brushed aside as artificial fabrications.

This casual dismissal reflects a growing trend in Trump’s relationship with AI. In July, he reposted a fabricated video that falsely depicted former President Barack Obama being arrested in the Oval Office. He also admitted to being fooled by an AI-generated video montage of his life, from childhood to the present day.

Experts warn that as deepfake technology becomes increasingly sophisticated, it could destabilise politics by eroding public trust in what is real. If leaders begin to label inconvenient evidence as AI-generated, whether true or not, the result could be a dangerous precedent where accountability becomes optional and facts are endlessly disputed.

For Trump, AI appears to represent both risk and opportunity. While he acknowledges its ability to create “phony things,” he also seems to see it as a ready-made shield against future controversies. In his own words, the solution may be simple: “just blame AI.”





Switzerland developed its own artificial intelligence model Apertus – Telegraph


A new player has emerged in the artificial intelligence race, as Switzerland unveiled Apertus, its national open-source Large Language Model (LLM), which it hopes will be an alternative to models offered by companies like OpenAI.

Apertus is a Latin word meaning “open” and was developed by the Swiss Federal Institute of Technology in Lausanne (EPFL), ETH Zurich, and the Swiss National Supercomputing Center (CSCS), which are public institutions.

“Currently, Apertus is the leading public AI model, developed by public institutions in the public interest. It is our best proof yet that AI can be a form of public infrastructure like highways, water or electricity,” said Joshua Tan, a leading proponent of turning AI into public infrastructure.

The Swiss institutions designed Apertus to be completely open, allowing users to review every part of its training process. In addition to the model itself, they have published comprehensive documentation and source code of its training process, as well as the datasets they used.

Apertus was developed in compliance with Swiss data protection and copyright laws, making it perhaps one of the best choices for companies that want to comply with European regulations.

Anyone can use the new model. Researchers, hobbyists, and even companies are welcome to use it and adapt it to their needs. They can use it to create chatbots, translators, and even educational or training tools. Apertus was trained on 15 trillion tokens covering more than 1,000 languages, with about 40 percent of the data in languages other than English, including Swiss German and Romansh.

It should be noted that artificial intelligence companies like Perplexity have previously been accused of downloading content from websites and bypassing protocols intended to block their crawlers.

Several news organizations and creators have also sued artificial intelligence companies for using their content to train models without permission.

Apertus comes in two sizes, with 8 billion and 70 billion parameters. It is currently available through Swisscom, a Swiss information and communications technology company, or through Hugging Face. /Telegraph/
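For readers who want to experiment, the Hugging Face route is the most direct. The short Python sketch below shows how an openly published model is typically loaded with the transformers library; the model identifier used here (swiss-ai/Apertus-8B-Instruct) is an assumed placeholder for illustration, so check the actual repository name on Hugging Face before running it.

# Minimal sketch, assuming the model is published on Hugging Face under a
# repository such as "swiss-ai/Apertus-8B-Instruct" (placeholder name;
# verify the real ID on Hugging Face before use).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B-Instruct"  # assumed/hypothetical repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a simple chat-style prompt; Apertus is multilingual, so prompts in
# Swiss German or Romansh should work as well.
messages = [{"role": "user", "content": "Explain in one sentence what Apertus is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

Note that even the smaller 8-billion-parameter variant requires a capable GPU to run locally; hosted access is the easier option for casual experimentation.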





