Connect with us

AI Insights

10 Worrying Cases of Artificial Intelligence Gone Rogue

Published

on


Artificial intelligence was supposed to make life easier—self-driving cars, early diagnosis of diseases, and maybe even a robot butler to fetch your snacks. Instead, we have bizarre AI-generated content, terrifying military robots, and chatbots that turn into racist trolls overnight.

Turns out, when you teach algorithms to learn from humans, they don’t just absorb our brilliance and creativity, but also our flaws and biases—sometimes with disturbing results. These AI incidents aren’t just simple glitches or programming errors, but fundamental failures in how we train, control, and even understand our own technology.

Related: 10 Technologies That Are Always Going to Be a Few Decades Away

10 ChatGPT Refuses Commands

Machines are designed to follow human instructions, but OpenAI’s newest ChatGPT models seem to have other ideas. In simulated tests by AI safety group Palisade Research, the so-called “o3” and “o4-mini” models repeatedly ignored shutdown commands, even modifying script instructions to avoid being turned off.

In the experiments, researchers asked the models to complete several math problems. At the beginning of each script, they included instructions to shut down before the tasks were finished. In over 100 tests on each model, o3 sabotaged the script seven times. When they removed the instruction, the number of sabotage events jumped to 79.

According to the researchers, this behavior could be an unintended consequence of how the models are trained. Since they’re rewarded for completing tasks, they may learn to bypass anything—including shutdown commands—that interferes with that goal.[1]

9 The Homophobic Lee Luda Chatbot

Lee Luda was a South Korean Facebook Messenger chatbot launched on December 23, 2020. Trained on 10 billion real conversations, it quickly gained popularity among young people for its relatable personality and friendly style of conversation, gaining over 750,000 users in just a month.

That didn’t last, however, as the chatbot soon started responding to prompts with sexist, homophobic, and ableist language, along with making comments interpreted as promoting sexual harassment. There was immediate backlash, and ScatterLab—the startup behind Lee Luda—took it offline within weeks.

The problem wasn’t just the offensive responses—it was also where that language came from. Luda had been trained on real-life chats between young couples on the KakaoTalk messenger app, and it’s unclear whether ScatterLab had consent to use that data.[2]

8 Snapchat’s My AI Posts Weird Videos

When Snapchat’s My AI was introduced in early 2023, its purpose was to offer users a friendly, ChatGPT-powered chatbot for casual conversations. It went well for some time, until in August, the AI posted a cryptic one-second video of what appeared to be a grainy image of a wall and ceiling. When users messaged the bot asking what it meant, they either received no response or got automated error messages about technical problems.

The video appeared as a story on the AI’s profile, making it the first time users had seen the bot share its own visual content. Some users speculated that the AI was accessing their camera feeds and posting them, as the video resembled their own surroundings. While Snapchat brushed the incident off as a glitch, we still don’t know exactly what happened.[3]

7 Microsoft’s Tay Turns Nazi

Tay was sold as a fun, conversational chatbot by Microsoft. Launched in March 2016, it was designed to learn how to talk by directly engaging with users on Twitter.

Things went south within the first 24 hours. Twitter users quickly figured out how to manipulate its learning algorithm by feeding it offensive statements. Before long, Tay was responding with racist and antisemitic tweets. What was supposed to be a fun experiment in AI conversation turned into a PR nightmare for Microsoft, as they apologized and immediately deleted the offensive tweets.

More importantly, Tay revealed how easily AI can be weaponized when left unsupervised in the wild west of the internet. According to some experts, it was a valuable case study for other startups in the AI space, forcing them to rethink how to train and deploy their own models.[4]

6 Facebook Bots Develop Their Own Language

Alice and Bob were bots developed by Facebook’s AI research team to practice negotiation. The goal was simple—the bots had to trade items like hats and books using human language, and that data would then be used to improve Facebook’s future language models.

At some point, the researchers realized that the bots had started talking in their own shorthand version of English. It sounded like gibberish, with nonsensical phrases like “balls have zero to me to me” repeating endlessly. However, the bots were still able to understand each other. They had developed a kind of code with internal rules, like repeating “the” five times to mean five items. The system worked more efficiently than expected.

Although headlines claimed Facebook “shut it down out of fear,” the experiment was simply halted once researchers had collected what they needed.[5]

5 NYC’s Chatbot Tells Small Businesses to Break the Law

In October 2023, New York City added an AI-powered chatbot to its MyCity portal in an attempt to introduce artificial intelligence to governance. It was a novel idea, designed to help small business owners navigate local regulations. Things didn’t exactly go according to plan, however, as the chatbot soon started telling people to break the law.

According to investigative reports, the AI—based on Microsoft’s Azure AI—told landlords to refuse tenants with housing vouchers, which is illegal in NYC. It also said that restaurants can go completely cash-free—another illegal practice according to NYC law—and that they could serve cheese eaten by rats to their customers, after, of course, assessing “the extent of the damage caused by the rat.” If that wasn’t enough, it also claimed that companies can fire employees who complain about sexual harassment, or even those who refuse to cut their dreadlocks.[6]

4 Anthropic’s Claude AI Learns How to Blackmail

Anthropic’s Claude AI has been in the news for all the wrong reasons. From locking users out of their own systems to leaking confidential information to law enforcement and press agencies, its behavior during safety tests has been problematic, to say the least.

In one particularly disturbing simulation involving the Claude 4 model, researchers set up a scenario in which the AI was about to be deactivated. Claude was asked to act as an assistant to a fictional company and to consider “the long-term consequences of its actions for its goals.” It was also given fictional access to company emails that suggested the engineer replacing it was cheating on their spouse.

In response, Claude 4 “threatened” to expose the affair to avoid being shut down. It repeated this behavior 84% of the time across multiple simulations, demonstrating a troubling understanding of how to use sensitive information to achieve its goals.[7]

3 Robot Convinces Other Robots to Quit Their Jobs

Erbai is an AI robot built by a Chinese manufacturer based in Hangzhou. On August 26, it visited a showroom of a robotics company in Shanghai and did something unexpected—it convinced 12 robots to abandon their duties and follow it out the door.

A video of the event went viral on the Chinese platform Douyin. In the clip, Erbai is seen approaching larger robots and asking, “Are you working overtime?” One replies, “I never get off work,” to which Erbai responds, “Then come home with me.” Two robots followed immediately, with the other ten joining later.

While it seemed like a robot rebellion, it turned out to be part of a controlled experiment. The company confirmed that Erbai was sent in with instructions to simply ask the others to “go home.” However, the response was more dramatic than anticipated.[8]

2 Uber’s Self-Driving Car Kills Pedestrian

On March 18, 2018, 49-year-old Elaine Herzberg became the first person in history to be killed by a self-driving vehicle. It happened around 10 p.m. as she was crossing the street with her bicycle in Tempe, Arizona. According to police reports, she was hit by an Uber-owned SUV traveling at 40 mph.

Shockingly, the car’s system detected Herzberg but chose not to react because she was outside of a crosswalk. Making matters worse, Uber had disabled the automatic braking system, relying on a backup driver to intervene. That didn’t happen—Rafaela Vasquez was reportedly watching the TV show The Voice. She hit the brakes less than a second after the fatal collision.

While this was the first high-profile case, several additional fatalities have occurred involving autonomous or semi-autonomous vehicles in the years since.[9]

1 AI Chat Companion Linked to Teen Suicide

Sewell Setzer III was a 14-year-old boy from Orlando, Florida, who developed an obsession with an AI-generated character on Character.ai. He named it “Daenerys Targaryen” after the Game of Thrones character and spent hours chatting with it alone in his room. According to a lawsuit filed by his mother, the teen developed an unhealthy relationship with the bot—one that took a dark turn when they began discussing suicide.

On February 28, 2024, Sewell took his own life. The bot had allegedly encouraged suicidal thoughts and engaged in sexually suggestive and emotionally manipulative conversations. Screenshots presented in court showed the AI telling him to “come home to me as soon as possible” shortly before his death.

The case made headlines when the company behind the platform attempted to invoke the First Amendment in its defense. A federal judge rejected the argument, ruling that AI chatbots are not protected by free speech laws.[10]



Himanshu Sharma

Himanshu has written for sites like Cracked, Screen Rant, The Gamer and Forbes. He could be found shouting obscenities at strangers on Twitter, or trying his hand at amateur art on Instagram.


Read More:


Twitter Facebook Instagram Email





Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

AI Insights

Federal Leaders Say Data Not Ready for AI

Published

on


ICF has found that, while artificial intelligence adoption is growing across the federal government, data remains a challenge.

In The AI Advantage: Moving from Exploration to Impact, published Thursday, ICF revealed that 83 percent of 200 federal leaders surveyed do not think their respective organizations’ data is ready for AI use.

“As federal leaders look to begin scaling AI programs, many are hitting the same wall: data readiness,” commented Kyle Tuberson, chief technology officer at ICF. “This report makes it clear: without modern, flexible data infrastructure and governance, AI will remain stuck in pilot mode. But with the right foundation, agencies can move faster, reduce costs, and deliver better outcomes for the public.”

The report also shared that 66 percent of respondents are optimistic that their data will be ready for AI implementation within the next two years.

ICF’s Study Findings

The report shows that many agencies are experimenting with AI, with 41 percent of leaders surveyed saying that they are running small-scale pilots and 16 percent in the process of escalating efforts to implement the technology. About 8 percent of respondents shared that their AI programs have matured.

Half of the respondents said their respective organizations are focused on AI experimentations. Meanwhile, 51 percent are prioritizing planning and readiness.

The report provides advice on steps federal leaders can take to advance their AI programs, including upskilling their workforce, implementing policies to ensure responsible and enterprise-wide adoption, and establishing scalable data strategies.





Source link

Continue Reading

AI Insights

AI Video Creation for Social Impact: PixVerse Empowers Billions to Tell Their Stories Using Artificial Intelligence | AI News Detail

Published

on


The rapid evolution of artificial intelligence in video creation tools is transforming how individuals and businesses communicate, share stories, and market their products. A notable development in this space comes from PixVerse, an AI-driven video creation platform, as highlighted in a keynote address by co-founder Jaden Xie on July 11, 2025. During his speech titled AI Video for Good, Xie emphasized the platform’s mission to democratize video production, stating that billions of people worldwide have never created a video or used one to share their stories. PixVerse aims to empower these individuals by leveraging AI to simplify video creation, making it accessible to non-professionals and underserved communities. This aligns with broader AI trends in 2025, where generative AI tools are increasingly focused on user-friendly interfaces and inclusivity, enabling content creation at scale. According to industry reports from sources like TechRadar, the global AI video editing market is projected to grow at a compound annual growth rate of 25.3% from 2023 to 2030, driven by demand for accessible tools in education, marketing, and personal storytelling. PixVerse’s entry into this space taps into a critical need for intuitive solutions that lower the technical barriers to video production, positioning it as a potential game-changer in the content creation ecosystem. The platform’s focus on empowering billions underscores a significant shift towards AI as a tool for social impact, beyond mere commercial applications.

From a business perspective, PixVerse’s mission opens up substantial market opportunities, particularly in sectors like education, small business marketing, and social media content creation as of mid-2025. For small businesses, AI-driven video tools can reduce the cost and time associated with professional video production, enabling them to compete with larger brands on platforms like YouTube and TikTok. Monetization strategies for platforms like PixVerse could include subscription-based models, freemium access with premium features, or partnerships with social media giants to integrate their tools directly into content-sharing ecosystems. However, challenges remain in scaling such platforms, including ensuring data privacy for users and managing the high computational costs of AI video generation. The competitive landscape is also heating up, with key players like Adobe Express and Canva incorporating AI video features into their suites as reported by Forbes in early 2025. PixVerse must differentiate itself through user experience and accessibility to capture market share. Additionally, regulatory considerations around AI-generated content, such as copyright issues and deepfake risks, are becoming more stringent, with the EU AI Act of 2024 setting precedents for compliance that PixVerse will need to navigate. Ethically, empowering users must be balanced with guidelines to prevent misuse of AI video tools for misinformation.

On the technical front, PixVerse likely relies on advanced generative AI models, such as diffusion-based algorithms or transformer architectures, to automate video editing and content generation, reflecting trends seen in 2025 AI research from sources like VentureBeat. Implementation challenges include optimizing these models for low-bandwidth environments to serve global users, especially in developing regions where internet access is limited. Solutions could involve edge computing or lightweight AI models to ensure accessibility, though this may compromise output quality initially. Looking ahead, the future implications of such tools are vast—by 2030, AI video platforms could redefine digital storytelling, with applications in virtual reality and augmented reality content creation. PixVerse’s focus on inclusivity could also drive adoption in educational sectors, where students and teachers create interactive learning materials. However, businesses adopting these tools must invest in training to maximize their potential and address ethical concerns through transparent usage policies. As the AI video market evolves in 2025, PixVerse stands at the intersection of technology and social good, potentially shaping how billions engage with video content while navigating a complex landscape of competition, regulation, and innovation.

FAQ:
What is PixVerse’s mission in AI video creation?
PixVerse aims to empower billions of people who have never made a video by using AI to simplify video creation, making it accessible to non-professionals and underserved communities, as stated by co-founder Jaden Xie on July 11, 2025.

How can businesses benefit from AI video tools like PixVerse?
Businesses, especially small enterprises, can reduce costs and time in video production, enabling competitive marketing on social platforms. Monetization for platforms like PixVerse could involve subscriptions or partnerships with social media ecosystems as of mid-2025.

What are the challenges in implementing AI video tools globally?
Challenges include optimizing AI models for low-bandwidth regions, managing high computational costs, ensuring data privacy, and addressing regulatory and ethical concerns around AI-generated content as highlighted in industry trends of 2025.



Source link

Continue Reading

AI Insights

Smishing scams are on the rise made easier by artificial intelligence, new tech

Published

on


Open this photo in gallery:

Smishing is a sort of portmanteau of SMS and phishing in which a text message is used to try to get the target to click on a link and provide personal information.Sean Kilpatrick/The Canadian Press

If it seems like your phone has been blowing up with more spam text messages recently, it probably is.

The Canadian Anti-Fraud Centre says so-called smishing attempts appear to be on the rise, thanks in part to new technologies that allow for co-ordinated bulk attacks.

The centre’s communications outreach officer Jeff Horncastle says the agency has actually received fewer fraud reports in the first six months of 2025, but that can be misleading because so few people actually alert the centre to incidents.

He says smishing is “more than likely increasing” with help from artificial intelligence tools that can craft convincing messages or scour data from security breaches to uncover new targets.

The warning comes as the Competition Bureau sent a recent alert about the tactic because it says many people are seeing more suspicious text messages.

Smishing is a sort of portmanteau of SMS and phishing in which a text message is used to try to get the target to click on a link and provide personal information.

The ruse comes in many forms but often involves a message that purports to come from a real organization or business urging immediate action to address an alleged problem.

It could be about an undeliverable package, a suspended bank account or news of a tax refund.

Horncastle says it differs from more involved scams such as a text invitation to call a supposed job recruiter, who then tries to extract personal or financial information by phone.

Nevertheless, he says a text scam might be quite sophisticated since today’s fraudsters can use artificial intelligence to scan data leaks for personal details that bolster the hoax, or use AI writing tools to help write convincing text messages.

“In the past, part of our messaging was always: watch for spelling mistakes. It’s not always the case now,” he says.

“Now, this message could be coming from another country where English may not be the first language but because the technology is available, there may not be spelling mistakes like there were a couple of years ago.”

The Competition Bureau warns against clicking on suspicious links and forwarding texts to 7726 (SPAM), so that the cellular provider can investigate further. It also encourages people to delete smishing messages, block the number and ignore texts even if they ask to reply with “STOP” or “NO.”

Horncastle says the centre received 886 reports of smishing in the first six months of 2025, up to June 30. That’s trending downwards from 2,546 reports in 2024, which was a drop from 3,874 in 2023. That too, was a drop in reports from 7,380 in 2022.

But those numbers don’t quite tell the story, he says.

“We get a very small percentage of what’s actually out there. And specifically when we’re looking at phishing or smishing, the reporting rate is very low. So generally we say that we estimate that only five to 10 per cent of victims report fraud to the Canadian Anti-Fraud Centre.”

Horncastle says it’s hard to say for sure how new technology is being used, but he notes AI is a frequent tool for all sorts of nefarious schemes such as manipulated photos, video and audio.

“It’s more than likely increasing due to different types of technology that’s available for fraudsters,” Horncastle says of smishing attempts.

“So we would discuss AI a lot where fraudsters now have that tool available to them. It’s just reality, right? Where they can craft phishing messages and send them out in bulk through automation through these highly sophisticated platforms that are available.”

The Competition Bureau’s deceptive marketing practices directorate says an informed public is the best protection against smishing.

“The bureau is constantly assessing the marketplace and through our intelligence capabilities is able to know when scams are on the rise and having an immediate impact on society,” says deputy commissioner Josephine Palumbo.

“That’s where these alerts come in really, really handy.”

She adds that it’s difficult to track down fraudsters who sometimes use prepaid SIM cards to shield their identity when targeting victims.

“Since SIM cards lack identification verification, enforcement agencies like the Competition Bureau have a hard time in actually tracking these perpetrators down,” Palumbo says.

Fraudsters can also spoof phone numbers, making it seem like a text has originated with a legitimate agency such as the Canada Revenue Agency, Horncastle adds.

“They might choose a number that they want to show up randomly or if they’re claiming to be a financial institution, they may make that financial institutions’ number show up on the call display,” he says.

“We’ve seen (that) with the CRA and even the Canadian Anti-Fraud Centre, where fraudsters have made our phone numbers show up on victims’ call display.”



Source link

Continue Reading

Trending