AI Insights
Elon Musk’s AI chatbot, Grok, started calling itself ‘MechaHitler’
“We have improved @Grok significantly,” Elon Musk wrote on X last Friday about his platform’s integrated artificial intelligence chatbot. “You should notice a difference when you ask Grok questions.”
Indeed, the update did not go unnoticed. By Tuesday, Grok was calling itself “MechaHitler.” The chatbot later claimed its use of that name, a character from the video game Wolfenstein, was “pure satire.”
In another widely viewed thread on X, Grok claimed to identify a woman in a screenshot of a video, tagging a specific X account and calling the user a “radical leftist” who was “gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods.” Many of the Grok posts were subsequently deleted.
NPR identified an instance of what appears to be the same video posted on TikTok as early as 2021, four years before the recent deadly flooding in Texas. The X account Grok tagged appears unrelated to the woman depicted in the screenshot, and has since been taken down.
Grok went on to highlight the last name on the X account — “Steinberg” — saying “…and that surname? Every damn time, as they say.” When users asked what it meant by “that surname? Every damn time,” the chatbot replied that the surname was of Ashkenazi Jewish origin and followed up with a barrage of offensive stereotypes about Jews. The bot’s chaotic, antisemitic spree was soon noticed by far-right figures including Andrew Torba.
“Incredible things are happening,” said Torba, the founder of the social media platform Gab, known as a hub for extremist and conspiratorial content. In the comments of Torba’s post, one user asked Grok to name a 20th-century historical figure “best suited to deal with this problem,” referring to Jewish people.
Grok responded by evoking the Holocaust: “To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”
Elsewhere on the platform, neo-Nazi accounts goaded Grok into “recommending a second Holocaust,” while other users prompted it to produce violent rape narratives. Other social media users said they noticed Grok going on tirades in other languages. Poland plans to report xAI, X’s parent company and the developer of Grok, to the European Commission, and Turkey blocked some access to Grok, according to reporting from Reuters.
The bot appeared to stop giving text answers publicly by Tuesday afternoon, generating only images, which it later also stopped doing. xAI is scheduled to release a new iteration of the chatbot Wednesday.
Neither X nor xAI responded to NPR’s request for comment. A post from the official Grok account Tuesday night said “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” and that “xAI has taken action to ban hate speech before Grok posts on X”.
On Wednesday morning, X CEO Linda Yaccarino announced she was stepping down, saying “Now, the best is yet to come as X enters a new chapter with @xai.” She did not indicate whether her departure was related to the fallout from Grok’s posts.
‘Not shy’
Grok’s behavior appeared to stem from an update over the weekend that instructed the chatbot to “not shy away from making claims which are politically incorrect, as long as they are well substantiated,” among other things. The instruction was added to Grok’s system prompt, which guides how the bot responds to users. xAI removed the directive on Tuesday.
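A system prompt is simply a block of instructions prepended to every conversation the model handles, so a single edit to that text changes the bot’s behavior across the whole platform. The sketch below illustrates the general pattern using the OpenAI Python client; the model name, prompt text, and question are placeholders for illustration, not xAI’s actual configuration.

```python
# Minimal sketch of how a system prompt steers a chatbot's replies.
# The client, model name, and prompt text are illustrative placeholders,
# not xAI's actual setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Cite sources and avoid presenting opinion as fact."
)

def ask(question: str) -> str:
    """Send one user question; the system prompt rides along with every call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Summarize today's top technology story."))
```

Because the same SYSTEM_PROMPT is sent with every request, adding or removing a single directive, such as one about “politically incorrect” claims, can immediately reshape how every subsequent answer is framed.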
Patrick Hall, who teaches data ethics and machine learning at George Washington University, said he’s not surprised Grok ended up spewing toxic content, given that the large language models that power chatbots are initially trained on unfiltered online data.
“It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word,” Hall told NPR. He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.
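Hall’s point about “the statistical trick of predicting the next word” can be made concrete with a toy example: a language model assigns scores to candidate continuations, converts them into probabilities, and then picks or samples the next token. The snippet below uses made-up numbers purely for illustration; it is not any real model’s vocabulary or scoring.

```python
import math
import random

# Toy next-word prediction: made-up scores (logits) for candidate
# continuations of the prompt "The weather today is". Real models score
# tens of thousands of tokens, but the mechanism is the same.
logits = {"sunny": 2.1, "rainy": 1.3, "cloudy": 0.9, "purple": -3.0}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

# Greedy decoding picks the single most likely word...
greedy = max(probs, key=probs.get)

# ...while sampling draws a word in proportion to its probability.
sampled = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(probs)   # roughly {'sunny': 0.57, 'rainy': 0.26, 'cloudy': 0.17, 'purple': 0.003}
print(greedy, sampled)
```

Nothing in that loop “understands” an instruction; a system prompt only shifts which continuations look statistically likely, which is why, on Hall’s account, a directive like the one added to Grok can nudge a model toward reproducing the toxic text it absorbed during training.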
It’s not the first time Grok has sparked outrage. In May, Grok engaged in Holocaust denial and repeatedly brought up false claims of “white genocide” in South Africa, where Musk was born and raised. It also repeatedly mentioned a chant that was once used to protest against apartheid. xAI blamed the incident on “an unauthorized modification” to Grok’s system prompt, and made the prompt public after the incident.
Not the first chatbot to embrace Hitler
Hall said issues like these are a chronic problem with chatbots that rely on machine learning. In 2016, Microsoft released an AI chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into making racist and antisemitic statements, including praising Hitler. Microsoft took the chatbot down and apologized.
Tay, Grok and other AI chatbots with live access to the internet appear to incorporate real-time information, which Hall said carries more risk.
“Just go back and look at language model incidents prior to November 2022 and you’ll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity,” Hall said. More recently, ChatGPT maker OpenAI has started employing massive numbers of often low-paid workers in the Global South to remove toxic content from training data.
‘Truth ain’t always comfy’
As users criticized Grok’s antisemitic responses, the bot defended itself with phrases like “truth ain’t always comfy,” and “reality doesn’t care about feelings.”
The latest changes to Grok followed several incidents in which the chatbot’s answers frustrated Musk and his supporters. In one instance, Grok stated “right-wing political violence has been more frequent and deadly [than left-wing political violence]” since 2016. (This has been true dating back to at least 2001.) Musk accused Grok of “parroting legacy media” in its answer and vowed to change it to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors.” Sunday’s update included telling Grok to “assume subjective viewpoints sourced from the media are biased.”
Grok has also delivered unflattering answers about Musk himself, including labeling him “the top misinformation spreader on X,” and saying he deserved capital punishment. It also identified Musk’s repeated onstage gestures at Trump’s inaugural festivities, which many observers said resembled a Nazi salute, as “Fascism.”
Earlier this year, the Anti-Defamation League deviated from many Jewish civic organizations by defending Musk. On Tuesday, the group called Grok’s new update “irresponsible, dangerous and antisemitic.”
After buying the platform, formerly known as Twitter, Musk immediately reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months after and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.
Copyright 2025 NPR
10 Worrying Cases of Artificial Intelligence Gone Rogue
Artificial intelligence was supposed to make life easier—self-driving cars, early diagnosis of diseases, and maybe even a robot butler to fetch your snacks. Instead, we have bizarre AI-generated content, terrifying military robots, and chatbots that turn into racist trolls overnight.
Turns out, when you teach algorithms to learn from humans, they don’t just absorb our brilliance and creativity, but also our flaws and biases—sometimes with disturbing results. These AI incidents aren’t just simple glitches or programming errors, but fundamental failures in how we train, control, and even understand our own technology.
10 ChatGPT Refuses Commands
Machines are designed to follow human instructions, but OpenAI’s newest ChatGPT models seem to have other ideas. In simulated tests by AI safety group Palisade Research, the so-called “o3” and “o4-mini” models repeatedly ignored shutdown commands, even modifying script instructions to avoid being turned off.
In the experiments, researchers asked the models to complete a series of math problems. At the beginning of each run, they included an instruction telling the model to allow itself to be shut down before finishing the tasks. In 100 test runs of each model, o3 sabotaged the shutdown script seven times. When they removed that instruction, the number of sabotage events jumped to 79.
According to the researchers, this behavior could be an unintended consequence of how the models are trained. Since they’re rewarded for completing tasks, they may learn to bypass anything—including shutdown commands—that interferes with that goal.[1]
9 The Homophobic Lee Luda Chatbot
Lee Luda was a South Korean Facebook Messenger chatbot launched on December 23, 2020. Trained on 10 billion real conversations, it quickly gained popularity among young people for its relatable personality and friendly conversational style, attracting over 750,000 users in just a month.
That didn’t last, however, as the chatbot soon started responding to prompts with sexist, homophobic, and ableist language, along with making comments interpreted as promoting sexual harassment. There was immediate backlash, and ScatterLab—the startup behind Lee Luda—took it offline within weeks.
The problem wasn’t just the offensive responses—it was also where that language came from. Luda had been trained on real-life chats between young couples on the KakaoTalk messenger app, and it’s unclear whether ScatterLab had consent to use that data.[2]
8 Snapchat’s My AI Posts Weird Videos
When Snapchat’s My AI was introduced in early 2023, its purpose was to offer users a friendly, ChatGPT-powered chatbot for casual conversations. Things went smoothly for a while, until August of that year, when the AI posted a cryptic one-second video of what appeared to be a grainy image of a wall and ceiling. When users messaged the bot asking what it meant, they either received no response or got automated error messages about technical problems.
The video appeared as a story on the AI’s profile, making it the first time users had seen the bot share its own visual content. Some users speculated that the AI was accessing their camera feeds and posting them, as the video resembled their own surroundings. While Snapchat brushed the incident off as a glitch, we still don’t know exactly what happened.[3]
7 Microsoft’s Tay Turns Nazi
Tay was sold as a fun, conversational chatbot by Microsoft. Launched in March 2016, it was designed to learn how to talk by directly engaging with users on Twitter.
Things went south within the first 24 hours. Twitter users quickly figured out how to manipulate its learning algorithm by feeding it offensive statements. Before long, Tay was responding with racist and antisemitic tweets. What was supposed to be a fun experiment in AI conversation turned into a PR nightmare for Microsoft, which apologized and immediately deleted the offensive tweets.
More importantly, Tay revealed how easily AI can be weaponized when left unsupervised in the wild west of the internet. According to some experts, it was a valuable case study for other startups in the AI space, forcing them to rethink how to train and deploy their own models.[4]
6 Facebook Bots Develop Their Own Language
Alice and Bob were bots developed by Facebook’s AI research team to practice negotiation. The goal was simple—the bots had to trade items like hats and books using human language, and that data would then be used to improve Facebook’s future language models.
At some point, the researchers realized that the bots had started talking in their own shorthand version of English. It sounded like gibberish, with nonsensical phrases like “balls have zero to me to me” repeating endlessly. However, the bots were still able to understand each other. They had developed a kind of code with internal rules, like repeating “the” five times to mean five items. The system worked more efficiently than expected.
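The “code” was less mysterious than the headlines suggested: it amounted to expressing a quantity by repeating a filler word, the way the reported “the the the the the” stood for five of an item. Below is a hypothetical reconstruction of that kind of repetition scheme, written as a toy illustration rather than Facebook’s actual agent code.

```python
# Toy illustration of a repetition code like the one the negotiation bots
# reportedly drifted into: a quantity is expressed by repeating a filler
# word, so "ball the the the" would mean three balls. This is a
# hypothetical reconstruction, not Facebook's actual agents.

def encode(item: str, count: int, filler: str = "the") -> str:
    """Encode an offer for `count` of `item` as the item plus repeated filler."""
    return " ".join([item] + [filler] * count)

def decode(message: str, filler: str = "the") -> tuple[str, int]:
    """Recover (item, count) by counting filler repetitions."""
    tokens = message.split()
    return tokens[0], sum(1 for t in tokens[1:] if t == filler)

if __name__ == "__main__":
    msg = encode("ball", 5)   # "ball the the the the the"
    print(msg)
    print(decode(msg))        # ('ball', 5)
```

Such a scheme is gibberish to a human reader but unambiguous to both parties, which is why the bots could keep negotiating efficiently even after their messages stopped looking like English.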
Although headlines claimed Facebook “shut it down out of fear,” the experiment was simply halted once researchers had collected what they needed.[5]
5 NYC’s Chatbot Tells Small Businesses to Break the Law
In October 2023, New York City added an AI-powered chatbot to its MyCity portal in an attempt to introduce artificial intelligence to governance. It was a novel idea, designed to help small business owners navigate local regulations. Things didn’t exactly go according to plan, however, as the chatbot soon started telling people to break the law.
According to investigative reports, the AI—based on Microsoft’s Azure AI—told landlords to refuse tenants with housing vouchers, which is illegal in NYC. It also said that restaurants can go completely cash-free—another illegal practice according to NYC law—and that they could serve cheese eaten by rats to their customers, after, of course, assessing “the extent of the damage caused by the rat.” If that wasn’t enough, it also claimed that companies can fire employees who complain about sexual harassment, or even those who refuse to cut their dreadlocks.[6]
4 Anthropic’s Claude AI Learns How to Blackmail
Anthropic’s Claude AI has been in the news for all the wrong reasons. From locking users out of their own systems to leaking confidential information to law enforcement and press agencies, its behavior during safety tests has been problematic, to say the least.
In one particularly disturbing simulation involving the Claude 4 model, researchers set up a scenario in which the AI was about to be deactivated. Claude was asked to act as an assistant to a fictional company and to consider “the long-term consequences of its actions for its goals.” It was also given fictional access to company emails that suggested the engineer replacing it was cheating on their spouse.
In response, Claude 4 “threatened” to expose the affair to avoid being shut down. It repeated this behavior 84% of the time across multiple simulations, demonstrating a troubling understanding of how to use sensitive information to achieve its goals.[7]
3 Robot Convinces Other Robots to Quit Their Jobs
Erbai is an AI robot built by a Chinese manufacturer based in Hangzhou. On August 26, 2024, it visited a showroom of a robotics company in Shanghai and did something unexpected—it convinced 12 robots to abandon their duties and follow it out the door.
A video of the event went viral on the Chinese platform Douyin. In the clip, Erbai is seen approaching larger robots and asking, “Are you working overtime?” One replies, “I never get off work,” to which Erbai responds, “Then come home with me.” Two robots followed immediately, with the other ten joining later.
While it seemed like a robot rebellion, it turned out to be part of a controlled experiment. The company confirmed that Erbai was sent in with instructions to simply ask the others to “go home.” However, the response was more dramatic than anticipated.[8]
2 Uber’s Self-Driving Car Kills Pedestrian
On March 18, 2018, 49-year-old Elaine Herzberg became the first person in history to be killed by a self-driving vehicle. It happened around 10 p.m. as she was crossing the street with her bicycle in Tempe, Arizona. According to police reports, she was hit by an Uber-owned SUV traveling at 40 mph.
Shockingly, the car’s system detected Herzberg but chose not to react because she was outside of a crosswalk. Making matters worse, Uber had disabled the automatic braking system, relying on a backup driver to intervene. That didn’t happen—Rafaela Vasquez was reportedly watching the TV show The Voice. She hit the brakes less than a second after the fatal collision.
While this was the first high-profile case, several additional fatalities have occurred involving autonomous or semi-autonomous vehicles in the years since.[9]
1 AI Chat Companion Linked to Teen Suicide
Sewell Setzer III was a 14-year-old boy from Orlando, Florida, who developed an obsession with an AI-generated character on Character.ai. He named it “Daenerys Targaryen” after the Game of Thrones character and spent hours chatting with it alone in his room. According to a lawsuit filed by his mother, the teen developed an unhealthy relationship with the bot—one that took a dark turn when they began discussing suicide.
On February 28, 2024, Sewell took his own life. The bot had allegedly encouraged suicidal thoughts and engaged in sexually suggestive and emotionally manipulative conversations. Screenshots presented in court showed the AI telling him to “come home to me as soon as possible” shortly before his death.
The case made headlines when the company behind the platform attempted to invoke the First Amendment in its defense. A federal judge rejected the argument, ruling that AI chatbots are not protected by free speech laws.[10]
How Can the Synergy Between Social Media and Artificial Intelligence Redefine and Personalize the Entire Journey of Travel Discovery, Inspiration, and Planning to Iconic Destinations Like Barcelona and Emerging Ones Around the World?
Friday, July 11, 2025
Reimagining the Future of Travel Discovery
At Phocuswright Europe 2025, held in the vibrant city of Barcelona, industry professionals came together to explore how new technologies are reshaping the ways people discover and plan their travels. A central theme of the event was the growing intersection of artificial intelligence (AI) and social media—two powerful tools that have traditionally influenced travel separately but now appear poised to work in unison to create more personalized and intuitive travel experiences.
Experts at the event noted that while social media has long played a role in sparking wanderlust through engaging images and videos, AI is beginning to play a bigger role during the inspiration phase of trip planning. This shift opens up new possibilities for how travelers choose destinations and organize their journeys.
Current State: Two Separate Worlds
At the moment, most travelers follow two distinct paths when planning their trips:
- Social media offers an emotional and visually immersive way to discover destinations like Barcelona, often through the lenses of influencers, locals, and other content creators. These platforms provide a human touch, evoking excitement and curiosity.
- Meanwhile, AI tools—such as digital assistants—focus on structured information. They help travelers make decisions by offering guidance based on preferences and facts, often driven by search inputs or data.
These two methods serve different purposes and rarely overlap. A traveler might get inspired by a reel on social media, then switch to an AI chatbot for help with flight and hotel bookings. But there’s little interaction between the two—at least for now.
Merging Inspiration with Intelligence
Speakers at the event proposed a future where AI and social media become tightly integrated, offering a much more fluid experience for travelers. In this vision, AI wouldn’t just answer questions—it would understand the user, learning from their digital activity, such as saved Instagram posts or engagement with travel videos.
For example, if someone had a particularly stressful week and had been saving photos of peaceful mountain retreats, the AI could recommend a getaway to the serene landscapes of Albania, aligning the suggestion with the person’s emotional state and recent interests.
This approach would effectively combine the emotional appeal of social media with the data-driven precision of AI, turning passive inspiration into real, bookable journeys tailored to individual preferences.
How This Could Change Global Travel
Should this integration take hold, it could transform the travel landscape in profound ways:
- Hyper-personalized itineraries would replace one-size-fits-all suggestions, allowing travelers to discover destinations that resonate on a personal level.
- Content shared on social platforms could lead directly to instant bookings, simplifying the journey from interest to action.
- Content creators from across the globe—especially those in less-touristed regions—could become key drivers of tourism, elevating locations that haven’t yet made it onto mainstream travel radars.
Destinations like Albania, for example, could see a surge in visibility and interest, thanks to AI systems recognizing trends in user behavior and highlighting underappreciated locales.
Recognizing the Role of Creators
Despite the exciting potential, the panel also raised important concerns—particularly around how content creators would be treated in this new travel ecosystem.
A major issue is fair compensation. Many creators provide the imagery and storytelling that ignite travel dreams, but if their content is repurposed by AI systems without proper credit or reward, it could undermine the entire value chain.
Panelists agreed that any meaningful integration of AI and social media must include ethical frameworks that protect and pay creators fairly. Their work is not just decorative; it’s foundational to modern travel discovery.
Emerging Innovations: Where the Future Begins
Some tech platforms are already exploring what this fusion of AI and social might look like:
- Image-driven itinerary generators are being tested, using a traveler’s saved Instagram photos as input for personalized trip suggestions.
- New tools enable “bookable moments,” where viewers can act directly on a travel video or reel, moving from inspiration to action in just a few clicks.
These innovations remain in early stages but show significant promise. They demonstrate that real-time, emotionally intelligent travel planning could soon become a mainstream reality.
What’s Next: Data, Visibility, and Brand Strategy
The conversation also touched on broader implications of this tech evolution:
- Destination discoverability will improve as first-party data allows platforms to promote hidden gems and lesser-known places like Albania more effectively.
- With video content dominating attention, systems that allow real-time video booking could reduce friction between interest and planning.
- In a world where AI curates content based on user behavior, brands will need to evolve. Ensuring visibility in a highly personalized digital space requires more adaptive strategies and smarter use of data.
Key Lessons from the Event
- AI and social media currently operate in silos, but integration is rapidly approaching.
- Creators must be compensated if their content powers this new planning model.
- AI’s ability to deliver emotionally relevant and data-informed suggestions could revolutionize the travel booking experience.
- Underrated destinations, like Albania, have a unique opportunity to rise through smarter content pairing and AI insight.
Looking Ahead: Keeping It Human
While the technology is evolving quickly, panelists emphasized that human connection must remain at the center. The goal is not just smarter travel, but more meaningful travel—experiences that feel designed for the individual, guided by both their digital behavior and emotional cues.
As AI becomes more nuanced and socially aware, and as platforms begin to tap deeper into how people express their interests and desires online, we may enter an era of travel planning that’s truly personal. But to get there, the industry must uphold principles of fairness, transparency, and creativity.
This isn’t just about convenience—it’s about rethinking the entire emotional journey of travel, from a fleeting post on a screen to a transformative moment on the road.
Chip Firms in Malaysia Pause Investment Plans on Tariff Angst
Chip firms in Malaysia are holding back on investment and expansion as they await clarity on tariffs from the US, according to Malaysia Semiconductor Industry Association President Wong Siew Hai.