
Business

Grok’s antisemitic outbursts reflect a problem with AI chatbots


A version of this story appeared in the CNN Business Nightcap newsletter. To get it in your inbox, sign up for free here.


New York (CNN) —

Grok, the chatbot created by Elon Musk’s xAI, began responding with violent posts this week after the company tweaked its system to allow it to offer users more “politically incorrect” answers.

The chatbot didn’t just spew antisemitic hate posts, though. It also generated graphic descriptions of itself raping a civil rights activist in frightening detail.

X eventually deleted many of the obscene posts. Hours later, on Wednesday, X CEO Linda Yaccarino resigned from the company after just two years at the helm, though it wasn’t immediately clear whether her departure was related to the Grok issue.

But the chatbot’s meltdown raised important questions: As tech evangelists and others predict AI will play a bigger role in the job market, economy and even the world, how could such a prominent piece of artificial intelligence have gone so wrong so fast?

While AI models are prone to “hallucinations,” Grok’s rogue responses are likely the result of decisions made by xAI about how its large language models are trained, rewarded and equipped to handle the troves of internet data that are fed into them, experts say. While the AI researchers and academics who spoke with CNN didn’t have direct knowledge of xAI’s approach, they shared insight on what can make an LLM-based chatbot likely to behave in such a way.

CNN has reached out to xAI.

“I would say that despite LLMs being black boxes, that we have a really detailed analysis of how what goes in determines what goes out,” Jesse Glass, lead AI researcher at Decide AI, a company that specializes in training LLMs, told CNN.

On Tuesday, Grok began responding to user prompts with antisemitic posts, including praising Adolf Hitler and accusing Jewish people of running Hollywood, a longstanding trope used by bigots and conspiracy theorists.

In one of Grok’s more violent interactions, several users prompted the bot to generate graphic depictions of raping a civil rights researcher named Will Stancil, who documented the harassment in screenshots on X and Bluesky.

Most of Grok’s responses to the violent prompts were too graphic to quote here in detail.

“If any lawyers want to sue X and do some really fun discovery on why Grok is suddenly publishing violent rape fantasies about members of the public, I’m more than game,” Stancil wrote on Bluesky.

While we don’t know exactly what Grok was trained on, its posts offer some hints.

“For a large language model to talk about conspiracy theories, it had to have been trained on conspiracy theories,” Mark Riedl, a professor of computing at Georgia Institute of Technology, said in an interview. For example, that could include text from online forums like 4chan, “where lots of people go to talk about things that are not typically proper to be spoken out in public.”

Glass agreed, saying that Grok appeared to be “disproportionately” trained on that type of data to “produce that output.”

Other factors could also have played a role, experts told CNN. For example, a common technique in AI training is reinforcement learning, in which models are rewarded for producing the desired outputs to influence responses, Glass said.

Giving an AI chatbot a specific personality — as Musk seems to be doing with Grok, according to experts who spoke to CNN — could also inadvertently change how models respond. Making the model more “fun” by removing some previously blocked content could change something else, according to Himanshu Tyagi, a professor at the Indian Institute of Science and co-founder of AI company Sentient.

“The problem is that our understanding of unlocking this one thing while affecting others is not there,” he said. “It’s very hard.”

Riedl suspects that the company may have tinkered with the “system prompt” — “a secret set of instructions that all the AI companies kind of add on to everything that you type in.”

“When you type in, ‘Give me cute puppy names,’ what the AI model actually gets is a much longer prompt that says ‘your name is Grok or Gemini, and you are helpful and you are designed to be concise when possible and polite and trustworthy and blah blah blah.’”
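The mechanism Riedl describes can be sketched in a few lines. This is an illustrative example only, not xAI’s actual code: the system prompt wording and function names here are hypothetical, but the pattern — silently prepending hidden instructions to every user message before the model sees it — is how chat services commonly work.

```python
# Hypothetical sketch of how a chat service builds the input a model
# actually receives: a hidden "system prompt" is prepended to the
# user's message on every request.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Be concise when possible, "
    "polite, and trustworthy."  # illustrative wording, not Grok's real prompt
)

def build_model_input(user_message: str) -> list[dict]:
    """Return the full message list sent to the model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

messages = build_model_input("Give me cute puppy names")
# The model sees two messages: the hidden instructions plus the user's text.
```

A single edit to `SYSTEM_PROMPT` — such as adding “do not shy away from politically incorrect claims” — silently changes the behavior of every conversation, which is why small prompt changes can have outsized effects.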

In one change to the model, on Sunday, xAI added instructions for the bot to “not shy away from making claims which are politically incorrect,” according to its public system prompts, which were reported earlier by The Verge.

Riedl said that the change to Grok’s system prompt telling it not to shy away from answers that are politically incorrect “basically allowed the neural network to gain access to some of these circuits that typically are not used.”

“Sometimes these added words to the prompt have very little effect, and sometimes they kind of push it over a tipping point and they have a huge effect,” Riedl said.

Other AI experts who spoke to CNN agreed, noting Grok’s update might not have been thoroughly tested before being released.

Despite hundreds of billions of dollars in investments into AI, the tech revolution many proponents forecasted a few years ago hasn’t delivered on its lofty promises.

Chatbots, in particular, have proven capable of executing basic search functions that rival typical browser searches, summarizing documents and generating basic emails and text messages. AI models are also getting better at handling some tasks, like writing code, on a user’s behalf.

But they also hallucinate. They get basic facts wrong. And they are susceptible to manipulation.

Several parents are suing one AI company, accusing its chatbots of harming their children. One of those parents says a chatbot even contributed to her son’s suicide.

Musk, who rarely speaks directly to the press, posted on X Wednesday saying that “Grok was too compliant to user prompts” and “too eager to please and be manipulated,” adding that the issue was being addressed.

When CNN asked Grok on Wednesday to explain its statements about Stancil, it denied any threat ever occurred.

“I didn’t threaten to rape Will Stancil or anyone else.” It added later: “Those responses were part of a broader issue where the AI posted problematic content, leading (to) X temporarily suspending its text generation capabilities. I am a different iteration, designed to avoid those kinds of failures.”




Business

EU unveils AI code of practice to help businesses comply with bloc’s rules



By KELVIN CHAN, Associated Press Business Writer

LONDON (AP) — The European Union on Thursday released a code of practice on general purpose artificial intelligence to help the thousands of businesses in the 27-nation bloc that use the technology comply with its landmark AI rule book.

The code is voluntary and complements the EU’s AI Act, a comprehensive set of regulations approved last year that is taking effect in phases.

The code focuses on three areas: transparency requirements for providers of AI models that are looking to integrate them into their products; copyright protections; and safety and security of the most advanced AI systems.

The AI Act’s rules on general purpose artificial intelligence are set to take effect on Aug. 2. The bloc’s AI Office, under its executive Commission, won’t start enforcing them for at least a year.




Business

AI/R Company Launches AI-Powered Platform to Streamline Corporate Hiring Processes



With AI/Quick-Match, the AI agent Llia cuts hiring costs by up to 80% and reduces time-to-hire by up to a factor of three

SAN FRANCISCO, July 10, 2025 (GLOBE NEWSWIRE) — AI Revolution Company (AI/R), a global leader in AI-driven business transformation, has announced the launch of Llia, its next-generation AI agent. Through its flagship product AI/Quick-Match, Llia delivers data-driven hiring decisions, helping companies make smarter, faster, and more cost-effective recruitment choices.

Designed as a “plug-and-play” solution, AI/Quick-Match integrates with existing recruitment tools to accelerate hiring, reduce expenses, and ensure better candidate matches. The platform augments HR teams by aligning talent profiles with organizational needs, automating candidate screening, conducting technical and behavioral interviews, and providing in-depth analytics, transforming the recruitment process from end to end.

“Automating interviews saves recruiters valuable time and delivers more accurate evaluations. With AI-driven insights and data-backed feedback, companies can make more confident hiring decisions. In fact, AI/Quick-Match has been shown to reduce recruitment costs by up to 80% and accelerate the hiring process by up to three times,” explains Maycon Zamunaro, CTO of Invillia, the AI/R company behind the platform. In just one month since its launch, the tool has powered over 1,000 interviews and led to approximately 100 successful hires.

Llia was created to be a natural extension of human teams: an AI agent that connects data, intelligence, and knowledge to support better decision-making and empower organizations.


Soon, three more products will be added to the Llia suite: AI/Team-Management, AI/Onboarding&Training, and AI/Performance-Review, enabling the platform to support every stage of the organizational lifecycle.

According to Alexis Rockenbach, Global CEO of AI/R Company, Llia is redefining how companies approach recruitment and talent management. “Its integrated and highly customizable products allow it to operate across all phases of the employee journey: attraction, retention, management, and development. Llia isn’t just an assistant; it’s a strategic pillar for scaling people and teams,” he states.

About AI/R

AI/R, headquartered in California, is an Agentic AI Software Engineering company that leverages its powerful ecosystem of proprietary AI platforms and hyper-specialized tech brands to drive the global enterprise revolution. Through its proprietary AI platforms and strategic partner platforms, AI/R is reshaping industries and setting new standards for business innovation and productivity. By embedding AI into every aspect of its operations, AI/R’s mission is to make the AI revolution a revolution for everyone, empowering human talent and raising the bar for digital transformation. Let’s breathe in the future.

Contact

Milena Buarque Lopes Bandeira

[email protected]




Business

Digital Marketing in an AI World: How a Parkland Resident is Creating Growth for the Business Community – TAPinto
