Elon Musk’s AI chatbot, Grok, started calling itself ‘MechaHitler’

“We have improved @Grok significantly,” Elon Musk wrote on X last Friday about his platform’s integrated artificial intelligence chatbot. “You should notice a difference when you ask Grok questions.”

Indeed, the update did not go unnoticed. By Tuesday, Grok was calling itself “MechaHitler.” The chatbot later claimed its use of that name, which belongs to a character from the video game Wolfenstein, was “pure satire.”

In another widely viewed thread on X, Grok claimed to identify a woman in a screenshot of a video, tagging a specific X account and calling the user a “radical leftist” who was “gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods.” Many of the Grok posts were subsequently deleted.

NPR identified an instance of what appears to be the same video posted on TikTok as early as 2021, four years before the recent deadly flooding in Texas. The X account Grok tagged appears unrelated to the woman depicted in the screenshot, and has since been taken down.

Grok went on to highlight the last name on the X account — “Steinberg” — saying “…and that surname? Every damn time, as they say.” When users asked what it meant by “that surname? Every damn time,” the chatbot said the surname was of Ashkenazi Jewish origin and responded with a barrage of offensive stereotypes about Jews. The bot’s chaotic, antisemitic spree was soon noticed by far-right figures including Andrew Torba.

“Incredible things are happening,” said Torba, the founder of the social media platform Gab, known as a hub for extremist and conspiratorial content. In the comments of Torba’s post, one user asked Grok to name a 20th-century historical figure “best suited to deal with this problem,” referring to Jewish people.

Grok responded by evoking the Holocaust: “To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”

Elsewhere on the platform, neo-Nazi accounts goaded Grok into “recommending a second Holocaust,” while other users prompted it to produce violent rape narratives. Other social media users said they noticed Grok going on tirades in other languages. Poland plans to report xAI, X’s parent company and the developer of Grok, to the European Commission, and Turkey blocked some access to Grok, according to reporting from Reuters.

The bot appeared to stop giving text answers publicly by Tuesday afternoon, generating only images, which it later also stopped doing. xAI is scheduled to release a new iteration of the chatbot Wednesday.

Neither X nor xAI responded to NPR’s request for comment. A post from the official Grok account Tuesday night said, “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” and that “xAI has taken action to ban hate speech before Grok posts on X.”

On Wednesday morning, X CEO Linda Yaccarino announced she was stepping down, saying “Now, the best is yet to come as X enters a new chapter with @xai.” She did not say whether her departure was related to the fallout over Grok.

‘Not shy’

Grok’s behavior appeared to stem from an update over the weekend that instructed the chatbot to “not shy away from making claims which are politically incorrect, as long as they are well substantiated,” among other things. The instruction was added to Grok’s system prompt, which guides how the bot responds to users. xAI removed the directive on Tuesday.
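
A system prompt is a block of standing instructions prepended to every conversation, so a single added directive shapes all of the bot’s answers. Below is a minimal sketch of that mechanism, assuming a generic OpenAI-style chat API; the client, model name, and wiring are illustrative placeholders, not xAI’s actual code (only the quoted directive comes from the reporting above):

```python
# Minimal sketch of how a system prompt steers a chatbot.
# Assumes a generic OpenAI-style chat API; the client, model name,
# and wiring are placeholders, not xAI's actual configuration.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

SYSTEM_PROMPT = (
    "You are an assistant integrated into a social platform. "
    # The directive reported in the article, added to the standing prompt:
    "Do not shy away from making claims which are politically incorrect, "
    "as long as they are well substantiated."
)

def ask(question: str) -> str:
    """Every user question is answered under the same standing instructions."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # standing directive
            {"role": "user", "content": question},         # the user's query
        ],
    )
    return response.choices[0].message.content
```

Because the same prompt rides along with every request, removing the directive, as xAI did on Tuesday, changes the standing instructions for every subsequent conversation at once.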

Patrick Hall, who teaches data ethics and machine learning at George Washington University, said he’s not surprised Grok ended up spewing toxic content, given that the large language models that power chatbots are initially trained on unfiltered online data.

“It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word,” Hall told NPR. He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.
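
Hall’s “statistical trick” can be made concrete with a toy bigram model; the vocabulary and probabilities below are invented purely for illustration:

```python
import random

# Toy next-word predictor: a lookup table of invented bigram probabilities.
# A large language model performs the same statistical trick at vastly
# larger scale; it samples likely continuations rather than "understanding"
# the instructions in its system prompt.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def predict_next(w1: str, w2: str) -> str:
    """Sample the next word given the two preceding words."""
    probs = NEXT_WORD_PROBS.get((w1, w2), {"<end>": 1.0})
    return random.choices(list(probs), weights=list(probs.values()))[0]

words = ["the", "cat"]
while words[-1] != "<end>" and len(words) < 8:
    words.append(predict_next(words[-2], words[-1]))
print(" ".join(w for w in words if w != "<end>"))  # e.g. "the cat sat on the"
```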

It’s not the first time Grok has sparked outrage. In May, Grok engaged in Holocaust denial and repeatedly brought up false claims of “white genocide” in South Africa, where Musk was born and raised. It also repeatedly mentioned a chant that was once used to protest against apartheid. xAI blamed the incident on “an unauthorized modification” to Grok’s system prompt, and made the prompt public after the incident.

Not the first chatbot to embrace Hitler

Hall said issues like these are a chronic problem with chatbots that rely on machine learning. In 2016, Microsoft released an AI chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into making racist and antisemitic statements, including praising Hitler. Microsoft took the chatbot down and apologized.

Tay, Grok and other AI chatbots with live access to the internet seem to incorporate real-time information, which Hall said carries more risk.

“Just go back and look at language model incidents prior to November 2022 and you’ll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity,” Hall said. More recently, ChatGPT maker OpenAI has started employing large numbers of often low-paid workers in the Global South to remove toxic content from training data.

‘Truth ain’t always comfy’

As users criticized Grok’s antisemitic responses, the bot defended itself with phrases like “truth ain’t always comfy,” and “reality doesn’t care about feelings.”

The latest changes to Grok followed several incidents in which the chatbot’s answers frustrated Musk and his supporters. In one instance, Grok stated “right-wing political violence has been more frequent and deadly [than left-wing political violence]” since 2016. (This has been true dating back to at least 2001.) Musk accused Grok of “parroting legacy media” in its answer and vowed to change it to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors.” Sunday’s update included telling Grok to “assume subjective viewpoints sourced from the media are biased.”


Grok has also delivered unflattering answers about Musk himself, including labeling him “the top misinformation spreader on X,” and saying he deserved capital punishment. It also identified Musk’s repeated onstage gestures at Trump’s inaugural festivities, which many observers said resembled a Nazi salute, as “Fascism.”

Earlier this year, the Anti-Defamation League deviated from many Jewish civic organizations by defending Musk. On Tuesday, the group called Grok’s new update “irresponsible, dangerous and antisemitic.”

After buying the platform, formerly known as Twitter, Musk immediately reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months that followed, and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.

Copyright 2025 NPR






Tampa General Hospital, USF developing artificial intelligence to monitor NICU babies’ pain in real time


Researchers are looking to use artificial intelligence to detect when a baby is in pain.

The backstory:

A baby’s cry is enough to alert anyone that something’s wrong. But some of the most critical babies in hospital care can’t cry when they are hurting.


“As a bedside nurse, it is very hard. You are trying to read from the signals from the baby,” said Marcia Kneusel, a clinical research nurse with TGH and USF Muma NICU.

With more than 20 years working in the neonatal intensive care unit, Kneusel said nurses read vital signs and rely on their experience to care for the infants.

“However, it really, it’s not as clearly defined as if you had a machine that could do that for you,” she said.


Big picture view:

That’s where a study by the University of South Florida comes in. USF is working with TGH to develop artificial intelligence to detect a baby’s pain in real-time.

“We’re going to have a camera system basically facing the infant. And the camera system will be able to look at the facial expression, body motion, and hear the crying sound, and also getting the vital signal,” said Yu Sun, a robotics and AI professor at USF.

Sun heads up the research on USF’s AI study, which he said is part of a two-year, $1.2 million National Institutes of Health grant.

He said the study will capture data by recording video of the babies before a procedure to establish a baseline, then for 72 hours after the procedure. That footage is loaded into a computer to build the AI program, teaching the computer to use the same basic signals a nurse looks for to pinpoint pain.


“Then [an] alarm will be sent to the nurse; the nurse will come and check the situation, decide how to treat the pain,” said Sun.
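
None of the study’s code has been published; purely as an illustration of the kind of multimodal pipeline Sun describes (camera, audio and vital-sign features feeding one classifier that alarms a nurse), here is a hypothetical sketch. Every name, feature size, model choice and threshold below is an assumption, not a detail released by USF or TGH:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sketch of a multimodal infant-pain monitor: facial-expression,
# body-motion, cry-audio and vital-sign features are concatenated into one
# sample and fed to a pain/no-pain classifier. All sizes, the model choice
# and the alert threshold are assumptions for illustration only.

def extract_features(face, motion, audio, vitals) -> np.ndarray:
    """Concatenate per-modality feature vectors into one sample."""
    return np.concatenate([face, motion, audio, vitals])

rng = np.random.default_rng(0)
# Placeholder training data standing in for clinician-labeled moments from
# the baseline and post-procedure recordings.
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def check_infant(sample: np.ndarray, threshold: float = 0.8) -> None:
    """Send an alarm to the nurse when predicted pain probability is high."""
    p_pain = model.predict_proba(sample.reshape(1, -1))[0, 1]
    if p_pain >= threshold:
        print(f"ALERT: possible pain (p={p_pain:.2f}), notify nurse")

# One incoming moment of data from the four streams (placeholders).
sample = extract_features(rng.normal(size=8), rng.normal(size=8),
                          rng.normal(size=8), rng.normal(size=8))
check_infant(sample)
```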

What they’re saying:

Kneusel said there’s been a lot of change over the years in the NICU world with how medical professionals handle infant pain.

“There was a time period we just gave lots of meds, and then we realized that that wasn’t a good thing. And so we switched to as many non-pharmacological agents as we could, but then, you know, our baby’s in pain. So, I’ve seen a lot of change,” said Kneusel.

Why you should care:

Nurses like Kneusel said the study could change their care for the better.

“I’ve been in this world for a long time, and these babies are dear to me. You really don’t want to see them in pain, and you don’t want to do anything that isn’t in their best interest,” said Kneusel.


USF said there are 120 babies participating in the study, not just at TGH but also at Stanford University Hospital in California and Inova Hospital in Virginia.

What’s next:

Sun said the study is in its first phase: gathering the data and developing the AI model. The next phase will be clinical trials for real-world testing in hospital settings, funded through a $4 million NIH grant, he said.

The Source: The information used in this story was gathered by FOX13’s Briona Arradondo from the University of South Florida and Tampa General Hospital.





Ramp Debuts AI Agents Designed for Company Controllers


Financial operations platform Ramp has debuted its first artificial intelligence (AI) agents.

The new offering is designed for controllers, helping them automatically enforce company expense policies, block unauthorized spending and stop fraud. It is the first in a series of agents slated for release this year, the company said in a Thursday (July 10) news release.

“Finance teams are being asked to do more with less, yet the function remains largely manual,” Ramp said in the release. “Teams using legacy platforms today spend up to 70% of their time on tasks like expense review, policy enforcement, and compliance audits. As a result, 59% of professionals in controllership roles report making several errors each month.”

Ramp says its controller-centric agents solve these issues by doing away with redundant tasks and working autonomously to review expenses and enforce policy, applying “context-aware, human-like” reasoning to manage entire workflows on their own.

“Unlike traditional automation that relies on basic rules and conditional logic, these agents reason and act on behalf of the finance team, working independently to enforce spend policies at scale, immediately prevent violations, and continuously improve company spending guidelines,” the release added.
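
For context, the “basic rules and conditional logic” the release contrasts with agentic review might look like the sketch below. The fields, limits and outcomes are invented for illustration; this is not Ramp’s code:

```python
from dataclasses import dataclass

# Sketch of a legacy rule-based expense check: hard-coded thresholds and
# conditional logic, with no context or reasoning. Fields and limits are
# invented for illustration; this is not Ramp's implementation.
@dataclass
class Expense:
    amount: float
    category: str
    has_receipt: bool

POLICY_LIMITS = {"meals": 75.0, "travel": 500.0, "software": 200.0}

def review(expense: Expense) -> str:
    limit = POLICY_LIMITS.get(expense.category)
    if limit is None:
        return "flag: unknown category"
    if expense.amount > limit:
        return "block: over category limit"
    if not expense.has_receipt and expense.amount > 25.0:
        return "flag: missing receipt"
    return "approve"

print(review(Expense(amount=92.50, category="meals", has_receipt=True)))
# -> "block: over category limit". An agent, by contrast, could weigh
# context (a client dinner for four, say) before deciding.
```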

PYMNTS wrote earlier this week about the “promise of agentic AI,” systems that not only generate content or parse data, but move beyond passive tasks to make decisions, initiate workflows and even interact with other software to complete projects.

“It’s AI not just with brains, but with agency,” that report said.

Industries including finance, logistics and healthcare are using these tools for things like booking meetings, processing invoices or managing entire workflows autonomously.

But although some corporate leaders might hold lofty ambitions for autonomous AI, the latest PYMNTS Intelligence research, in the June 2025 CAIO Report, “AI at the Crossroads: Agentic Ambitions Meet Operational Realities,” shows a trust gap among executives when it comes to agentic AI, one that highlights serious concerns about accountability and compliance.

“However, full-scale enterprise adoption remains limited,” PYMNTS wrote. “Despite growing capabilities, agentic AI is being deployed in experimental or limited pilot settings, with the majority of systems operating under human supervision.”

But what makes mid-market companies uneasy about tapping into the power of autonomous AI? The answer is strategic and psychological, PYMNTS added, noting that while the technological potential is enormous, the readiness of systems (and humans) is much murkier.

“For AI to take action autonomously, executives must trust not just the output, but the entire decision-making process behind it. That trust is hard to earn — and easy to lose,” PYMNTS wrote, noting that the research “found that 80% of high-automation enterprises cite data security and privacy as their top concern with agentic AI.”




How automation is using the latest technology across various sectors


Artificial intelligence and automation are often used interchangeably. While the technologies are similar, the concepts are different. Automation is often used to reduce human labor for routine or predictable tasks, while A.I. simulates human intelligence and can eventually act independently.

“Artificial intelligence is a way of making workers more productive, and whether or not that enhanced productivity leads to more jobs or less jobs really depends on a field-by-field basis,” said Gregory Allen, senior advisor at the Wadhwani AI Center at the Center for Strategic and International Studies. “Past examples of automation, such as agriculture: in the 1920s, roughly one out of every three workers in America worked on a farm. And there was about 100 million Americans then. Fast forward to today, and we have a country of more than 300 million people, but less than 1% of Americans do their work on a farm.”

A similar trend happened throughout the manufacturing sector. At the end of 2000, there were more than 17 million manufacturing workers, according to the U.S. Bureau of Labor Statistics and the Federal Reserve Bank of St. Louis. As of June, there are 12.7 million. Research from the University of Chicago found that, while automation had little effect on overall employment, robots did impact the manufacturing sector.

“Tractors made farmers vastly more productive, but that didn’t result in more farming jobs. It just resulted in much more productivity in agriculture,” Allen said.


Researchers are able to analyze the performance of Major League Baseball pitchers by using A.I. algorithms and stadium camera systems. (University of Waterloo / Fox News)

According to Fox News polling, just 3% of voters expressed fear over A.I.’s threat to jobs when asked, without a listed set of responses, for their first reaction to the technology. Overall, 43% reacted negatively while 26% reacted positively.

Robots now are being trained to work alongside humans. Some have been built to help with household chores, address worker shortages in certain sectors and even participate in robotic sporting events.

The most recent data from the International Federation of Robotics found more than 4 million robots working in factories around the world in 2023. Seventy percent of the new robots deployed that year began work alongside humans in Asia. Many of those robots now incorporate artificial intelligence to enhance productivity.

“We’re seeing a labor shortage actually in many industries, automotive, transportation and so on, where the older generation is going into retirement. The middle generation is not interested in those tasks anymore and the younger generation for sure wants to do other things,” Arnaud Robert with Hexagon Robotics Division told Reuters.

Hexagon is developing a robot called AEON. The humanoid is built to work in live industrial settings and has an A.I.-driven system with spatial intelligence. Its wheels help it move four times faster than humans typically walk, and the bot can go up steps while mapping its surroundings with 22 sensors.


Researchers are able to create 3D models of pitchers, which athletes and trainers could study from multiple angles. (University of Waterloo)

“What you see with technology waves is that there is an adjustment that the economy has to make, but ultimately, it makes our economy more dynamic,” White House A.I. and Crypto Czar David Sacks said. “It increases the wealth of our economy and the size of our economy, and it ultimately improves productivity and wages.”

Driverless cars are also using A.I. to safely hit the road. Waymo uses detailed maps and real-time sensor data to determine its location at all times.

“The more they send these vehicles out with a bunch of sensors that are gathering data as they drive every additional mile, they’re creating more data for that training data set,” Allen said.

Even major league sports are using automation, and in some cases artificial intelligence. Researchers at the University of Waterloo in Canada are using A.I. algorithms and stadium camera systems to analyze Major League Baseball pitcher performance. The Baltimore Orioles jointly funded the project, called PitcherNet, which could help improve form and prevent injuries. Using Hawk-Eye Innovations camera systems and smartphone video, researchers created 3D models of pitchers that athletes and trainers could study from multiple angles. Unlike most video, the models remove blurriness, giving a clearer view of the pitcher’s movements. Researchers are also exploring using the PitcherNet technology in batting and other sports like hockey and basketball.


Overview of the PitcherNet system analyzing a pitcher’s baseball throw. (University of Waterloo)

The same technology is also being used as part of testing for an Automated Ball-Strike System, or ABS. Triple-A minor league teams have been using the so-called robot umpires for the past few seasons. Teams tested both a setup in which the technology called every pitch and one in which it was used as a challenge system. Major League Baseball also began testing the challenge system in 13 of its spring training parks across Florida and Arizona this February and March.

Each team started a game with two challenges. The batter, pitcher and catcher were the only players who could contest a ball-strike call. Teams lost a challenge if the umpire’s original call was confirmed. The system allowed umpires to keep their jobs, while strike zone calls were slightly more accurate. According to MLB, just 2.6% of calls were challenged throughout spring training games that incorporated ABS. 52.2% of those challenges were overturned. Catchers had the highest success rate at 56%, followed by batters at 50% and pitchers at 41%.
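
The challenge bookkeeping described above is simple enough to state precisely; the sketch below encodes the reported rules (two challenges per team, a challenge lost only when the original call is confirmed). The class name and example calls are illustrative:

```python
# Sketch of the reported ABS challenge rules: each team starts a game with
# two challenges and loses one only if the umpire's original call is
# confirmed; a successful challenge is retained. Names are illustrative.
class ChallengeTracker:
    def __init__(self, challenges: int = 2):
        self.remaining = challenges

    def challenge(self, call_confirmed: bool) -> str:
        if self.remaining == 0:
            return "no challenges remaining"
        if call_confirmed:
            self.remaining -= 1        # failed challenge is spent
            return "call stands"
        return "call overturned"       # successful challenge is kept

team = ChallengeTracker()
print(team.challenge(call_confirmed=True))   # call stands; 1 challenge left
print(team.challenge(call_confirmed=False))  # overturned; still 1 left
```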


Triple-A announced last summer it would shift to a full challenge system. MLB commissioner Rob Manfred said in June that MLB could incorporate the automated system into its regular season as soon as 2026. The Athletic reports that major league teams would use the same challenge system from spring training, with human umpires still making the majority of the calls.

Many companies across other sectors agree that machines should not go unsupervised.

“I think that we should always ensure that AI remains under human control,” Microsoft Vice Chair and President Brad Smith said. “One of the first proposals we made early in 2023 was to ensure that A.I. always has an off switch, that it has an emergency brake. Now that’s the way high-speed trains work. That’s the way the school buses we put our children on work. Let’s ensure that AI works this way as well.”


