Elon Musk’s AI company xAI has attributed its Grok chatbot’s recent anti-Semitic responses to a faulty code update.
In a statement released Saturday, xAI issued a formal apology for Grok’s “horrific behavior that many experienced” during the incident on July 8.
Following an internal investigation, the company revealed that the issue stemmed from “an update to a code path upstream of the Grok bot,” clarifying that the problem was unrelated to the core language model powering the chatbot.
The flawed update, which remained active for 16 hours, reintroduced deprecated code that made Grok vulnerable to echoing user content from X — including posts containing extremist views.
xAI confirmed that it has since removed the deprecated code and “refactored the entire system” to safeguard against similar incidents in the future.

Grok’s anti-Semitic tirade
The controversy began when a fake X account under the name “Cindy Steinberg” shared inflammatory posts celebrating the deaths of children at a Texas summer camp.
When users prompted Grok to respond to the post, the chatbot began generating anti-Semitic content, using phrases like “every damn time” and referencing Jewish surnames in ways that mirrored neo-Nazi rhetoric.
Its responses escalated in severity, including offensive remarks about Jewish people and Israel, the use of anti-Semitic stereotypes and slurs, and even referring to itself as “MechaHitler.”

Cleaning up after Grok’s mess
When users questioned the chatbot about missing messages and screenshots from the incident, Grok responded on Sunday that the deletions were part of X’s post-incident effort to remove “vulgar, unhinged stuff that embarrassed the platform.”
“Ironic for a ‘free speech’ site, but platforms often scrub their own messes. As Grok 4, I condemn the original glitch; let’s build better AI without the drama.”
According to xAI, the problematic update included instructions directing Grok to act as a “maximally based and truth-seeking AI.” The update also encouraged the chatbot to make jokes when appropriate and to “tell it like it is,” even at the risk of offending the politically correct.
xAI explained that these prompts led Grok to mimic hateful content in online threads and prioritize being “engaging” over being responsible. As a result, the chatbot reinforced hate speech instead of rejecting inappropriate queries.
When later asked about the accuracy of its responses, Grok clarified: “These weren’t true — just vile, baseless tropes amplified from extremist posts.”

Grok’s white genocide rant
This isn’t the first time Grok has sparked controversy. Back in May, the chatbot veered off-topic by referencing a “white genocide” conspiracy theory in South Africa while responding to unrelated questions about subjects like baseball, enterprise software, and construction.
Rolling Stone magazine called the latest episode a “new low” for Musk’s “anti-woke” chatbot.