Elon Musk announced that his AI company, xAI, plans to retrain its Grok model using a newly curated knowledge base—one stripped of what he calls “garbage” and “uncorrected data”—by first using the AI to revise history itself.
In a post on X Saturday, Musk said the forthcoming Grok 3.5 would feature “advanced reasoning” capabilities and would be tasked with “rewriting the entire corpus of human knowledge,” correcting inaccuracies and filling in gaps.
Once this revised dataset is created, Grok will be retrained on it. Musk argued that existing foundation models suffer from being trained on flawed, unfiltered data.

Musk’s renewed battle against “woke culture”
Elon Musk has frequently criticized rival AI models—such as ChatGPT from OpenAI, a company he co-founded—for being politically biased and filtering out content that doesn’t align with mainstream narratives.
For years, he’s pushed to develop products free from what he views as the constraints of political correctness, branding his own AI model, Grok, as “anti-woke.”
When he acquired Twitter in 2022, Musk rolled back many of the platform’s content-moderation and anti-misinformation policies. This led to a surge of unverified conspiracy theories, extremist content, and fake news—some of which Musk himself shared.
To combat misinformation, he expanded a feature called “Community Notes,” which enables users to collaboratively add context or corrections to prominent posts that may be misleading.
Criticism directed at Grok’s retraining process
Musk’s announcement drew sharp criticism, including from Gary Marcus, AI entrepreneur and professor emeritus of neural science at NYU, who likened the plan to something out of a dystopian novel.
“Straight out of 1984,” Marcus posted on X. “You couldn’t get Grok to reflect your personal beliefs, so now you’re going to rewrite history to make it fit your worldview.”

Bernardino Sassoli de’ Bianchi, a professor of logic and philosophy of science at the University of Milan, expressed deep concern over Musk’s plan in a LinkedIn post, saying he was “at a loss for words to comment on how dangerous” it is.
“When powerful billionaires start treating history as something flexible just because it doesn’t fit their worldview, this isn’t innovation anymore — it’s narrative manipulation,” he wrote. “Altering training data to reflect an ideology is wrong on every possible level.”
Musk’s push for “factual” content fuels conspiracy theories and misinformation
As part of his plan to revamp Grok, Musk urged X users to submit “divisive facts” to help train the AI—specifically asking for content that is “politically incorrect, but nonetheless factually true.”
The callout quickly drew responses filled with conspiracy theories and widely debunked extremist claims, including Holocaust distortion, vaccine misinformation, racist pseudoscience about intelligence, and climate change denial.