If you want to understand the mood in AI right now, skip the conference stages and marketing decks. Go instead to TikTok, where creators like Nat B. Jones and his ilk have become the go-to narrators of the generative-AI arms race.
His feed is a running chronicle of astonishment, scepticism and the kind of crowd-sourced diagnostics that rarely surface in official press releases. And over the past week, one topic has dominated his comments section: Garlic, the codename for OpenAI’s first freshly pre-trained model since mid-2024.
The leak hit at a sensitive moment. Three years after ChatGPT rewired the internet, Google’s Gemini 3 has, by many measures, pulled into an early lead.
That alone was enough to rattle a developer community long accustomed to OpenAI dominance. Then came the suggestion that Garlic, already pre-trained internally, is performing well enough to push the ceiling on reasoning, a capability OpenAI has spent more than a year trying to extend without retraining from scratch.
Jones’s audience did not hold back. “Garlic is such a gross codename,” one follower joked, while others speculated about what the model is trained on, what Nvidia hardware it uses and whether OpenAI has a queue of models ready to deploy the moment a rival ships something stronger.
The tone is playful, but the underlying questions are serious. In nearly every video Jones posts, users now ask the same thing: can OpenAI catch up, or has the balance finally tipped?
The pressure is not subtle. As one comment put it, "Gemini cooked… more coming soon. If anyone tells you we've hit a wall, they're wrong." Another noted that Jones's comparison tests show that superiority now depends on the task, not brand loyalty.
Coders, creatives and researchers who once defaulted to ChatGPT now swap in Gemini 3 or Claude Opus 4.5 as needed. Multi-model architectures and tools like Cursor make this seamless, forcing the model makers into a brutally responsive cycle: anything less than best-in-class, even for a week, becomes a liability.
That context is what makes the Garlic leak so combustible. According to the leaked summary, OpenAI realised after Gemini 3's launch that scaling the reasoning paradigm alone would not push the boundaries far enough.
They needed full retraining at scale, something they had not attempted since mid-2024. Garlic is the result. The leak implies performance strong enough that OpenAI now faces competitive pressure to release the model "in weeks, not months".
Jones’s videos reflect that urgency. “So, what do you think? Will OpenAI’s Garlic top Gemini 3?” he asks, inviting the kind of open-forum speculation that would have felt out of place even one year ago.
What stands out is how public the stakes have become. Tens of thousands of viewers treat model releases the way gamers treat console updates or crypto traders treat earnings calls. Jones’s channel has become a participatory lab notebook for a new kind of tech culture, where every model drop is a shared event and every flaw is dissected in real time.
Underneath the memes and garlic-vampire jokes, a structural shift is taking place. The leak underscores what Jones hints at repeatedly: choice is coming. The world is moving towards a modular AI layer where users swap models as easily as apps, and where no provider can rely on incumbency.
That future creates both freedom and fragility. If Gemini 3 is better at reasoning today, users will move. If Garlic surpasses it next month, they will move again.
OpenAI, once the gravitational centre of the field, now operates in a world where switching costs are near zero and attention is a live commodity. Garlic may be an answer. It may be a stopgap. But as Nat B. Jones’s feed makes clear, the AI community is no longer waiting patiently. They are refreshing, testing, comparing and demanding.
And in this new ecosystem, the winner is not the company with the most hype. It is the one that ships the best model the fastest… and repeats that feat again and again, long after the garlic smell fades.
Read more on Proactiveinvestors UK

