So, here’s another one, decidedly from the pessimistic camp. It’s a take from independent research firm the MacroStrategy Partnership, which advises 220 institutional clients, in a note written by analysts including Julien Garran, who previously led UBS’s commodities strategy team.
Let’s start with the boldest claim first: it’s not just that AI is in a bubble, but that it’s one 17 times the size of the dot-com bubble, and even four times bigger than the 2008 global real-estate one.
And to get that number, you have to go way back to the 19th-century Swedish economist Knut Wicksell. Wicksell’s insight was that capital is efficiently allocated when the cost of debt to the average corporate borrower sits about two percentage points above nominal GDP growth. Only now has that measure turned positive again, after a decade of Fed quantitative easing pushed corporate bond spreads artificially low.
He then calculates the Wicksellian deficit, which, to be clear, is not only AI spending but also includes housing and office real estate, NFTs and venture capital. That’s how you get the chart on misallocation: a lot of variables, but think of it as the portion of GDP misallocated thanks to artificially low interest rates.
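To make the arithmetic concrete (the note’s exact construction isn’t reproduced here, so read this as an illustrative gloss rather than Garran’s own formula): the Wicksellian spread is roughly spread = average corporate borrowing cost − (nominal GDP growth + 2 percentage points). When that spread is negative, debt is cheaper than Wicksell’s efficiency threshold, and the capital deployed during those stretches, whether into data centers, real estate, NFTs or venture capital, is what accumulates into the misallocation, or Wicksellian deficit, tally.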
But he also took aim at the large language models themselves. For instance, he highlights one study showing that task-completion rates at a software company ranged from 1.5% to 34%, and that even the 34% level could not be reached consistently. Another chart, previously circulated by Apollo economist Torsten Slok based on Commerce Department data, showed AI adoption rates at big companies now declining. He also showed some of his real-world tests, like asking an image generator to create a chessboard position one move before White wins, which it didn’t come close to achieving.
LLMs, he argues, are already at their scaling limits. “We don’t know exactly when LLMs might hit diminishing returns hard, because we don’t have a measure of the statistical complexity of language. To find out whether we have hit a wall we have to watch the LLM developers. If they release a model that cost 10x more, likely using 20x more compute than the previous one, and it’s not much better than what’s out there, then we’ve hit a wall,” he says.
And that’s what has happened: GPT-3 cost $50 million to train, GPT-4 cost $500 million, and GPT-5, costing $5 billion, was delayed and, when released, wasn’t even noticeably better than the previous version. It’s also easy for competitors to catch up.
“So, in summary; you can’t create an app with commercial value as it is either generic (games etc), which won’t sell, or it is regurgitated public domain (homework), or it is subject to copyright. It’s hard to advertise effectively, LLMs cost an exponentially larger amount to train each generation, with a rapidly diminishing gain in accuracy. There’s no moat on a model, so there’s little pricing power. And the people who use LLMs the most are using them to access compute that costs the developer more to provide than their monthly subscriptions,” he says.
His conclusion is very stark: not just that an economy already at stall speed will fall into recession as both the data-center and wealth effects plateau, but that those effects will reverse, just as they did when the dot-com bubble burst in 2001.
“The danger is not only that this pushes us into a zone 4 deflationary bust on our investment clock, but that it also makes it hard for the Fed and the Trump administration to stimulate the economy out of it. This means a much longer effort at reflation, a bit like what we saw in the early 1990s, after the S&L crisis, and likely special measures as well, as the Trump administration seeks to devalue the US$ in an effort to onshore jobs,” he says.
The firm’s investment recommendations are to be overweight resources and emerging markets (India and Vietnam in particular), underweight the AI and platform companies, and long gold equities.

