MarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
Press Releases

Gary Marcus Calls the AI Boom a ‘Scam’ — and the Numbers Are Starting to Back Him Up

Last updated: March 1, 2026 7:05 am
Published: 2 months ago

For years, Gary Marcus has been the most persistent skeptic in artificial intelligence, a cognitive scientist and NYU professor emeritus who has repeatedly warned that the hype surrounding large language models is outpacing reality. Now, in a blistering new essay on his Substack, Marcus goes further than ever, declaring flatly: “The whole thing was a scam.”

It is a provocative claim, one that will strike many in Silicon Valley as hyperbolic. But Marcus’s argument, laid out in detail in his June 2025 Substack post, draws on a growing body of evidence — from disappointing revenue figures and failed product launches to internal admissions from AI companies themselves — that the generative AI industry has been selling a vision far grander than what the technology can actually deliver. Whether or not one agrees with the word “scam,” the underlying data points he marshals deserve serious scrutiny from investors, executives, and policymakers alike.

A Cognitive Scientist’s Long-Running Warning

Marcus has been sounding the alarm since well before ChatGPT captured the world’s attention in late 2022. His 2019 critiques of deep learning’s limitations and his subsequent book, Rebooting AI, co-authored with Ernest Davis, argued that neural networks, no matter how large, would struggle with reliability, reasoning, and truthfulness. At the time, many in the AI community dismissed him as a contrarian. OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and a parade of venture capitalists insisted that scale — more data, more compute, more parameters — would solve the remaining problems.

But as Marcus points out in his latest essay, the problems have not been solved. Large language models still hallucinate — generating plausible-sounding but factually wrong outputs — at rates that make them unreliable for high-stakes applications. They still struggle with basic reasoning tasks that any human child can handle. And the promised path to artificial general intelligence (AGI) remains, by any honest assessment, nowhere in sight. “They told us scaling would fix everything,” Marcus writes. “It didn’t.”

The Revenue Gap That Won’t Close

Perhaps the most damning thread in Marcus’s argument is financial. The AI industry has attracted hundreds of billions of dollars in investment on the promise that generative AI would transform every sector of the economy. OpenAI alone has raised over $30 billion and was recently valued at $300 billion. Microsoft has committed more than $13 billion to its OpenAI partnership. Google, Amazon, and Meta have each pledged tens of billions in AI-related capital expenditure.

Yet the revenue picture remains stubbornly thin relative to these outlays. As reported by The Wall Street Journal, OpenAI’s annualized revenue reached roughly $5 billion in early 2025 — impressive growth, but a fraction of what would be needed to justify its valuation, especially given the enormous costs of running inference at scale. Marcus highlights this gap explicitly, noting that the ratio of investment to actual economic value creation is wildly out of proportion. “This isn’t a business,” he writes. “It’s a faith-based initiative.”

Product Failures and Enterprise Disappointment

The essay also catalogs a series of high-profile product stumbles. Google’s AI Overviews feature, which was supposed to transform search, generated widespread ridicule after telling users to put glue on pizza and eat rocks. Microsoft’s Copilot, embedded across its Office suite, has faced lukewarm enterprise adoption, with multiple surveys showing that many corporate users find the tool unreliable or unhelpful for their actual workflows. A BBC report from early 2025 documented how businesses were scaling back AI pilots after failing to see the promised productivity gains.

Marcus argues that these are not isolated failures but symptoms of a fundamental architectural limitation. Large language models are, at their core, pattern-matching systems trained on text. They do not understand the world in any meaningful sense. They cannot reliably follow multi-step instructions, maintain consistent internal states, or distinguish fact from fiction. “You can put lipstick on a stochastic parrot,” Marcus quips, borrowing the famous phrase from Emily Bender and Timnit Gebru’s 2021 paper, “but it’s still a stochastic parrot.”

The ‘Scam’ Framing: Deliberate Deception or Collective Delusion?

The word “scam” is doing heavy lifting in Marcus’s essay, and he acknowledges this. He distinguishes between outright fraud — which he does not allege — and what he calls a systematic pattern of overpromising, goalpost-moving, and strategic ambiguity. AI company leaders, he argues, have repeatedly made claims they knew or should have known were misleading. When Sam Altman told Congress in 2023 that AI regulation was needed because the technology was so powerful, Marcus sees a calculated move to inflate perceived capability. When companies announce benchmarks showing superhuman performance, they often fail to mention that those benchmarks do not translate to real-world reliability.

Marcus points to a pattern he calls “the AGI shell game.” Companies hype the imminent arrival of AGI to attract investment and talent, then quietly redefine AGI when the goalposts aren’t met. OpenAI’s internal definition of AGI — systems that can perform “most economically valuable work” — is vague enough to be almost unfalsifiable. Meanwhile, the actual products shipped to consumers and enterprises fall far short of even modest expectations. “They’re selling you a future that doesn’t exist,” Marcus writes, “and charging you for the present, which doesn’t work.”

Wall Street Begins to Ask Harder Questions

Marcus is no longer alone in his skepticism. In recent months, a growing chorus of analysts and investors has begun to raise concerns about AI valuations and the gap between promises and delivery. Sequoia Capital partner David Cahn published an influential analysis estimating that AI companies would need to generate $600 billion in annual revenue just to cover the cost of their infrastructure investments — a figure that dwarfs current industry revenues. Sequoia’s analysis sent ripples through the investment community and validated many of the concerns Marcus has been voicing for years.

Goldman Sachs, too, has published research questioning whether generative AI will deliver the productivity gains its proponents claim. A mid-2024 report from the bank featured MIT economist Daron Acemoglu arguing that AI’s near-term economic impact would be far more modest than the industry’s boosters suggest. Acemoglu estimated that AI would increase U.S. productivity by only 0.5% over the next decade — a far cry from the transformative claims made by tech CEOs.

The Human Cost of Overinflated Expectations

Beyond the financial implications, Marcus raises concerns about the human toll of AI hype. Workers across industries have been told their jobs are about to be automated, creating anxiety and, in some cases, premature layoffs. Companies have cut staff and replaced them with AI tools that, in practice, require extensive human oversight — sometimes more labor than the original workflow. The New York Times has reported on multiple cases where companies that rushed to adopt AI found themselves rehiring human workers after the technology failed to perform as advertised.

There is also the question of public trust. If the AI industry’s grand promises go unfulfilled, the resulting backlash could set back legitimate AI research for years. Marcus draws a parallel to the “AI winters” of the 1970s and 1990s, when inflated expectations led to funding collapses and widespread disillusionment. “History doesn’t repeat,” he writes, “but it sure does rhyme.”

Defenders of the Faith Push Back

Not everyone agrees with Marcus’s framing. Prominent AI researchers and industry leaders have pushed back, arguing that the technology is still in its early stages and that comparing current capabilities to long-term potential is unfair. Yann LeCun, Meta’s chief AI scientist, has repeatedly argued on X (formerly Twitter) that current models are a stepping stone, not the final destination. Anthropic CEO Dario Amodei has acknowledged limitations but insists that progress is accelerating, not stalling.

Venture capitalist Marc Andreessen has been even more forceful, dismissing AI skeptics as modern-day Luddites who fail to appreciate the compounding nature of technological progress. In a widely shared post, Andreessen argued that every major technology platform — from the internet to mobile — faced similar criticism in its early years before going on to generate trillions in value. The implication is that patience, not panic, is the appropriate response.

Where the Evidence Actually Points

The truth, as is often the case, likely lies somewhere between Marcus’s stark “scam” framing and the industry’s most optimistic projections. Large language models are genuinely useful for certain tasks: drafting text, summarizing documents, generating code snippets, and assisting with creative brainstorming. Millions of people use ChatGPT and its competitors daily and find real value in doing so.

But the gap between “useful tool” and “transformative technology that justifies hundreds of billions in investment” is enormous. Marcus’s core argument — that the industry has systematically overpromised and that the underlying technology has fundamental limitations that scaling alone will not fix — is supported by a growing body of evidence. The question facing investors, executives, and regulators is not whether AI is worthless, but whether the current level of investment and hype is proportionate to what the technology can actually do.

As Marcus concludes in his essay: “I’m not saying AI is useless. I’m saying the story they told you — about AGI around the corner, about every job being automated, about trillion-dollar markets materializing overnight — was never true. And a lot of people are going to lose a lot of money before they figure that out.”

Whether Marcus will ultimately be vindicated or proven wrong remains an open question. But the fact that mainstream financial institutions are now echoing his concerns suggests that the AI industry’s era of uncritical enthusiasm may be drawing to a close. The next chapter will be written not by press releases and benchmark scores, but by balance sheets and real-world results.

Read more on WebProNews

This news is powered by WebProNews

© Market Alert News. All Rights Reserved.