
Business organizations worldwide have shifted decisively toward Anthropic's artificial intelligence models, making the company the leader in enterprise AI deployment and pushing OpenAI into second place.
Anthropic now accounts for 32% of enterprise large language model usage, while OpenAI holds 25%, according to a Menlo Ventures analysis released Thursday. That marks a dramatic reversal from the competitive landscape of just two years ago.
The reversal proves particularly striking when examining historical trends. OpenAI dominated half the enterprise market in 2023, whereas Anthropic held merely 12% during that period. The intervening years witnessed OpenAI’s corporate influence diminish substantially as Anthropic’s business adoption climbed consistently upward.
Google’s enterprise model deployment has similarly expanded throughout recent years, capturing 20% of current business usage.
Programming is where Anthropic's lead is most commanding: the company holds 42% of enterprise coding workloads, more than double OpenAI's 21% share, underscoring Claude's particular strength in developer-focused tasks.
Claude 3.5 Sonnet’s June 2024 launch established the foundation for Anthropic’s remarkable ascension, with Claude 3.7 Sonnet’s February 2025 release further accelerating this upward trajectory.
These findings mirror widespread industry observations suggesting enterprise developers gravitate toward Claude over ChatGPT for professional applications. OpenAI retains significant consumer market strength, processing over 2.5 billion daily ChatGPT prompts according to recent company disclosures.
Business organizations demonstrate clear preference for proprietary models from Anthropic and OpenAI rather than open-source alternatives. More than half of surveyed enterprises avoid open-source models entirely, with only 13% of corporate workloads utilizing such solutions as of mid-2025 — declining from 19% earlier this year. Meta continues leading the open-source segment despite this overall trend.
Foundation Models Transform Computing Infrastructure
Advanced language models extend far beyond generative AI applications, fundamentally reshaping computational paradigms. Their evolving capabilities and economic dynamics will inevitably transform dependent systems, applications, and entire industry sectors.
Menlo Ventures’ November 2024 enterprise generative AI report raised several crucial questions about this foundational technology layer:
* Will LLM API demand maintain pace with consumer application growth?
* How rapidly will model intelligence advance?
* Can open-source solutions match closed-source frontier model performance, and how might this affect enterprise adoption?
* Where will long-term value ultimately concentrate?
Six months later, market data provides clearer answers:
Model API expenditure more than doubled during this brief timeframe — surging from $3.5 billion to $8.4 billion. Organizations increasingly prioritize production inference over model development, marking a significant shift from previous patterns.
Programming assistance emerged as AI’s breakthrough application. Foundation models now scale along dual axes: traditional pre-training and reinforcement learning with verification systems. While open-source development continues advancing, slower frontier breakthroughs from Western laboratories have moderated previous enterprise adoption increases. Consequently, corporate spending concentrates around select high-performing, proprietary models, establishing Anthropic as the new market leader.
Enterprise LLM Market Analysis
Anthropic has displaced OpenAI as the dominant enterprise player. OpenAI's early advantage has steadily eroded, falling from 50% of the enterprise market in 2023 to 25% today, exactly half its former share.
Anthropic now leads the enterprise market with 32%, ahead of OpenAI (25%) and Google (20%), the latter showing strong recent growth. Meta's Llama captures 9%, while DeepSeek, despite significant early-year publicity, accounts for just 1% of enterprise usage.
Anthropic's ascent began in earnest with Claude 3.5 Sonnet's June 2024 debut. Momentum accelerated with Claude 3.7 Sonnet's February 2025 launch, which introduced genuinely agent-first LLM capabilities. The May 2025 releases of Claude Sonnet 4, Claude Opus 4, and Claude Code solidified Anthropic's leadership position.
Three transformative industry trends powered Anthropic’s success:
Programming Applications Became AI's Primary Success Story
Claude quickly dominated developer preferences for code generation, capturing 42% market share, more than double OpenAI's 21%. Within twelve months, Claude helped transform a single-product ecosystem (GitHub Copilot) into a $1.9 billion marketplace. Claude 3.5 Sonnet's June 2024 release demonstrated how model-layer breakthroughs can reshape application markets, enabling entirely new categories including AI integrated development environments (Cursor, Windsurf), application builders (Lovable, Bolt, Replit), and enterprise coding agents (Claude Code, All Hands).
Reinforcement Learning with Verification Represents Intelligence Scaling's New Frontier
Throughout 2024, intelligence scaling primarily involved training increasingly large models on expanding datasets. Internet data availability now constrains this approach. Post-training through reinforcement learning with verifiable rewards (RLVR) became the next breakthrough for advancing capabilities. This strategy proves particularly effective in domains like programming, where outputs can be deterministically verified.
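To make "deterministically verified" concrete, here is a toy sketch of what a verifiable reward for code generation might look like: run the model's candidate program against known test cases and award credit only when every case passes. This is an illustration under simplified assumptions, not Anthropic's actual RLVR pipeline; the helper names are invented.

```python
def run_candidate(source: str, func_name: str, test_cases):
    """Execute model-generated source and score it against known test cases."""
    namespace = {}
    try:
        exec(source, namespace)  # run the candidate program in a fresh namespace
        fn = namespace[func_name]
        # Reward is 1.0 only if every test passes: deterministic verification,
        # unlike a learned reward model's fuzzy score.
        return float(all(fn(*args) == expected for args, expected in test_cases))
    except Exception:
        return 0.0  # crashes or missing definitions earn zero reward

# A correct candidate earns full reward; a buggy one earns nothing.
candidate = "def add(a, b):\n    return a + b\n"
tests = [((1, 2), 3), ((-1, 1), 0)]
reward = run_candidate(candidate, "add", tests)  # 1.0
```

In a real RLVR setup this binary (or graded) signal would feed a policy-gradient update; the key property shown here is that the reward comes from execution, not human judgment.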
Agent-Style Tool Integration Dramatically Enhances Model Utility
Early LLMs produced a complete response in a single interaction. Enabling step-by-step reasoning, iterative problem-solving, and external tool use across multiple exchanges, so-called "agent" functionality, makes them substantially more effective for practical applications; 2025 has become known as the "year of the agent." Anthropic pioneered training models to iteratively improve responses and integrate tools, including search, calculators, and coding environments, through MCP (Model Context Protocol), significantly enhancing both capability and user adoption.
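The multi-exchange loop described above can be sketched in a few lines: the model either requests a tool or returns a final answer, and the runtime dispatches tool calls and feeds results back until the model finishes. The model here is a stub so the control flow is runnable; this illustrates the agent pattern, not the actual MCP wire protocol.

```python
def fake_model(messages):
    """Stand-in for an LLM: requests a calculator once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "input": "19 * 21"}
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"answer": f"19 * 21 = {result}"}

# Toy tool registry; a real agent would expose search, code execution, etc.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def run_agent(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):               # bounded multi-turn loop
        reply = fake_model(messages)
        if "answer" in reply:                # model is done: return final answer
            return reply["answer"]
        output = TOOLS[reply["tool"]](reply["input"])  # dispatch the tool call
        messages.append({"role": "tool", "content": output})
    raise RuntimeError("agent exceeded step budget")

print(run_agent("What is 19 * 21?"))
```

The essential design point is the feedback edge: tool output re-enters the conversation as context, letting the model refine its next step rather than answer in one shot.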
Open-Source Enterprise Adoption Stagnates
Currently, 13% of AI workloads employ open-source models, down from 19% six months earlier. Meta's Llama maintains leadership in this segment, though April's Llama 4 launch underperformed in practical applications.
The segment remained active nonetheless, with notable recent releases from DeepSeek (V3, R1), ByteDance Seed (Doubao), MiniMax (Text-01), Alibaba (Qwen 3), Moonshot AI (Kimi K2), and Z.ai (GLM 4.5).
Open-source models provide clear enterprise benefits: enhanced customization, potential cost reductions, and private cloud or on-premises deployment capabilities. Despite these advantages and recent improvements, open-source solutions continue trailing frontier, closed-source models in performance by nine to twelve months.
This performance gap, combined with open-source model deployment complexity and enterprise reluctance toward APIs from Chinese companies — which produced many recent top-performing open-source models — has resulted in stagnant market share.
Startups similarly avoid open-source models for identical reasons. As one survey respondent explained:
“Currently, 100% of our production workloads are running on closed-source models. We initially started with Llama and DeepSeek for POCs, but they couldn’t keep up with the performance of closed-source over time.”
Performance Drives Enterprise Model Selection Over Pricing
Vendor switching remains relatively easy yet increasingly rare. Once builders commit to a provider, they typically stay, but they quickly migrate to newer, higher-performing models as those become available.
Survey data reveals: 66% of builders upgraded models within existing providers, while 23% made no model changes throughout the past year. Only 11% switched vendors entirely.
Performance consistently drives decisions. Builders choose frontier models over cheaper, faster alternatives, prioritizing and paying for superior performance. New releases trigger switching within weeks: within one month of the Claude 4 launch, Claude Sonnet 4 captured 45% of Anthropic usage, while Claude 3.5 Sonnet's share fell from 83% to 16%.
This creates an unexpected market dynamic: Even as individual models decrease 10x in price, builders don’t capture savings by using older models; they migrate collectively to the best-performing options.
AI Investment Shifts from Training to Inference
Computing expenditure is steadily shifting from model building and training toward inference, that is, running models in production. The shift is most pronounced among startups: 74% of builders report that the majority of their workloads are inference, up from 48% a year earlier. Large enterprises follow closely, with nearly half (49%) reporting most or nearly all of their compute as inference, up from 29% previously.
Future Market Outlook
Predicting AI’s trajectory can prove futile given weekly market changes, exciting model launches, foundation model capability advances, and plummeting costs. However, conditions clearly favor a new generation of enduring AI businesses built upon today’s foundational elements.
Menlo Ventures has partnered with founders building AI infrastructure for years, including Anthropic, Cleanlab, Goodfire, Mercor, OpenRouter, Pinecone, and Unstructured.

