Anthropic Tops OpenAI in Enterprise AI Share
Enterprise AI has a new leader. Anthropic now commands 32% of enterprise LLM usage across production workloads, overtaking OpenAI as businesses prioritize performance and safety over first-mover advantage in their AI deployments.
The shift marks a dramatic reversal from just two years ago. OpenAI dominated the enterprise LLM market in 2023, accounting for 50% of usage, while Anthropic held a 12% share. Today, OpenAI’s share has fallen to 25%, according to Menlo Ventures’ 2025 Mid-Year LLM Market Update, released July 31.
This market transformation comes as enterprise spending on large language models has more than doubled in just six months, rising from $3.5 billion in late 2024 to $8.4 billion by mid-2025. Companies aren’t just experimenting anymore—they’re deploying AI at scale.
In a bid to be more enterprise-friendly, OpenAI has updated its enterprise pricing approach.
Code Generation Drives Anthropic’s Success
Anthropic’s breakthrough came through an unexpected door: helping developers write code. Claude quickly became the developer’s top choice for code generation, capturing a 42% market share, double OpenAI’s 21%.
This coding prowess has yielded tangible business results. Claude 3.5 Sonnet’s release in June 2024 not only impressed developers but also created entirely new markets. AI-powered integrated development environments, such as Cursor and Windsurf, emerged. App builders such as Lovable, Bolt, and Replit gained traction. The coding assistance market alone has grown to $1.9 billion.
Solve a real problem exceptionally well and the market follows: that appears to be Anthropic’s playbook. And the market has certainly followed.
Revenue Growth Tells the Story
The financial numbers reveal just how quickly things have changed. Anthropic’s annualized revenue increased from $1 billion in December 2024 to $4 billion by July 2025. That’s a 300% increase in seven months, a growth rate that would make it the fastest-scaling enterprise software company ever analyzed.
Some reports suggest that the company’s revenue run rate has continued to climb toward $5 billion by late July 2025. This isn’t normal growth. This is what happens when a technology meets an urgent, unmet need.
Meanwhile, OpenAI’s annualized revenue nearly doubled in the first seven months of 2025, reaching $12 billion. While impressive by any standard, OpenAI’s growth appears to be driven more by its 700 million weekly active ChatGPT users than by its dominance in the enterprise sector.
Big Names Sign Big Deals
Enterprise partnerships tell their own story about market confidence. Databricks and Anthropic struck a five-year, $100 million pact to sell artificial-intelligence tools to businesses. This integration brings Claude models directly into Databricks’ Data Intelligence Platform, serving over 10,000 organizations, including Block, Comcast, Condé Nast, Rivian, and Shell.
Deloitte has announced the launch of a Generative AI and Advanced AI Applications Certification Program in collaboration with Anthropic, targeting the certification of 15,000 Deloitte practitioners globally. This partnership, part of Deloitte’s $1.4 billion Project 120 investment, combines Deloitte’s Trustworthy AI framework with Anthropic’s Constitutional AI approach.
These aren’t experimental pilots. They’re strategic commitments from major enterprises betting on Anthropic’s approach to AI safety and performance.
The Wider Market Reshapes
Google has taken third place, claiming 20% of enterprise usage with strong adoption of its Gemini models. Meta’s Llama holds 9%, while DeepSeek accounts for just 1% of enterprise usage.
The market has also spoken clearly in the debate over open versus closed models. Closed-source models now dominate, powering 87% of enterprise workloads, while open-source usage fell from 19% to 13% over the past six months. This consolidation reflects widening performance gaps and enterprise reluctance to use APIs from the Chinese companies behind many of the recent top-performing open-source models.
Performance Beats Price Every Time
Here’s what’s fascinating about enterprise AI adoption: companies don’t shop for bargains. Only 11% of teams report changing model providers in the past year. Instead, 66% upgraded to newer models from their existing vendor.
When Claude 4 was released, it captured 45% of Anthropic users within one month, while Claude 3.5 Sonnet’s share fell from 83% to 16%. Even as individual models drop 10x in price, builders don’t pocket the savings by sticking with older models; they migrate en masse to whatever performs best.
This behavior explains why 74% of startups and 49% of enterprises report that inference accounts for the majority of their compute usage. Companies have moved from testing AI to running it in production, where performance directly impacts their bottom line.
Enterprise Pricing Reflects Premium Value
Anthropic’s confidence shows in its pricing. The Claude Enterprise plan requires a yearly commitment at $60 per seat per month, with a minimum of 70 seats. That works out to a minimum annual investment of just over $50,000.
This pricing strategy makes sense when you consider what enterprises get: single sign-on, role-based access controls, an expanded 500K context window, and native GitHub integration. For companies where AI has become mission-critical, these features justify the premium.
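For readers who want to see the arithmetic behind that minimum commitment, here is a minimal sketch using only the figures cited above (seat price, seat minimum, annual billing); actual contract terms and negotiated pricing may differ.

```python
# Rough estimate of the minimum annual cost of the Claude Enterprise plan,
# based solely on the figures cited in this article: $60 per seat per month,
# billed annually, with a 70-seat minimum. Actual contract terms may vary.

PRICE_PER_SEAT_PER_MONTH = 60   # USD
MINIMUM_SEATS = 70
MONTHS_PER_YEAR = 12

def annual_commitment(seats: int = MINIMUM_SEATS) -> int:
    """Return the yearly commitment in USD for a given seat count."""
    seats = max(seats, MINIMUM_SEATS)  # the plan enforces a 70-seat floor
    return seats * PRICE_PER_SEAT_PER_MONTH * MONTHS_PER_YEAR

if __name__ == "__main__":
    print(f"Minimum annual commitment: ${annual_commitment():,}")  # $50,400
```

At the 70-seat floor, the commitment lands at $50,400 per year, which is where the “just over $50,000” figure comes from.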
Regulation Shapes the Landscape
The regulatory environment is accelerating enterprise consolidation around trusted providers. The EU AI Act categorizes AI systems by risk level, with corresponding compliance requirements for each tier. Non-compliance can result in fines of up to €35 million or 7% of a company’s global annual revenue, whichever is higher.
Organizations must now conduct risk assessments, maintain detailed documentation, ensure robust human oversight, and implement governance frameworks for high-risk AI applications. This regulatory pressure creates opportunities for safety-focused providers like Anthropic, whose Constitutional AI approach aligns with emerging compliance requirements.
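To get a feel for the exposure those caps imply, here is a small illustrative sketch. It assumes the higher of the two figures applies, which reflects the Act’s headline cap for the most serious violations but should be checked against the specific violation category in the final text; the example revenue figure is hypothetical.

```python
# Illustrative only: estimate maximum fine exposure under the EU AI Act's
# headline cap of EUR 35 million or 7% of global annual revenue.
# Assumption (not stated in the article): the higher of the two figures applies.

FLAT_CAP_EUR = 35_000_000
REVENUE_SHARE_CAP = 0.07  # 7% of global annual revenue

def max_fine_exposure(global_annual_revenue_eur: float) -> float:
    """Return the larger of the flat cap and the revenue-based cap."""
    return max(FLAT_CAP_EUR, REVENUE_SHARE_CAP * global_annual_revenue_eur)

# Hypothetical example: a company with EUR 2 billion in global annual revenue
print(f"Maximum exposure: EUR {max_fine_exposure(2_000_000_000):,.0f}")  # EUR 140,000,000
```

For any company with more than €500 million in global annual revenue, the 7% cap dominates the flat €35 million figure, which is why large enterprises treat compliance as a board-level issue.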
What This Means for Enterprise AI
The market has spoken, and it values performance and safety over being first. Anthropic’s rise demonstrates that in enterprise AI, solving specific problems exceptionally well beats general-purpose solutions.
Menlo Ventures projects enterprise LLM spending will reach $13 billion by year-end 2025. As this market matures, we’re seeing clear patterns emerge. Enterprises want the best models, not the cheapest. They prefer proven closed-source solutions over experimental open-source alternatives. They are willing to pay premium prices for features that ensure security, compliance, and seamless integration with existing workflows.
The rapid shift from OpenAI to Anthropic also illustrates how quickly fortunes can change in the AI industry. Two years ago, OpenAI seemed unassailable. Today, it’s fighting to regain lost ground in the enterprise market it once dominated.
For businesses evaluating AI providers, the message is clear: look beyond hype to actual performance, especially in your specific use cases. And prepare for a market that will continue evolving at breakneck speed.
Key Takeaways:
- Anthropic has captured 32% of the enterprise AI market, overtaking OpenAI, whose share has fallen to 25%, in a dramatic two-year reversal.
- Enterprise spending on LLMs has more than doubled to $8.4 billion in six months and is projected to reach $13 billion by year-end 2025.
- Claude dominates code generation with a 42% market share, creating a $1.9 billion coding assistance market.
- Anthropic’s revenue exploded from $1 billion to $4 billion in seven months, while OpenAI reached $12 billion primarily through consumer growth.
- Major enterprise deals include Databricks’ $100 million partnership and Deloitte’s commitment to certify 15,000 professionals on Claude.
- Enterprises prioritize performance over cost, with 66% upgrading to newer models rather than switching providers for savings.
