The latest valuation of Anthropic at $380 billion, following a $30 billion funding round, highlights the explosive growth of the AI sector. But behind the numbers lies a growing vulnerability: the high cost structures of these companies, which make them susceptible to disruption by cheaper alternatives, as predicted by innovation theory.
High Costs and Slow Returns
Anthropic and OpenAI, two of the most prominent players in the AI space, are spending billions on salaries and infrastructure. OpenAI alone spent $6 billion on stock-based compensation in 2025, nearly half of its revenue; at most tech companies, that figure is closer to 6% of revenue.
OpenAI has also committed to spending $1.4 trillion on data centers in the coming years, much of it on GPU clusters that depreciate quickly. The company projects $14 billion in losses for 2026 and $115 billion in cumulative losses by 2029. This financial burden is difficult to unwind, as cutting salaries could lead to the loss of crucial talent, and GPUs lose value regardless of their output.
The Cost of Intelligence
AI companies charge users by the token. But as models become more capable and the tasks they tackle more complex, they consume far more tokens per task: OpenAI reported a 320-fold increase in token usage in 2025. So even though the price per token is falling rapidly, the total cost of intelligence keeps rising.
For heavy users like the article's author, who interacts with AI models frequently, token usage has increased tenfold in a year. The trend is not unique to one user; it is an industry-wide pattern.
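The dynamic above can be sketched with a few lines of arithmetic. All numbers here are hypothetical placeholders, not OpenAI's actual prices or volumes; the point is only that a steep per-token price cut can be overwhelmed by faster usage growth.

```python
# Illustrative sketch: falling per-token prices can still mean rising total
# spend when usage grows faster. All figures are hypothetical.

price_y1 = 10.00   # hypothetical $ per million tokens, year 1
price_y2 = 2.50    # hypothetical $ per million tokens, year 2 (75% cheaper)

tokens_y1 = 50     # hypothetical usage, millions of tokens per month
tokens_y2 = 500    # tenfold usage growth, as described above

cost_y1 = price_y1 * tokens_y1
cost_y2 = price_y2 * tokens_y2

print(f"Year 1: ${cost_y1:,.2f}/month")  # $500.00/month
print(f"Year 2: ${cost_y2:,.2f}/month")  # $1,250.00/month
# The per-token price fell 75%, yet total monthly spend rose 2.5x.
```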
Competition from Cheaper Alternatives
Chinese AI models are operating at one-sixth to one-quarter the cost of comparable U.S. systems. The performance gap between open-source and proprietary models has narrowed significantly, from 8% to 1.7% in a year. DeepSeek, a Chinese AI company, offers its API at roughly 90% below OpenAI’s prices.
These cheaper alternatives are capturing the market of users that the frontier companies cannot serve profitably. A disruptive entrant without the high salary costs of OpenAI can offer the same capabilities at a fraction of the price and still make a profit. Frontier companies, on the other hand, are left with the most demanding and costly users.
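A back-of-envelope check shows why this undercut can still be profitable, using the ratios cited above (an entrant at one-sixth the incumbent's cost, pricing 90% below). The incumbent's price and serving cost here are hypothetical placeholders chosen only to illustrate the arithmetic.

```python
# Hypothetical unit economics: can an entrant with one-sixth the cost
# profitably price 90% below the incumbent? Figures are illustrative.

incumbent_price = 10.00  # hypothetical $ per million tokens
incumbent_cost = 4.00    # hypothetical serving cost per million tokens

entrant_cost = incumbent_cost / 6        # one-sixth the cost, per the comparison above
entrant_price = incumbent_price * 0.10   # 90% below the incumbent's price

entrant_margin = entrant_price - entrant_cost
print(f"Entrant margin: ${entrant_margin:.2f} per million tokens")
# The entrant still clears a positive margin at a 90% discount, while the
# incumbent cannot match that price without selling below its own cost.
```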
The recent retirement of OpenAI’s GPT-4o model illustrates this dilemma. The company moved forward despite user backlash, even though only 0.1% of its 800 million users were still choosing the older model. This small but vocal group highlighted a key point: some users prefer a chatbot that feels like a friend over the latest and most capable model.
A RAND report reaches the same conclusion: cheaper alternatives are absorbing exactly the customers that frontier companies could never serve profitably, and the lag between frontier and commodity models is now measured in months, not years, just as innovation theory predicts.
The Future of AI Competition
The frontier companies’ best defense is inertia, as switching costs can delay disruption. However, as open-source alternatives improve, these switching costs are decreasing, and disruption is inevitable. The author of the article notes that while intelligence may not have a ceiling, the cost structures and market dynamics make the theory of disruption applicable here.
The competition is intensifying, with companies like Notion reporting that AI costs are eating into their profit margins. Microsoft’s GitHub Copilot is reportedly losing more than $20 per user per month while charging only $10. Even basic AI features are proving to be a financial burden for some companies.
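The Copilot figures cited above make the squeeze concrete: at a flat subscription price, every additional usage drives inference cost with no matching revenue. A minimal sketch, assuming the reported $10/month price and a roughly $30/month inference cost (the level implied by a loss of more than $20 per user):

```python
# Flat-rate AI unit economics, using the figures cited above.
# The $30 inference cost is an assumption implied by the reported >$20 loss.

price_per_user = 10.00           # monthly subscription price
inference_cost_per_user = 30.00  # assumed monthly inference cost per user

margin_per_user = price_per_user - inference_cost_per_user
print(f"Margin per user: ${margin_per_user:,.2f}/month")
# Negative margin: under flat pricing, every additional active user
# deepens the loss rather than offsetting fixed costs.

users = 1_000_000  # hypothetical subscriber count
print(f"Monthly loss at {users:,} users: ${-margin_per_user * users:,.2f}")
```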
As the AI landscape evolves, the challenge for frontier companies will be to maintain their edge while managing costs. The next few years will be critical, as the balance between innovation, cost, and market demand continues to shift rapidly.