This technology cycle has taken a particular interest in economist Ronald Coase.
He’s been in the discourse for his near century-old essay “The Nature of the Firm,” which answers why firms exist and why they grow. His view: firms exist to minimize transaction costs. They grow larger whenever hiring is cheaper than transacting in the market, and shrink when the reverse is true.
His later writing on monopolies and durability has gone without much attention––it’s the subject of this article.
While some investors––recently D1’s Dan Sundheim––believe lab economics are settled, I’m of the opposite opinion. Two weeks ago I wrote that the labs resemble Blockbuster, not Netflix as Dan suggests. Labs optimize for token burn, which has no clear value to end-users, rather than for value delivered to end-users, which Netflix optimized to tremendous success.
So this week I take up the point directly. I attempt to provide a setting––Coase’s setting––where the AI lab is best positioned to accrue durable value. Forget oligopolies and Cournot equilibria––suppose there’s only Claude. Suppose Anthropic has a monopoly, giving it the best possible shot to absorb high margin inference spend.
In a most favorable case where an AI lab––let’s take Anthropic for notational convenience––has no competition, would Anthropic get to charge monopoly prices? Would it own a large market?
Over the weekend I collaborated with Claude Code on this question. The rest of this article summarizes what we found, but you can read the paper with its complete technical results here.
The Coase Conjecture
In 1972 Coase posed a simple question: if a monopolist owns all the land in the world (assumed homogeneous in kind and quality), at what price does he sell it?
Well he’s a monopolist––at whatever price he wants!
This seems like the answer most lab investors would give.
Coase’s argument is interesting and simple. Normally a monopolist would set quantity sold where marginal revenue equals marginal cost. For convenience, let’s say marginal cost is zero.
Once the monopolist landowner has sold a bit of land, he notices the remaining land is still available but not monetized. Maybe he should sell a bit more: it’d generate pure profit! To do that, however, he’d have to lower the price to meet the remaining demand at what it’s willing to pay.
Doing this annoys the original buyers.
The land is now worth less than what they paid. Eventually, however, the market catches on. Candidate buyers know the monopolist can’t resist selling more land (marginal cost of selling is zero!) and so they wait.
While the monopolist technically has no competitors, he ends up with one he didn’t expect––his future self. In situations like this, the market can guess a monopolist’s future behavior, so it holds out waiting for the “future self” monopolist to depress his own prices.
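The unraveling can be made concrete with a toy sketch (my construction, not from Coase’s paper): assume buyer valuations uniform on [0, 1], zero marginal cost, and a monopolist who myopically reprices each period against whatever demand remains. Even before adding strategic waiting, the price halves every round.

```python
# Toy Coase unraveling (illustrative assumptions: valuations uniform on
# [0, 1], zero marginal cost, myopic repricing against residual demand).
def coase_prices(periods: int = 5) -> list[float]:
    ceiling = 1.0  # highest valuation not yet served
    prices = []
    for _ in range(periods):
        # Residual demand at price p has mass (ceiling - p), so revenue
        # p * (ceiling - p) is maximized at p = ceiling / 2.
        p = ceiling / 2
        prices.append(p)
        ceiling = p  # everyone valuing the land above p has now bought
    return prices

print(coase_prices())  # [0.5, 0.25, 0.125, 0.0625, 0.03125]
```

Strategic buyers make this worse for the monopolist: anticipating the decline, they wait, and in the limit even the opening price collapses toward marginal cost.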
AI models aren’t land
Of course, AI models are not land.
When a lab sells models, it does not sell the weights, but rather an instance of inference for a given set of inputs. Ostensibly, labs aren’t selling durable goods.
On the other hand, if you suppose that people buy inference to the end of creating durable goods (which themselves cost nothing at the margin to sell)––software, fine-tuned models, RL environments, etc.––durability returns.
It’s true that applications, like customer service or fraud detection, could require inference at the time of service, but I assume this away, supposing that all inference is used to create durable goods that run at negligible marginal cost.
This doesn’t seem controversial. In 2024, for instance, Sierra’s Bret Taylor shared that Sierra uses a composition of models to provide its AI customer service. It’s unlikely a t-shirt return running on restrictive scaffolding needs Claude Opus 4.6––it can probably use dumber, near-zero-marginal-cost models. In this way, I suppose Sierra, likely built with AI coding tools, is itself a durable good serving up a consumable product at near-zero marginal cost.
So let’s take this as given––labs aren’t selling tokens but rather durable goods that sequences of prompts (with harnesses) create. Let’s also suppose that all models in any customer’s consideration are capable of producing a durable good consistent with any relevant intention. Frontier models may require less prompting, while less capable models might require more. Models are competing on price, not absolute capability.
Of course, in cases where frontier models have capabilities absolutely unmatched by open source, monopoly power reigns. But that doesn’t seem to be happening: open source is on average about six months behind the frontier, a gap that has closed quickly over the past two years. This assumption lets us evaluate the market structure when users have a choice––and increasingly it feels like they do. We consistently hear about the capability overhang of past models––let’s suppose that overhang is real and addressed by open-source laggards.
At first glance it could appear that Coase implies that frontier labs can’t sustain monopoly prices because they can’t resist selling more and more inference at what end up being lower prices. This, of course, is incomplete in that every inference customer can choose to buy inference from cheaper open source models. It turns out the existence of open-source alternatives protects the monopolist’s pricing power by giving customers a reason to exit the frontier market rather than wait for discounts.
Coase fails with Outside Options
In 2014, UCLA economists Simon Board and Marek Pycia published a challenge to the Coase Conjecture in the prestigious American Economic Review. In cases where buyers have an Outside Option, meaning they can defect from the monopolist’s market and buy some alternative, the Coasian unraveling doesn’t happen. The monopolist can sustain the monopoly price indefinitely.
Empirically, this appears to be happening in the inference market. The labs––or in our case, “monopolist Anthropic”––face open-source Outside Options. While labs continue to post model improvements and price reductions, near-frontier models like GLM 4.7 are a tenth the price of Opus. Yet Opus appears unaffected. Will Dario, consistent with Sundheim, just end up charging 90 percent margins?
Intuition for the result will help. In the original Coase setting, the monopolist’s problem is that buyers who don’t want to pay today wait. The monopolist knows they’re waiting and eventually caves. But with an outside option, buyers don’t have to wait. They defect to the outside option, removing the price pressure. The buyers who remain in the market are those who are willing to pay monopoly prices.
Initially we might imagine that the existence of open source would add price pressures to the monopolist. It does affect the monopolist: some candidate buyers exit the market.
But it also solves the monopolist’s commitment problem. Effectively, the outside option is a self-selection device that relieves the monopolist from price-sensitive waiters who’d pressure prices downward over time. The monopolist loses some customers but gets to keep pricing power. This is broadly what we see today––Board and Pycia say this is an equilibrium.
There are clear extensions to this setting in inference markets. Suppose you’re considering developing new software using AI: for you, waiting for Anthropic to lower prices could prove costly. A competitor who pays full price today could lock in customers before you enter the market. This dynamic is likely what explains today’s inference market structure: buyers would prefer to pay full price or defect to Minimax M2.5 or GLM 4.7 today than wait and let competitors eat their lunch.
The other extension, of course, is that Outside Options keep getting better. Open-source models are improving every quarter: a buyer who defects today to a mediocre alternative might instead have waited a quarter for a better one––returning us to the original Coase setting.
Frontier model durability
Where this is going should seem pretty obvious.
The market structure is downstream of
- how costly it is for buyers to wait
- how fast open-source models improve
The monopolist is best positioned when it’s perceived as expensive to wait and open source models improve slowly. This is doubly favorable for the monopolist as waiting provides little upside––the outside alternative isn’t improving fast enough to be worth waiting for.
On the other hand, if monopoly buyers can and feel incented to wait––because open source is moving so quickly––Coase rules and monopoly pricing power unravels. The required speed of open source improvement to dismantle monopoly pricing power reduces to a function of frontier customers’ perceived cost of waiting.
This required speed is low. In the linked paper, Claude (and I) show the critical growth rate is (1 − δ)/δ, where δ is the buyer’s per-period discount factor. For firms making quarterly decisions that discount future value by, say, 5% per quarter (as time preference, cost of capital, or risk of a competitor moving first), this works out to about 5% improvement in open source per quarter.
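Checking that arithmetic, with δ = 0.95 for a 5% quarterly discount:

```python
# Critical open-source improvement rate (1 - delta) / delta from the
# linked paper; delta is the buyer's per-period discount factor.
def critical_growth_rate(delta: float) -> float:
    return (1 - delta) / delta

print(round(critical_growth_rate(0.95), 4))  # 0.0526, about 5% per quarter
```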
Illusory impatience
Today, many firms appear to believe the cost of waiting is very high.
Vertical AI companies have rushed to market, hoping that early traction (and VC king-making) could lock in critical leads, particularly for slow moving or high-trust applications like medicine or law.
This apparent high perceived cost of waiting in the application layer may be illusory. In AI legal, for instance, it is starting to appear that “just using Claude” may be more effective (and cheaper!) than buying from the application layer. So while monopolist customers may initially seem impatient, perceiving that waiting forfeits a first-mover advantage, durability revelations later in app-layer (monopolist customer) development cycles could make buyers more patient and able to wait, inducing the Coasian unraveling.
You can win on price but lose the market
Suppose now that the monopolist wins on all counts: open source improvement is slow enough that buyers don’t bother to wait. Open-source capability might even plateau. The Board and Pycia result holds and the monopolist charges its optimal price at equilibrium.
Is our beloved monopolist now safe?
So far we’ve only discussed pricing power, but what about market capture? Even if the monopolist preserves its pricing power, it could be that so much of the market defects to the Outside Option that pricing power is practically irrelevant.
Consider the buyer’s problem. The inference buyer only pays the monopolist’s premium if the frontier model offers enough additional value over the open-source alternative to justify the price. As open source closes the gap, it shrinks the set of buyers for whom the frontier premium is worth paying. These dynamics compound: a shrinking premium (reflecting the lower marginal benefit of frontier over the Outside Option) multiplied by a shrinking customer base means the monopolist’s total revenue erodes faster than the capability gap closes.
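A minimal sketch of the compounding, under my toy parameterization (not from the paper): buyer θ ~ U[0, 1] values the frontier at θ × gap above the open-source alternative, and the monopolist holds its premium fixed as the gap closes. Pricing power is preserved, yet the buyer base, and with it revenue, collapses, hitting zero once the gap falls to the premium.

```python
# Toy model: frontier revenue at a fixed premium as the capability gap
# closes. Buyer theta in U[0, 1] gains theta * gap over open source,
# so the buying share at premium p is 1 - p / gap (when gap > p).
def frontier_revenue(gap: float, premium: float) -> float:
    if gap <= premium:
        return 0.0  # no buyer's incremental value covers the premium
    share = 1 - premium / gap  # buyers with theta * gap >= premium
    return premium * share

for gap in (1.0, 0.75, 0.5, 0.3, 0.25):
    print(gap, frontier_revenue(gap, premium=0.25))
```

In this sketch, halving the gap from 1.0 to 0.5 cuts revenue by a third, but the next halving wipes it out entirely: revenue erodes faster than the gap closes once the gap nears the premium.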
Of course, this argument depends on inference buyers actually connecting their buying decisions to value actually delivered.
The market may not be doing this today, with many preferring to build Tool Shaped Objects. In fairness, model capabilities are jagged, and it’s a reasonable strategy for firms to keep buying frontier, irrespective of underlying value proposition, while the technology matures. On the other hand, as the technology matures and firms begin to connect their inference consumption to value delivered, demand shifts from “just buy the best” to “maximize margins,” or “buy what’s worth paying for.” In this world, the monopolist’s value proposition reduces to its incremental value over the Outside Option. And that shrinks as open source improves.
What are 90% margins worth anyway?
Anthropic isn’t Netflix. Netflix competed for attention; it was not an input to another margin-hungry business process.
Sundheim may well be right on margins, but wrong about what they imply. The Board and Pycia result shows that a frontier lab could very well maintain monopoly pricing even in the face of open-source competition. Their margins could very well be 90%! Of course, high margins on a shrinking revenue base aren’t much to celebrate.
Extending Board and Pycia, Claude and I affirmed:
- monopoly pricing power is threatened by fast open-source development
- even with monopoly pricing power preserved, revenue collapses as capability gap closes and the market matures
- the app layer’s perception of high internal competition may have artificially protected lab pricing power
- if you stop training models and let the capability gap close, monopolist revenue could collapse
And now, removing our simplifying assumption: the frontier market is not a monopoly. It’s some form of oligopoly, yet likely not ultimately Cournot. Competition among frontier labs only strengthens these results.
> There was a thesis when we first invested that APIs that are in the business of having other developers plug into your model would be commoditized. … It would be a race to the bottom. I think that debate is more or less irrelevant.
>
> If you look at the underlying margins of these companies, they are not the margins that you see in a commodified industry. The gross margins are quite high.
Dan Sundheim explained to Patrick last week. He’s right that the commodity thesis was too simple. But “margins are high, therefore the business is good” doesn’t follow either. Board and Pycia explain why margins are high: outside options siphon off the price-sensitive buyers. High margins are an artifact of the Coase selection mechanism, not evidence of a durable business.
The labs clearly can charge, and are charging, high margins today. That’s not the question. The question is whether they will still be charging them in three years.
If open source keeps closing the gap, the answer from Board and Pycia––and from Ronald Coase––is probably not.