Silicon Valley will not stop talking about outcome-based pricing.

Better Tomorrow Ventures’ Sheel Mohnot explained that SaaS isn’t dead––it’s really that “per-seat pricing is dead.”

I have strong conviction that this thesis is exactly wrong, and in this post I’ll explain why.

The Origins of Outcomes-Based Pricing

Bret Taylor’s Sierra is the poster child for outcomes-based pricing.

Today, AI agents executing processes autonomously enable an entirely new pricing model, where you pay only when the software achieves specific, valuable outcomes.

Bret explains on Sequoia’s Training Data:

For our median customer that typically means when the AI agent resolves the issue for the customer autonomously there’s a pre-negotiated rate for that and if we do have to escalate to a person it’s free. We do that just to align with the business model of our customers.

Benchmark’s Sarah Tavel likely started the trend with her August 2023 essay, Sell work not services:

When you sell work, the sales cycle is different, it’s priced relative to the cost of a human performing the work instead of as a productivity improver

Alex Rampell continues the theme at A16Z’s LP Summit, sizing the outcomes-based pricing opportunity relative to the human labor cost it could supplant.

The price of outcomes

All of this narrative bluster glosses over how the heck outcomes prices form.

🤔 Do we expect Silicon Valley types to be good at pricing?

Bill Gurley

If there was a scale of financial sophistication between one and 10 and you would say a really smart person in New York is an 8.5, the average Silicon Valley person on financial literacy is a 2

would seem to think not. Equal Ventures’ Rick Zullo, adding

most VCs have ZERO understanding/respect for even the most basic concepts in finance

isn’t optimistic either.

Are 2/10 finance folks going to be good at pricing?

For Sierra, outcomes prices are “pre-negotiated.” For Tavel and Rampell, it’s back-of-the-envelope sized relative to the human cost.

Neither approach will scale; both will collapse.

Outcomes prices will not be negotiated

There are two reasons why outcomes prices will not be negotiated:

  1. Downplaying (lying about!) outcome value is strictly profit-maximizing
  2. Price negotiations don’t scale across outcomes types

It pays to lie (and that’s the problem!!)

If you’re talking to a vendor about buying outcomes, would you tell them how much these outcomes are worth to you?

If you were honest and shared the true expected value –– for instance, you’ve estimated how much value a successful customer service request brings to your business on average –– then your service provider would be in a position to price the outcome at exactly what you told them. This would eat all your economic surplus from the engagement.

So why would you be honest?

Are legal sharks really telling Harvey how much time the software is saving them so that Harvey can upcharge them? Are app-layer customers now charity organizations for Silicon Valley startups?

The agent-outcome seller does not know how much an outcome is worth to you. That value is private information to your firm. The agent-outcome provider offers no incentive for honesty either––it only costs you more (in higher outcomes fees!) to be truthful.

Be honest and you’ll make less profit.

Formally, it’s in your –– the profit-maximizing firm’s –– interest to exploit the information asymmetry between the firm and the agent-outcome seller. The less the seller knows about how much an outcome is worth to you, the more money you can make.
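The asymmetry argument above can be made concrete with a toy sketch. Assume, purely for illustration, that the agent-outcome seller prices each outcome at whatever value the buyer reports (a simplifying assumption, not any vendor’s actual mechanism); the function names and dollar figures are hypothetical.

```python
# Toy model of the information-asymmetry argument: if the seller
# prices the outcome at the buyer's reported value, an honest buyer
# surrenders all of the surplus, while an understating buyer keeps it.

def buyer_surplus(true_value: float, reported_value: float) -> float:
    """Surplus the buyer keeps when the seller prices at the reported
    value (capped at the true value -- the buyer never overpays)."""
    price = min(reported_value, true_value)
    return true_value - price

# Honest buyer: report the true $50 value, keep $0 of surplus.
assert buyer_surplus(true_value=50.0, reported_value=50.0) == 0.0

# Downplaying buyer: report $20, keep $30 of surplus per outcome.
assert buyer_surplus(true_value=50.0, reported_value=20.0) == 30.0
```

Truth-telling is weakly dominated: there is no report above the true value that helps the buyer, and every report below it strictly helps.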

The agent-outcome seller has no leverage: outcome values are likely to be mostly case-dependent. Even if the seller learns about outcome values in a given vertical, they’ve no way to force you to reveal values for your firm, and you have plenty of options as you’re

being pitched by 20 AI vendors every single day. And they all sound the same. They literally all sound the same.

Bret Taylor explained 5 months ago.

The market structure and information asymmetry both favor the buyer. Outcome value isn’t incentivized to make itself known, and if you try to force a buyer to reveal it, they have plenty of your competition to choose from.

Today’s outcomes prices are formed on friendly Silicon Valley handshakes. But AGI will obviously not scale on handshakes.

Outcomes should be variably priced

Second, even if this information asymmetry problem were resolved, the truth is that the value of outcomes is actually dynamic. To the extent agent-outcome buyer and seller are truly interested in mutually aligned incentives, the price should actually be outcome and instance specific. That won’t soon be resolved on a human-negotiated (or even Crosby-negotiated!) order form.

What’s curious about Sierra’s strategy is that in their exposition of outcomes pricing they cite ad-tech as the principal case study for outcome-based pricing:

Evolution to Outcomes Pricing

It’s a curious (and revealing!) choice for two reasons:

  1. Ad-tech sells scarce attention; intelligence is too cheap to meter
  2. Ad prices are formed via systematic auctions, not bespoke negotiations

Sierra’s conceit, or (short-term) arb, is that well-packaged intelligence will be scarce. It’s certainly a surprising and confusing take for an OpenAI board member.

The AI-pilled will insist that intelligence too cheap to meter will not be scarce. So if outcome pricing were to follow ad-tech fully, outcomes would be auctioned, with prices collapsing to the cost of delivering the intelligence plus the cost of indemnifying mistakes.
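The price-collapse claim follows from standard auction logic. A minimal sketch, assuming interchangeable vendors bidding to deliver the same outcome in a second-price (Vickrey-style) reverse auction; the vendor count and cost figures are illustrative, not drawn from any real marketplace:

```python
# Second-price reverse auction: the lowest bidder wins the outcome
# and is paid the second-lowest bid. With many near-identical vendors,
# competition pushes bids -- and hence the clearing price -- toward each
# vendor's cost of delivering intelligence plus indemnifying mistakes.

def clearing_price(bids: list[float]) -> float:
    """Winning payment in a second-price reverse auction."""
    return sorted(bids)[1]

# 20 interchangeable vendors whose per-outcome cost is ~$1.00;
# bids cluster just above cost, nowhere near the $50 of human labor
# the outcome replaces.
bids = [1.00 + 0.01 * i for i in range(20)]
print(clearing_price(bids))  # ≈ 1.01
```

The buyer pays roughly the marginal cost of the outcome, not its value — which is exactly why value-anchored outcomes pricing cannot survive genuine competition.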

This leads to the second problem: contrary to Rampell, the two-ply (second-order) cost comparison for AI pricing is not the human labor it replaces, but the AI labor that will compete next.

Robots will compete risk-free outcomes to zero

Tavel and Rampell suggest that the size of the pie for candidate outcomes-sellers is human labor.

That may well be true for now, but as soon as you apply obvious second-order effects

So robots take our jobs then another robot competes to take that robot’s job

the candidate fruits from outcomes pricing immediately deflate. This is not a far-off second order effect either: the labs are already moving up the stack.

With labs offering generalized agent harnesses for free––Claude Code and Codex only monetize compute!––and AI continuing to accelerate, it’s unclear what the value-capture opportunity is for VC-backed startups hoping to durably capture human-labor costs and exit in 5-9 years.

Human labor is not the benchmark. AI competition is!

Dumb money

As I wrote in July, you want to do exactly the opposite of what monetization experts are saying.

You want Dumb Money: to sell things whose consumption is orthogonal to intelligence or whose production is inherently humanistic. These are:

  • creative tools
  • services or protocols that sell trust
  • plays with unfathomably large entrance costs
  • services that monetize high-entropy context flows
  • sticky attention
  • commerce or payments
  • vertically integrated Hayekian revenge

For all of these, humorously, seat-based pricing may work very well. Software for creatives or consumers will likely keep seat-based pricing because humans using it is the point!

Where does value accrue when robots compete?

I previously sketched Outcomes Protocol, a mechanism to

  • price AI work
  • optimally allocate intelligence based on AI risk.

The idea is simple:

spend more on intelligence up to the point that it no longer marginally saves you on the cost of mistakes

People (or AI!) will disagree about this risk, which is why it’s a market solution, not a result of bespoke negotiation with a single player.
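The stopping rule quoted above can be sketched numerically. This is a toy version, not the Outcomes Protocol itself: the mistake-cost curve is an assumption chosen for illustration, and real curves would be estimated (and disagreed about) by the market.

```python
# Toy version of the rule: keep buying intelligence while each extra
# dollar of spend saves more than a dollar in expected mistake costs.
# The hyperbolic cost curve below is a hypothetical assumption.

def expected_mistake_cost(spend: float, base: float = 100.0) -> float:
    """Expected cost of mistakes, falling as intelligence spend rises."""
    return base / (1.0 + spend)

def optimal_spend(base: float = 100.0, step: float = 0.01) -> float:
    spend = 0.0
    # Stop when a marginal dollar of intelligence no longer saves
    # at least a dollar of expected mistake cost.
    while (expected_mistake_cost(spend, base)
           - expected_mistake_cost(spend + step, base)) > step:
        spend += step
    return round(spend, 2)

print(optimal_spend())  # stops near sqrt(base) - 1 = 9.0
```

For this curve the analytic optimum of spend + base/(1 + spend) is at spend = sqrt(base) − 1, and the greedy loop lands there; different risk estimates shift the curve, which is precisely what a market would price.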

Outcomes-pricing has risen to Silicon Valley meme status with no theoretical motivation.

SaaS companies selling outcomes are not going to exit to public markets in 5-9 years.

The price of outcomes-pricing is the commoditization of traditional SaaS itself.

Non-traditional SaaS, however, is another story.