Judging by public markets, the future of software looks bleak. Memes abound.
Even optimistic software investors appear wary.
Upfront’s Peter Zakin ‘earnestly love[s] software startups.’ But even he accepts that Agents Will Climb The Ladder, forcing startups either up the ladder ahead of agent capability or down into the infrastructure agents use.
Following his bet on Daytona (“agent sandboxes”), Peter seems more bullish on infra for agents. Vercel’s Guillermo Rauch agrees, encouraging builders to “focus on the API” and “do it for the agents.” These are fine instincts, but they feel conciliatory: “AI will run the world, so make nice infra for it and hopefully the AI sun will shine upon you.”
The ‘infrastructure for agents’ play faces real headwinds. My sense is that these are fundamentally bets on the existence of a long tail of software and agent providers, or on the persistence of Silicon Valley handshake deals. Venture dollars, though, are consolidating into fewer deals, making it hard to see this scaled long-tail future.
Labs want to vertically integrate. They want to make most Claude-useful software free, to lock in a subscription to Claude. Claude probably tends to work better with tools Anthropic controls. The Claude Code harness itself is free. Running OpenClaw with a Kimi subscription is free. Foundation Cap’s Jaya Gupta’s “trillion-dollar” Context Graph didn’t last a month before it leaked that Anthropic was building the same functionality into Claude, likely to be offered for free.
Clever infra-for-agents companies appear to understand their relative positioning: @ashtom’s Entire (“narrative version control”) launched out of the gate as “rebels.” Labs would prefer to own The New GitHub to lock users into their AI; a portable, open CLI tool that works with any agent is rebellious. But as Benedict Evans puts it bluntly:
“No-one wants to give up customer ownership and become somebody else’s dumb API.”
AI is counter-positioned against software
For those who don’t remember, counter-positioning comes from Helmer’s 7 Powers and is defined as
the adoption of a novel, superior business model that incumbents can’t replicate due to the anticipated cannibalization of their existing business
It’s everyone’s favorite power because it’s so dramatic. A business model a competitor can’t replicate without screwing themselves. Look at this violence!
AI is clearly counter-positioned against software. Software companies that want to re-sell cognition pay retail like everyone else; they face worse COGS than the labs. They will get squeezed––see my prior microeconomic derivation showing this. App-layer outcomes pricing will make things worse. Labs will offer software for free to lock people into their cognition. Software businesses don’t readily compete with free.
The labs want to own the interface into intelligence and accrue 90% margins for the trouble. If Claude Code or Codex really make software cheap to make, why shouldn’t the labs make loss-leader software to enrich their inference business?
So if we can’t go up or down the ladder, maybe we shouldn’t bother with the ladder at all.
The labs are counter-positioned against software startups. It’s time startups counter-position back.
The Blockbuster of AI
Counter-positioning’s most famous story is that of Blockbuster v Netflix. @lefttailguy tells the history well.
Netflix executed counter-positioning against many competitors, but their first version––DVD mail order that made Blockbuster’s physical retail footprint deadweight––is the most famous.
This first form is also a playbook for startups and VCs against the tens of billions deployed in the labs.
Blockbuster invested heavily in a retail footprint that left them flat-footed when distribution shifted to mail-order DVDs and eventually streaming. Labs are investing heavily in inference infrastructure to support an API business selling lots of tokens.
what do you think happens when the world realizes AI scaling laws were revenue scaling laws all along?
asked early-Anthropic investor Anjney Midha.
Just like Blockbuster believed more retail footprint meant more sales, the labs and their investors appear to believe more tokens is more good. McKinsey bragged they spent 100 billion tokens with OpenAI.

Does anyone know what this helped their clients ship?
Will Manidis observed a similar phenomenon last week in Tool Shaped Objects:
AI is everywhere in consumption and almost nowhere in output … almost no one has stopped to ask whether the relationship between tokens consumed and value produced is a line, a curve, or a cloud.
How can this be? How can we be consuming so many tokens yet the upshot of this appears almost nowhere?
Netflix rejected Blockbuster’s distribution––the retail store. They offered a superior abstraction: just choose the movies you want to watch and you can. Blockbuster’s late fees made the customer pay for a Blockbuster efficiency problem. Token pricing makes customers pay for an equivalent lab problem.
The labs’ distribution model is the token.
The Token Counter
The counter is to reject the token entirely and sell its abstraction.
Sell an intelligence-employing product. Sell an outcome.
Say an outcome is a trusted, completed job where the vendor takes responsibility for the result and optimizes its costs––like token burn––to maximize margins.
Labs want you to token-maxx. You must token-min––and change the denomination of the product sold entirely.
Labs will not quickly switch to outcomes pricing. Anthropic (and Nvidia!) benefits from fragmented application of Claude. They like that everyone is using Claude Code to build the same software. But this is obviously not efficient.
Stylistically, the lab and counter-position profit-maximization equations look like
- Labs: Profit = Volume × (Price per Token − Cost of Compute)
- Counter-position: Profit = Value of Outcome or Product − (Volume × Price per Token) − Cost of Risk
I find these equations pretty shocking. Irrespective of the value of tokens, labs want to maximize token volume. Counter-positioned outcomes sellers want exactly the opposite: to minimize the subtracted cost term, volume times price per token.
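The opposing incentives fall straight out of the two equations. A toy model makes it concrete (all numbers here are illustrative assumptions, not anyone's actual pricing):

```python
# Toy model of the two profit equations above.
# All figures are made up for illustration: a $3/M-token price,
# $1/M-token compute cost, a $500 outcome, and $50 of risk cost.

def lab_profit(volume: float, price_per_token: float, cost_of_compute: float) -> float:
    """Labs: Profit = Volume x (Price per Token - Cost of Compute).
    Profit scales linearly with token volume, so labs want volume up."""
    return volume * (price_per_token - cost_of_compute)

def outcome_profit(value_of_outcome: float, volume: float,
                   price_per_token: float, cost_of_risk: float) -> float:
    """Counter-position: Profit = Value of Outcome - (Volume x Price per Token) - Cost of Risk.
    Token spend is a cost, so the outcome seller wants volume down."""
    return value_of_outcome - volume * price_per_token - cost_of_risk

# The same job done with 10x fewer tokens: the lab earns less,
# the outcome seller keeps more.
for tokens in (1_000_000, 100_000):
    print(f"{tokens:>9} tokens | lab: ${lab_profit(tokens, 3e-6, 1e-6):.2f} "
          f"| outcome seller: ${outcome_profit(500.0, tokens, 3e-6, 50.0):.2f}")
```

Every token the outcome seller saves is margin it keeps and revenue the lab never sees, which is exactly the structural conflict the essay is pointing at.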
This is the making of a great counter-position!
Before I get too excited, I should note that selling outcomes is typically challenging because of misaligned incentives between seller and buyer. These issues usually arise in B2B2B or B2B2C businesses like Harvey or Sierra. Harvey’s customers want to maximize their own profits, not enable Harvey’s revenue participation. For instance, if it turns out Harvey does the work of an associate, the law firm would prefer to keep any surplus cost savings for the partnership, not send them to Harvey. Harvey doesn’t actually know the value it brings to its customers’ end users, and its customers like it this way. Similarly, Sierra’s customers want to maximize margins, not grow the price paid to Sierra. The cost of customer service should be minimized, particularly as SaaS outcomes begin to commodify.
The only plausible defense and path for the counter-positioner is one of trust: take on a liability-bearing job that customers come to love and trust or are too afraid to rip and replace. This can mean absorbing a risk on your customer’s behalf, aligning your revenue directly to theirs, or embedding infrastructure that profits from efficiency you create on their behalf. Products like
- Waymo’s insured driverless rides
- Sardine’s Chargeback Guarantee
- Shopify making money when its customers make money
- Thrive’s accounting rollup Crete
- Base Power’s grid-optimizing home battery
are all great examples. They deliver a net-new trust-bearing offering, or one where defection creates interruption risk. They specifically optimize for token efficiency and customer value.
To Will’s concluding Tool-Shaped Objects question last week:
Ask what the number is before making it go up
outcomes sellers––as you can see directly from their profit-maximization equation––are specifically making the value of their outcome go up, not a number with no clear connection to value for anyone. They’re able to do this because they have a direct relationship with their end user, unlike today’s B2B2B app-layer sellers.
Irrespective of their value, labs are incentivized to make token burn go up. Their business thrives on Tool-Shaped Objects. On delivering more Blockbuster stores. On increasing late fees.
Blockbuster did eventually copy Netflix, but it’s just very hard corporate politics to champion an idea that will both lose money on its own and kill the pricing power of the company’s existing cash cow business.
explained The Diff. Dario is betting the farm on tokens. He believes the large labs will end up in a Cournot equilibrium: they’ll end up looking like oil producers, with margins protected by competitors’ rational limits on supply.
But incredible open models launch near the frontier every week, and the labs’ past models compete with any Final Model for applications that don’t need a Nation of PhDs. Cournot works when a fixed number of producers make largely undifferentiated products; open models make this impossible. Intelligence may well be too cheap to meter.
The labs are building Blockbuster stores and handing out plaques for getting the most late fees. The opportunity is to reject token-revenue scaling laws. Stop counting tokens.
Deliver outcomes to customers and do it at scale before the labs figure out they should too.