Summary: AI commoditizes intelligence, forcing profits to migrate to asset-heavy Cybernetic Rollups that own context at the edge.

I started this blog five months ago to combat a venture narrative I’m now convinced is wrong.

The narrative: That AI’s winners will be middleware SaaS companies––agentic platforms, cognition resellers, outcomes-pricing vendors––selling intelligence to businesses that own outcomes. Effectively installing brains in another’s body. It’s been posed by venture’s best and funded accordingly.

The narrative is popular and structurally flawed.

I developed this argument across earlier posts, drawing on public-market signals and principles from economics, physics, and philosophy:

Microeconomics. The Smart Squeeze: Application-layer margins collapse as models improve and token demand explodes.

Pricing. The price of outcomes: App-layer outcome pricing fails because of fatal incentive misalignments.

Fidelity. Illiquid Dark Matter: Portable context rots and loses fidelity when extracted; maximal value lives in signals only available at the origin node.

Physics. The Bitter Lesson and Hayek’s Revenge: Energy, latency, and willingness-to-pay force intelligence to the edge.

Public Markets. Hard Mode: SaaS vanishes when Claude can generate its own app in 30 hours; only vertically integrated atoms remain.

Liability. Autonomous Commerce: Automation without owning outcomes is theater.

My time building Crosshatch––a “Plaid for Personalized AI”––turned out to be an empirical test of these principles. We raised money and shipped product to test: could a generous but foreign promise of “hyperpersonalization anywhere” overcome market incentives and physics?

It could not.

Instead, the optimal commercial path is a Cybernetic Rollup: own the physical nodes where context is born, deploy intelligence to the edge to program it, and convert capital intensity into a data flywheel no software company can match.

But why is this moment any different from previous eras of business, and what makes the Cybernetic Rollup the end state?

A COGS in the machine

Why is this time different?

SaaS has been around for a long time. What makes this era of AI SaaS different from previous ones?

The difference is pricing.

In previous iterations of SaaS, businesses sold seats or firm-wide licenses. The value of the offering was fuzzy, denominated in purchase orders and requisition requests. The mapping from cost to business value was never (easily) modeled; the value was vouched for by the reputations of the product or engineering leaders who requested the software.

In this new era, software costs are denominated in outcomes. They are line items that roll up directly to the CFO.

“The average customer service resolution cost us $X? How much did it drive in incremental revenue?”

is a query Claude Code could readily answer.

“You mentioned another vendor is offering outcomes for less? Could we go with them? Aren’t they also guaranteed?”

a CFO might continue.
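That analysis is now a trivial script. A minimal sketch of the arithmetic involved, assuming a hypothetical per-resolution export (every name and number below is illustrative):

```python
import pandas as pd

# Hypothetical export: one row per AI-handled support resolution, with the
# token cost billed and the revenue later attributed to it (toy numbers).
resolutions = pd.DataFrame({
    "ticket_id": [101, 102, 103, 104],
    "cost_usd": [0.42, 0.38, 0.51, 0.45],
    "attributed_revenue_usd": [12.0, 0.0, 31.5, 8.0],
})

avg_cost = resolutions["cost_usd"].mean()
avg_revenue = resolutions["attributed_revenue_usd"].mean()

print(f"Average cost per resolution: ${avg_cost:.2f}")
print(f"Average incremental revenue: ${avg_revenue:.2f}")
print(f"Return per dollar of resolution spend: {avg_revenue / avg_cost:.1f}x")
```

When the CFO can rerun this comparison against any competing vendor’s quote, the outcome itself becomes the commodity.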

New AI SaaS is a literal cog in the machine. As I showed in The Smart Squeeze, taking the quantity of intelligence sold to its limit (an apparent dream of most app-layer investors!), SaaS margin is bounded from above by the marginal value of the wrapper at that level of intelligence consumption. Is the value software delivers at 10M tokens as high as it was at 100,000? Surely not.
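One way to formalize the bound (my notation, not The Smart Squeeze’s): let $q$ be tokens consumed, $V(q)$ the gross value the wrapper adds on top of raw intelligence, $p(q)$ the per-token price the wrapper can sustain, and $c$ the per-token cost of the underlying model. Competition caps price at the wrapper’s marginal value, so

$$ m(q) \;=\; p(q) - c \;\le\; V'(q) - c. $$

If the wrapper’s marginal value $V'(q)$ diminishes as consumption grows, the margin is squeezed toward zero precisely where token demand explodes.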

Claude Code users are huge consumers of tokens, which makes Claude Code a good empirical measure of an app at this limit. And in this empirical limit case, the market currently prices the marginal value of the wrapper at zero: the Claude Code wrapper itself is free, while OpenAI’s Codex CLI is open-sourced under Apache 2.0.

The asymptotic case is already observable, and it matters because that’s exactly where venture narratives want app-layer token consumption to go:

The trillion dollar opportunity in enterprise software is AI Agents

said Box’s Aaron Levie in July. In the asymptotic case, AI models are expressly counterpositioned against software: they ship with a business model software can’t mimic, because mimicking it would damage software’s own. The only paths for the app layer are to host its own models or to not sell very much intelligence (to be “Dumb,” as I put it in July).

The former appears to be an effective path. Cursor is applying it. It’s consistent with “Hayek’s Revenge” – where intelligence is specialized to the particular circumstances ‘of time and place’ of the app.

Selling outcomes instead of reselling cognition doesn’t help the app layer either. As I illustrated in The price of outcomes, vendor incentives are doubly misaligned with customers’: vendors want to grow ACV and to use visibility into their customers’ context to improve their service (likely in order to serve those customers’ competitors!), while customers want neither.
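The double misalignment can be written down directly (again, my notation): the vendor maximizes contract value plus the spillover value of pooled customer context, while each customer wants both terms minimized:

$$ \text{vendor: } \max \sum_i \mathrm{ACV}_i + \gamma\,\Phi\Big(\bigcup_i \mathrm{ctx}_i\Big) \qquad\qquad \text{customer } i\text{: } \min \mathrm{ACV}_i \ \text{ and } \ \gamma = 0 $$

Here $\Phi$ is the value the vendor can extract from pooled context (including by serving customer $i$’s competitors) and $\gamma$ its freedom to do so. The vendor wants both terms large; the customer wants both small.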

So if

  • microeconomics flattens cognition resellers and blocks their outcomes pricing
  • physics and economics make context extraction ineffective and irrational
  • automation without liability is just theater

what is the strategy?

The incentive misalignment around pricing instead reveals the competitive value of access to context as the final moat. As I’ll argue next, and worse for the venture SaaS narratives, context cannot be intermediated.

“I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.”

-Frankenstein

Frankenstein is what happens when you install intelligence into a body you don’t own.

The lesson for AI SaaS is clear.

The Coordination Constraint

The entire AI venture narrative––“sell work, not services!” and really the grand trajectory of Silicon Valley itself––rests on the assumption that you can intermediate complex coordination: that you can extract the “thinking” from a business and sell it back as a service, leaving the “doing” (trucks, warehouses, liability) to someone else.

The AI era is proving this assumption false. Ronald Coase’s 1937 “The Nature of the Firm” returns with a vengeance: coordination is bounded from above by the legibility of context.

But context doesn’t want to be legible. The laws of physics and economics fight its legibility:

  • context rots when it’s extracted
  • revealing context leaks expensive alpha and allows price discrimination
  • latency and willingness to pay push compute to where context is born

This is a problem because:

  • you cannot coordinate what you do not fully know
  • you cannot fully know what you do not own
  • you cannot own what you don’t bear liability for

This makes middleware structurally myopic, expensive, misaligned with customers’ incentives, and vulnerable to the labs’.

The Law of Profit Migration

We’re entering the final stage of Aggregators, but aggregators will take a new form.

Formally, aggregators, following Stratechery, have

  • direct relationships with users
  • zero marginal cost for serving users
  • demand-driven multi-sided networks with decreasing acquisition costs

Aggregation Theory grew out of Clay Christensen’s Law of Conservation of Attractive Profits:

The law states that when modularity and commoditization cause attractive profits to disappear at one stage in the value chain, the opportunity to earn attractive profits with proprietary products will usually emerge at an adjacent stage.

This migration of modularity describes technology’s biggest platform shifts.

[Figure: Stratechery’s decade-old illustration of the Law of Conservation of Modularity, modularizing property, cars, and content.]

This age of intelligence introduces a meta-modularization that inverts the past patterns of integration.

The Final Aggregator

When a layer is modularized, profits migrate to an adjacent stage. In the AI era, intelligence is being modularized. AI is an ‘anything-to-anything’ converter. With intelligence modularized, profits migrate to trust and demand aggregation.

[Figure: AI inverts classical modularization, modularizing intelligence while integrating trust (owned liability) and distribution of customers and Dark Matter context.]

Put another way, since

  • economic coordination is bound by context legibility
  • complete context legibility is only really available for assets you own
  • you can’t automate what you don’t bear liability for

we end up with high-profit aggregators that modularize process and judgment while integrating trust and customer relationships.

It is an inversion of past aggregation because in the past, platforms modularized supply without operating it. Airbnb doesn’t clean the sheets. Uber doesn’t drive the car. But to deliver autonomy (and capture rents from doing so), you must assume operational control of the assets.

You can see early signs of this emerging in Autonomous Commerce, where autonomous vehicle services own the cars and all the data in their fleets. No data is shared with the base-vehicle OEMs. Even when Avis serves as Waymo’s fleet-management partner, or Lyft does the same in Nashville, Waymo still operates an “owned-and-operated Waymo One service”. That is, even with third-party-managed fleets, Waymo remains the operator responsible for passenger safety, insurance, compliance, and liability. Fleet partners are contracted service providers who may own the assets but not the operations or the context.

But in white-collar labor, AI models can’t deliver guaranteed outcomes. AI cannot operate with liability. Unlike Waymo One, AI doesn’t come with vertically integrated intelligent sensor and actuator kits that confer rights to context streams and the associated commercial surplus.

Coase’s Revenge: The Market vs The Firm

I previously argued in Outcomes Protocol that prediction markets could commoditize trusted AI applications.

From this vantage point, however, it seems the AI rollup has the winning incentive structure over any decentralized form. Markets form downstream of signal, and as mechanism design suggests, information monopolists extract maximum rent by limiting access to context, not by democratizing it.

The Cybernetic Rollup has no incentive to leak its legibilized Dark Matter to a public protocol, as doing so would commoditize the uncertainty it positioned itself to arbitrage. To capture the liability and context aggregation premium, you have to internalize the context and the risk.

So if you want to pull together the surplus from AI, you need to build companies in a new way.

The Cybernetic Rollup is the only entity that can satisfy Christensen’s law and the Coordination Constraint simultaneously: it owns the un-modularizable stage of trust, liability, and Dark Matter context, and it converts capital intensity into a data flywheel that no software company can match.

The Cybernetic Rollup

If you can’t install a brain into a foreign body, you must assume the body yourself.

In October, Byrne Hobart at The Diff posed how AI––as an “intelligent switchboard”––could take Aggregation Theory to its limits and deliver more efficient commerce:

This intelligent switchboard turns aggregated workflow intent into a dynamic economic inefficiency index that is programmatically extensible/legible to the growing universe of first party and third party, increasingly AI-powered applications purpose built to solve specific problems end-to-end. Crucially, this system will be maximally conducive to self-improvement via RL: … it optimizes around which tool was most effective at turning workflow intent/problems into solutions. Outcome signals drive better routing, which attracts better tools, which improves outcomes in a compounding loop.

Hobart completed this observation in November, noting that value for this switchboard will accrue to the vertically integrated sensor rollup.

Hobart argues for combining

  • owned sensors that legibilize Dark Matter––signals of the supply and demand of everything
  • general intelligence to process these signals
  • compute to power the intelligence

and, most importantly, for vertically integrating all three, so the winner can use success in each area to subsidize the limiting factor at any moment.

This is what will allow [the AGI winner] to execute across these axes of AGI without relying on elements outside of their control—and that certainty means that they can put a lower discount rate on investments that further entrench them. The alternative is to be an economic captive of whoever controls whatever the critical complementary product happens to be.

he concludes.
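Mechanically, the compounding loop in Hobart’s switchboard is a bandit over tools: route workflow intent to a tool, observe the outcome, update the routing. A minimal epsilon-greedy sketch, with every name hypothetical:

```python
import random
from collections import defaultdict

class SwitchboardRouter:
    """Routes workflow intents to tools and learns from outcome signals."""

    def __init__(self, tools, epsilon=0.1):
        self.tools = tools
        self.epsilon = epsilon          # exploration rate
        self.successes = defaultdict(float)
        self.attempts = defaultdict(int)

    def route(self, intent):
        # Explore occasionally; otherwise exploit the best-known tool.
        if random.random() < self.epsilon:
            return random.choice(self.tools)
        return max(self.tools, key=lambda t: self.successes[(intent, t)]
                   / max(self.attempts[(intent, t)], 1))

    def record_outcome(self, intent, tool, success):
        # Outcome signals drive better routing: the compounding loop.
        self.attempts[(intent, tool)] += 1
        self.successes[(intent, tool)] += float(success)

router = SwitchboardRouter(["invoice_bot", "generalist_llm"])
for _ in range(1000):
    tool = router.route("reconcile_invoices")
    outcome = (tool == "invoice_bot")   # stand-in for a real outcome signal
    router.record_outcome("reconcile_invoices", tool, outcome)
print(dict(router.attempts))            # traffic concentrates on the winner
```

The routing logic is commodity code; the outcome signal is not. Whoever owns the sensors that produce it owns the loop.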

I largely agree with Hobart, though I believe he is too centralization-pilled. Hayek’s famous “The Use of Knowledge in Society” cleanly implies cases where intelligence is best deployed at the edge. You don’t have to be a hyperscaler to own your own compute destiny.

This vertically integrated sensor rollup is clearly a Cybernetic Rollup: it combines an array of sensors and owned or contracted actuators with a unified intelligence that performs commercial matching and takes a coordination fee.

[Figure: A Cybernetic Rollup. First image from [Wikipedia](https://en.wikipedia.org/wiki/Cybernetics).]

This structure unlocks the Cybernetic Arbitrage.

Cybernetic Arbitrage

This strategy collapses the traditional distinction between Private Equity and Deep Tech––it is asset-agnostic.

There’s no meaningful distinction between buying a legacy business to “install sensors” (the PE approach: modernize the data stack) and building autonomous systems from scratch (the Deep Tech approach). The goal is the same: acquire the physical nodes where context is born. Build what doesn’t exist; buy what does. In both cases, you capture the arbitrage between the market’s valuation of the operations (low-margin labor or capital-intensive hardware) and the actual value of the context (commerce!).

The Cybernetic Rollup then prioritizes assets that, within a vertical,

  1. legibilize high-entropy or heterogeneous Dark Matter
  2. provide net-new access to actuators

reflecting its two sources of revenue: (1) risk margin and (2) operating margin. The rollup targets assets whose context or actuation access cannot easily be disintermediated or competed away. Unlike the outcome-pricing vendors trying to squeeze margin from a service they don’t control, the Cybernetic Rollup doesn’t need a new pricing model. It captures the spread between the market’s perceived cost of operations and the actual efficiency of a situationally aware, AI-governed asset.
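The spread is easy to put numbers on. A purely illustrative sketch, where every figure is an assumption rather than a claim about any real deal:

```python
# Illustrative only: buy a low-margin operator, let situationally aware
# AI lift its operating margin, and capture the spread at a constant multiple.
revenue          = 50_000_000  # annual revenue of the acquired operator
margin_as_priced = 0.06        # operating margin the market prices in at entry
margin_with_ai   = 0.15        # assumed margin under AI-governed operations
ebitda_multiple  = 5           # entry and exit multiple held constant

entry_value = revenue * margin_as_priced * ebitda_multiple
exit_value  = revenue * margin_with_ai * ebitda_multiple

print(f"Entry value:     ${entry_value:,.0f}")   # $15,000,000
print(f"Exit value:      ${exit_value:,.0f}")    # $37,500,000
print(f"Captured spread: ${exit_value - entry_value:,.0f}")
```

The pricing model is unchanged; the asset simply earns more than the market believed it could.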

Jeremy Giffon’s October tweet puts the conclusion well:

We’re past the infinite-gross-margin era, which means scale is now the only way to make real money. software used to be about selling strings; now it’s about renting compute. In ten years there will be far more multi-trillion-dollar companies earning 5% net margins across mind-boggling scale than there will be firms still riding the 75% gross margins of the saas decade.

The labs are racing to build a superintelligence that’s brilliant, centralized, yet trapped in the data center. Their customers’ shareholders want ‘intelligence too cheap to meter.’ The winners of this era will build railroads starting from the applications’ edge: situated, aware, capital-intensive networks that make the physical world programmable.

Distinct from Hobart, these railroads might not need maximal compute to do their jobs, but rather situationally aware compute––compute purpose-built for the context of the context.

The grand path isn’t to sell or resell intelligence; it’s to optimize the conversion of energy and capital to intelligence.

And to do that, you need to build or buy the assets that uniquely generate it.