Summary: Software modularized nouns; AI modularizes verbs, migrating profit from enabling middleware to asset owners bearing structurally non-deferrable liability. CapEx is the new CAC.

Last month Anthropic published a blog post highlighting their new Tool Search Tool.

The key insight is that registering all tools with Claude directly through MCP consumes too many tokens. A superior approach is to give Claude the capability to search a tool registry and pull in just the tools it might need for a given task. This pairs well with the multi-agent pattern Claude Code appears to use, where Claude can deploy a subagent to perform a well-formed task and return a compressed response to a lead agent.
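To make the pattern concrete, here is a minimal sketch in Python. Everything named here (ToolRegistry, ToolDef, the sample tools, the keyword scoring) is a hypothetical stand-in for illustration, not Anthropic's implementation; a production registry would score with embeddings or BM25 over real MCP tool definitions.

```python
# Sketch of the tool-search pattern: rather than loading every tool
# definition into the model's context up front, expose one search
# capability and fetch only the definitions a given task needs.
# All names here are hypothetical stand-ins, not Anthropic's API.

from dataclasses import dataclass


@dataclass
class ToolDef:
    name: str
    description: str
    input_schema: dict  # JSON Schema, as in MCP tool definitions


class ToolRegistry:
    def __init__(self, tools: list[ToolDef]):
        self._tools = tools

    def search(self, query: str, limit: int = 3) -> list[ToolDef]:
        """Naive keyword overlap; a real registry would use BM25 or embeddings."""
        stopwords = {"a", "an", "the", "to", "for"}
        terms = set(query.lower().split()) - stopwords

        def score(tool: ToolDef) -> int:
            words = set(
                (tool.name + " " + tool.description).lower().replace("_", " ").split()
            )
            return len(terms & words)

        ranked = sorted(self._tools, key=score, reverse=True)
        return [t for t in ranked[:limit] if score(t) > 0]


registry = ToolRegistry([
    ToolDef("stripe_charge", "Charge a customer card via the payments API", {"type": "object"}),
    ToolDef("calendar_book", "Book a meeting slot on a shared calendar", {"type": "object"}),
    ToolDef("shipment_track", "Track a parcel by its tracking number", {"type": "object"}),
])

# Only the matching definitions enter the model's context window; a
# subagent can then run the task and hand a compressed result back
# to the lead agent.
for tool in registry.search("charge the customer a deposit"):
    print(tool.name)  # -> stripe_charge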

Taking this to the limit, suppose there’s a registry of all available services with well-posed tool definitions. Slick auth, payments, authorizations, etc., have all been worked out.
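In that limit case, a registry entry would need to carry more than a schema: who may call the tool, how it authenticates, and how it gets paid. Below is a hypothetical sketch of such an entry; every field name is an assumption for illustration, not an existing standard.

```python
# Hypothetical registry entry for the limit case: auth, payment, and
# authorization metadata ride alongside the tool definition, so an agent
# can discover, pay for, and invoke a service in one pass.
# Every field name here is an assumption for illustration.
universal_registry_entry = {
    "name": "freight_quote",
    "description": "Quote and book LTL freight between two addresses",
    "input_schema": {"type": "object"},       # as in MCP tool definitions
    "auth": {"scheme": "oauth2", "scopes": ["freight:book"]},
    "payment": {"model": "per_call", "currency": "USD"},
    "authorization": {"max_spend_usd": 500},  # caps what the agent may commit
}
```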

What would stop Claude in a harness from doing … literally anything?

  • Unreliability?
  • Poor search performance?

Not to trivialize these problems, but they seem a bit like cope. For any task with important enough benefits, each seems to have an available workaround.

So return to the limit case: suppose Claude can use all tools. It can pull in information and modify state as enabled by these tools.

What then is a business? How does value accrue?

Or as Alex Danco put it in 2017:

What happens when friction goes away? Capitalism, at its core, is fairly straightforward: create shareholder value by providing customers with access to something scarce.

If any state can be observed, changed, transferred or authorized with a command to Claude, what exactly are we all getting paid to do?

What’s scarce here?

Anything that’s a prompt away isn’t scarce. The plays selling facilitation––simple matching that leaves trusted coordination to others––have all been done.

What’s scarce is the liability for the consequences of complex coordination.

The mechanics of Cybernetic Rollups

Last week I published Cybernetic Arbitrage. I studied how AI’s modularization of intelligence inverts the typical form of Christensen aggregators: they integrate trusted operations and distribution while modularizing process––action itself. This is distinct from past aggregators, which modularized nouns––stays, cars, content.

AI modularization: applying Christensen's Law of Conservation of Modularity to AI's modularity and integration dynamics from my last blog.

I realized this distinction is subtle, so I’ve written this section to make it clearer.

Software modularized nouns. AI modularizes verbs.

I argued this shift violates favored Silicon Valley strategy, which has found great returns intermediating coordination––usually by taking rents for facilitating matching:

  • Airbnb doesn’t own or operate the houses, it just connects you to them
  • Uber doesn’t drive the cars, it just matches you to them
  • Netflix didn’t make the content (it does now! 👀👀), it just distributed it to you

But the age of intelligence doesn’t intermediate coordination––AI is the coordinating layer itself!

Past cycles modularized nouns:

  • Airbnb: homes
  • Uber: cars
  • Netflix: content
  • DoorDash: food

while integrating matching and distribution.

AI modularizes execution––the verbs themselves:

  • Waymo: driving
  • Zipline: delivering
  • Base Power: powering

while integrating the trusted operation––liability!––and distribution.

The inversion of aggregation theory is this: formerly, modularized matching was a deferrable handoff. DoorDash doesn’t deliver; a driver does, partially absorbing real-time risk. Platforms resolved failures with refunds and insurance.

AI removes that buffer because it modularizes the execution itself; the human buffer is gone. When a Zipline drone executes “deliver,” liability is non-deferrable––it attaches the instant the verb is performed. For low-stakes errors a refund might suffice, but clearly Sunday Robotics’ Memo has to eventually work.

But when AI is

  • driving a car
  • taking on a legal case
  • putting away expensive dishes
  • or selling a high-value product

there is nowhere for that liability to escape to.

These are the springs of defensibility: liability that cannot be delegated is an economic moat.

You cannot modularize a verb without owning the noun, because the verb’s value is inseparable from real-time context, and operators hoard that earned context as proprietary alpha. This is why profits are migrating to asset owners with distribution and direct access to context and actuators.

The verbs that matter most––driving, delivering, powering––cannot be separated from the nouns that generate their context.

There’s no delegational space; you must assume the body to sell the action.

The vanishing space between verbs and context

This creates a problem for Silicon Valley’s would-be coordination-intermediators, namely the enabling AI application layer. Most VCs appear to believe that the application layer will integrate trust and intelligence and participate in that surplus.

As it stands, however, it structurally will not, for two reasons:

  1. No liability surface. B2B SaaS has no relationship with the end-user. When an end-user uses a product, the liability buffer is the brand’s guarantee, not any third-party SaaS provider’s. The buck stops with the owned-and-operated service. You cannot delegate blame to a vendor.

  2. Incentive misalignment. Guaranteeing outcomes requires “Dark Matter” context transparency, but customers won’t share proprietary alpha with a vendor who could resell it to competitors. The app layer is structurally leashed: it needs the very data that its venture incentives would force it to use to commoditize its customers.

This confirms my application of Christensen’s theory: when intelligence (the enabling layer) commoditizes, Trusted Distribution (the doing layer) integrates.

From every vantage point, the B2B AI app layer is structurally misaligned with the coming commodification of intelligence and the newly forming integration of trust and distribution.

The AI app layer is stuck “enabling” in a world that actually prizes “doing”.

You must just do things

AI creates leverage by turning context into efficient action, but the B2B AI app layer is blocked from capturing the surplus. It cannot bear liability for the verbs it executes nor capture high-value proprietary context. If the app layer does happen to capture context, there’s a reason: that context corresponds to a commodity activity––capping ACVs.

When intelligence transitions from enabling the work to doing––and taking responsibility for!––the work, the distinction between software and the physical act collapses. An AI agent cannot “coordinate” or “enable” a vehicle’s driving––it actually drives it! In the self-driving case, the result is not a refund but a crash. In the legal case, an AI lawyer’s mistake could cause irreparable harm.

You cannot bear operational responsibility for an asset of which you are not an owner-operator.

This is an uncomfortable conclusion for the software industry: The era of high-margin low-liability “software as a service” is ending.

If you want to capture value in AI, you cannot sell an enabling brain; you must sell the embodied labor. This goes beyond the Sarah Tavel-variety “sell work, not software”: vertical integration in every category

  • logistics
  • legal services
  • health care

drives higher returns, but with up-front capital requirements that VCs are going to have to get comfortable underwriting.

The distribution costs for software are about to explode. Building Something People Want doesn’t work in a world where non-technical people can just ask Claude to build it. Claude Code’s lead author Boris Cherny didn’t open an IDE once last month. Non-technical lawyers like Jamie Tso at Clifford Chance (top 15 by revenue) are using Gemini to infer product specs from the Harvey and Legora websites, then having Gemini build sufficient replicas.

Venture became addicted to the reliable monthly revenues of SaaS. But AI is screwing up this play in every regard:

  • increasing competition (even from SaaS’ own customers!)
  • reducing information access
  • reducing value capture capabilities

Rohit Krishnan wrote this morning,

There’s a highly underrated LLM labour theory of value, where you only find value from LLMs if you put in the labour, and the more you put in the more value you find.

Profitmaxxing in the AI era happens by contextmaxxing the assets you own and AI-maxxing their operation. When intelligence is abundant, scarcity migrates to trusted operations––the vertically integrated sensor rollup––the Cybernetic Rollup.

When Claude (or the latest cheapest open-source model) can do all the things, all that’s left is to originate and capture more context or actuators––and AI- and liability-maxx.

The age of enabling is over. It is time to do.

Thanks to Ivan Vendrov for inspiring this post.