I've been sitting with a feeling that's hard to shake:
We're in a business model interregnum, a kind of in-between era where the old ways of making money are decaying, and the new ways haven't fully crystallized yet.
AI is the main driver here. Not because it "automates jobs" in some cartoonish sense, but because it destroys the scarcity of process knowledge.
For the last couple decades, a huge amount of enterprise value has been built on things like:
We know this workflow and you don't.
We've codified this process into software and you haven't (and we've locked you into our system-of-record).
We remember how this is done and you can't.
SaaS, consulting, mid-market PE rollups - a lot of that game has been:
capture tacit process knowledge → freeze it into systems → charge rent on it.
(To say nothing of financial engineering.)
But, LLMs (and derivative tooling) make it possible, and even necessary, to do something different:
These "common knowledge pools" make process knowledge cheap and on-demand.
If a workflow has ever been documented, explained, or even semi-understood somewhere on the internet or in your internal systems, AI can increasingly:
reconstruct it
adapt it
integrate it into your stack
and make it accessible to non-experts
That doesn't immediately blow up every business, but it starts to compress margins in all the places where "we know how this is done" was the moat.
So yes, there's still plenty of money to be made retrofitting AI into existing companies and workflows. Short- to medium-term, "AI-ifying" the mid-market is a real opportunity.
But, if you extend the timeline out 20 years, I don't think "AI-powered SaaS" is the main story.
I think those will look like edge cases that happened to survive, not the center of gravity.
What Comes After Process Knowledge Is Commoditized?
If AI eats process knowledge, what's left as the primary source of value?
I think the answer is: net-new knowledge, generated via experiments in the real world.
In other words: the unit of production shifts from "the software product" to the experiment.
Not just academic experiments, but tightly coupled, economically motivated, AI-accelerated experiments:
in biology
in materials
in chemical engineering
in energy systems
in any domain where you can close the loop between bits and atoms
Imagine a research micro-factory as the new economic primitive: a small, vertically integrated setup that can:
generate hypotheses (with AI help)
run cheap, fast experiments (in a lab, a fab, a rig, a simulator)
collect high-quality data
update models and strategy; and either:
commercialize the results directly, or
sell the data / IP / capabilities onward
The core loop is:
capital → experiments → new data → new knowledge → new products and/or more powerful models → back to capital
AI (and any scaffolding around the weights) isn't a product per se; it's a core component powering the cognitive engine inside the factory.
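The core loop can be sketched as a toy simulation. Every number here (payoff multiples, hit rates, function names) is an illustrative assumption, not a real model; the structural point is that accumulated data raises the hit rate, so the loop compounds:

```python
import random

def run_experiment(cost, hit_rate):
    """Spend capital on one experiment; return the value of the data produced.

    A 'hit' yields data worth a multiple of its cost; a miss yields a small
    residual (negative results still carry some information). Both numbers
    are illustrative assumptions.
    """
    if random.random() < hit_rate:
        return cost * 10.0   # assumed payoff multiple for a hit
    return cost * 0.2        # assumed residual value of a miss

def micro_factory_loop(capital, rounds, cost=1.0, base_hit_rate=0.05):
    """Toy model of: capital -> experiments -> data -> knowledge -> capital.

    Accumulated data improves the models, which improves the hit rate,
    which improves returns -- the compounding dynamic described above.
    """
    knowledge = 0.0
    for _ in range(rounds):
        n_experiments = int(capital // cost)
        hit_rate = min(0.5, base_hit_rate + 0.01 * knowledge)
        data_value = sum(run_experiment(cost, hit_rate) for _ in range(n_experiments))
        knowledge += data_value / 10.0   # data compressed into better models
        capital = data_value             # proceeds recycled into the next round
    return capital, knowledge
```

The design choice that matters is that `hit_rate` is a function of accumulated `knowledge`: the factory's returns come from the feedback between data and models, not from any single experiment.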
A Concrete (Extreme) Example: The Bio Research Factory
To make this less abstract, imagine an extreme version in biology.
You have:
A "frontier" bio expert with deep domain taste
A micro wet lab
Automated small-batch peptide or compound manufacturing
Simulation pipelines and digital "clients" (virtual trials, in silico experiments)
AI systems helping design experiments, interpret results, and propose new directions
Possibly even a dedicated, cheap energy source (e.g., a small modular reactor down the line)
This person (or small team) can:
Rapidly generate and refine hypotheses
Run many small, focused experiments
Continuously compress the resulting data into better models
Escalate promising results into more formal trials or partnerships; and either:
spin out products (new peptides, therapies, materials), or
broker the resulting data and insight to larger players
Crucially, the scarce asset here is not the formal workflow.
It's the frontier data you generate (including dynamic tacit knowledge) that didn't previously exist.
That data improves your models (especially if/when a breakthrough makes online learning via "local" weight updates valuable, feasible, and viable), which improves your hypothesis quality, which improves your hit rate on valuable discoveries.
This dynamic is wildly different from traditional SaaS or consulting economics.
Why I Think Crypto / Tokenization Eventually Matters
If you believe in a world of thousands of these research micro-factories, a few things break in our current financial + IP tooling:
Data becomes an asset that needs clearer, composable ownership.
Risk capital needs to flow into tiny, high-risk, high-upside experiments.
Collaborators need to share in upside without being bolted into a single entity.
Provenance and permissioning of data + models starts to matter a lot.
This is where tokenization feels less like a buzzword and more like the implementation of necessary infrastructure.
A few possibilities:
Tokens that represent claims on specific data streams, research programs, and IP portfolios
Programmable instruments for "inverted insurance": instead of pooling capital to avoid downside, you pool capital to create upside by funding frontier experiments
Decentralized registries for data provenance, experimental results, and weight updates
I don't have a fully worked-out design here. But the shape of the need feels clear:
We'll need new financial primitives to fund, own, and trade the outputs of thousands of autonomous research micro-factories.
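As one illustrative primitive (a hypothetical sketch, not any real token standard), a pro-rata claim on a specific data stream's revenue might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataStreamClaim:
    """A hypothetical token: a fractional claim on one research asset."""
    holder: str
    asset_id: str      # e.g. a specific data stream or IP portfolio
    fraction: float    # share of that asset's revenue

def distribute_revenue(claims, asset_id, revenue):
    """Split revenue from one asset pro-rata across its claim holders."""
    relevant = [c for c in claims if c.asset_id == asset_id]
    total = sum(c.fraction for c in relevant)
    if total == 0:
        return {}
    return {c.holder: revenue * c.fraction / total for c in relevant}
```

The point of making claims composable like this is that collaborators can hold upside in a single data stream or program without being bolted into one entity, which is exactly the gap in current equity tooling.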
That's where AI, crypto, and (eventually) new hardware/energy systems intersect.
Bottlenecks in This New Regime
If this is even roughly right, then the real bottlenecks shift.
Instead of:
distribution
sales efficiency
CAC:LTV
How fast can we hire SDRs?
We get bottlenecks like:
1. Taste / Judgment / Question Selection
AI explodes the space of possible things to try.
Humans with good "research taste" decide what's actually worth trying.
That taste becomes a major source of edge.
2. Experimental Throughput
How fast and cheaply can you run safe, informative experiments?
microfluidics
small-batch manufacturing
robotics
simulation pipelines
This becomes a kind of new "compute" - but for the physical world.
3. Capital Allocation Mechanisms
We'll need:
new ways of funding many small, high-risk research entities
new ways of tracking performance and reputation
new ways of owning and trading slices of IP and data
This is where token-like instruments feel inevitable.
4. Data Infrastructure
You need:
clean pipelines from experiments → structured data → models
strong provenance and auditability
the ability to selectively share / sell / train on that data
Again: the value is in the new data, not the workflow snapshot. And, importantly, as the cost of software implementation (code generation, testing, and so on) continues to decline, it becomes more rational for economic agents to bring ~all software in-house, where they can control last-mile design decisions and rapidly iterate in response to new data; that responsiveness will always be relatively expensive for centralized SaaS providers.
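A minimal shape for one record in the experiments → structured data → models pipeline, with hypothetical field names, might be:

```python
from dataclasses import dataclass, field
import hashlib
import json
import time

@dataclass
class ExperimentRecord:
    """Structured result of one experiment, with provenance baked in."""
    experiment_id: str
    protocol: str      # what was run
    inputs: dict       # reagents / parameters / model version
    outputs: dict      # measured results
    timestamp: float = field(default_factory=time.time)

    def content_hash(self):
        """Auditability: a stable hash of the record's scientific content.

        The timestamp is deliberately excluded, so two records with the
        same protocol, inputs, and outputs hash identically.
        """
        payload = json.dumps(
            {
                "experiment_id": self.experiment_id,
                "protocol": self.protocol,
                "inputs": self.inputs,
                "outputs": self.outputs,
            },
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()
```

A content hash like this is the smallest building block of provenance: anyone you sell or share the data with can verify a record hasn't been altered, without you revealing anything beyond the record itself.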
So What About All the "AI for X" SaaS?
Short term: there's money there.
Medium term: some of those companies will be meaningful.
But long term, I don't think that's the main story.
Retrofitting AI into existing workflows feels to me like:
electrifying a factory but keeping the old layout
putting a website on top of a newspaper business
slapping ".com" on a catalog retailer in 1997
It's not useless. It's just not where the deepest compounding value is.
The deeper questions are:
What is the AI-native unit of production?
And what is the capital structure that best supports it?
My working answer:
The AI-native unit of production is the experiment, embedded in a research micro-factory that tightly loops bits and atoms.
The AI-native capital structure probably looks more like tokenized, composable claims on data, IP, and experimental programs than like traditional equity.
We're not there yet.
We're in the messy in-between.
But, if we zoom out, I think this is roughly the direction we're headed: towards new frontiers as global capital markets and the common knowledge pool(s) become more liquid, driving a massive compression of progress timelines.
If you're thinking about or working on any of this - research factories, new capital formation for experiments, tokenized data/IP, or AI systems that sit inside these loops - I'd love to compare notes.

