Every generational platform is judged, in the end, by a single question: what can a person now do that they could not do before? The printing press gave ordinary people access to other minds. The PC gave them a workshop. The internet gave them a distribution network and a library. Each shift was measured not by the capability of the underlying machine but by what the human at the other end could suddenly reach.
The current shift is the same question with new parts. Frontier models are the engine. But the engine is not the product, and it has never been the product. The product is the harness — the runtime that decides what context the model sees, what tools it can call, what it remembers across sessions, and how its outputs become actions in the world.
That distinction is where the interesting companies of this cycle are being built. It is also, we think, where the moat is.
Commodities and moats
It is easy to confuse a capability with a product. OpenAI, Anthropic, Google, and half a dozen open-weight labs are in a race whose winner, at the model layer, is increasingly hard to name. The benchmarks converge; the prices fall; the APIs look more alike each quarter. Intelligence, in raw form, is becoming a commodity. That is a remarkable sentence to write in 2026, and it is also — from where we sit — obviously true.
Commoditization is not failure. It is the precondition for the next layer of value. When compute commoditized, the money moved to the cloud control plane. When storage commoditized, it moved to databases and then to analytics. The pattern is boring and reliable: as the layer below you gets cheap and interchangeable, margin accumulates above it, in the layer that uses it well.
For AI agents, the layer above is the runtime. Specifically: context, memory, tools, evaluation, and the orchestration glue between them. None of these are new ideas. All of them are early.
What a harness does
A harness answers four questions the model cannot answer on its own:
1 · What should I know?
A capable model with no context is a consultant on their first day. The harness is the onboarding packet. It chooses which documents, past conversations, and live signals to surface — and, just as importantly, what to leave out. Retrieval is the visible part; the invisible part is curation, chunking, and the decision of when a new piece of context displaces an old one.
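The curation-and-displacement logic described above can be sketched in a few lines. This is a minimal illustration, not anyone's production retriever: `ContextItem`, its `relevance` score, and the greedy token-budget policy are all hypothetical names chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    text: str
    relevance: float  # score from some retriever; higher is better
    tokens: int       # cost of including this item in the prompt

def assemble_context(items: list[ContextItem], budget: int) -> list[ContextItem]:
    """Greedy curation: keep the most relevant items until the token
    budget is spent. Displacement falls out for free: a new, stronger
    item pushes an older, weaker one past the budget line."""
    chosen, spent = [], 0
    for item in sorted(items, key=lambda i: i.relevance, reverse=True):
        if spent + item.tokens <= budget:
            chosen.append(item)
            spent += item.tokens
    return chosen
```

The interesting product work hides inside `relevance` and `budget`; the loop itself is the easy part.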
2 · What can I do?
Tool use is the edge where a language model becomes a system. A harness defines the permission boundary: which APIs can be called, with which arguments, with what cost, under whose authority. Get this wrong and you either have a very expensive toy or a very fast way to produce a liability.
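A permission boundary of the kind described above can be made concrete with a small registry that checks authority and cost before dispatch. This is a sketch under assumed names (`Tool`, `Harness`, `max_cost`, `allowed_roles` are invented for illustration), not a real framework's API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    fn: Callable[..., str]
    max_cost: float          # per-call spend ceiling
    allowed_roles: set[str]  # who may invoke this tool

@dataclass
class Harness:
    tools: dict[str, Tool] = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def call(self, name: str, role: str, est_cost: float, **args) -> str:
        # Every call crosses the boundary: unknown tools, unauthorized
        # roles, and over-budget calls are refused before anything runs.
        tool = self.tools.get(name)
        if tool is None:
            raise KeyError(f"unknown tool: {name}")
        if role not in tool.allowed_roles:
            raise PermissionError(f"{role} may not call {name}")
        if est_cost > tool.max_cost:
            raise ValueError(f"{name} exceeds its cost ceiling")
        return tool.fn(**args)
```

The point of the sketch is where the checks live: in the harness, before dispatch, under the operator's authority rather than the model's.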
3 · What do I remember?
Memory is how trust is earned. A human colleague who remembers your preferences, your projects, and your standards is meaningfully more useful than one who doesn't. The same is true for agents, and the engineering problem — long-horizon state that is cheap to write, fast to read, and selective enough not to poison future sessions — is genuinely hard.
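The three properties named above — cheap to write, fast to read, selective on recall — can be shown in miniature. This toy store uses word overlap as its relevance signal purely for illustration; a real system would use embeddings and an index, and every name here (`MemoryStore`, `recall`) is hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    created: float

class MemoryStore:
    def __init__(self) -> None:
        self._items: list[Memory] = []

    def write(self, text: str) -> None:
        # Writes are cheap: append now, rank later at read time.
        self._items.append(Memory(text, time.time()))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Selective read: rank by word overlap with the query, break
        # ties by recency, and drop zero-overlap entries so irrelevant
        # state never poisons the next session.
        words = set(query.lower().split())
        scored = [
            (len(words & set(m.text.lower().split())), m.created, m.text)
            for m in self._items
        ]
        scored.sort(reverse=True)
        return [text for overlap, _, text in scored[:k] if overlap > 0]
```

The hard part the sketch elides is exactly the one the paragraph names: deciding what *not* to return, session after session, as the store grows.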
4 · How do I know it worked?
Evaluation is the least glamorous layer and the most important. Without evals, shipping is superstition. Runtime telemetry, human-in-the-loop scoring, and synthetic adversarial suites are where a real product separates from a demo. This is also where the best teams are spending their time.
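The shape of an eval suite is simple even though the graders are not. A minimal sketch, with invented names (`EvalCase`, `run_evals`) and a pass-rate gate standing in for the telemetry, human scoring, and adversarial cases a real suite would add:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # grader: did this output pass?

def run_evals(agent: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Run every case through the agent and return the pass rate."""
    passed = sum(1 for c in cases if c.check(agent(c.prompt)))
    return passed / len(cases)
```

In use, the number gates the release: `if run_evals(agent, suite) < 0.95: block_ship()`. Without that gate, shipping is superstition, which is the paragraph's point.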
Why now
Three conditions have arrived at the same time. Models are cheap enough that a product can afford to call them often. They are good enough that the bottleneck is no longer the model but everything around it. And they are similar enough that a product built on one can, with effort, be ported to another. The last point is the one founders underweight: model portability, once taken as evidence that a product was a thin wrapper with no moat, is now a feature. It means your leverage lives in your harness, not in any particular vendor's roadmap.
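In code, portability is just an interface boundary. A sketch of the idea, assuming nothing about any real vendor's SDK (`Model`, `StubModel`, and `run_step` are all hypothetical):

```python
from typing import Protocol

class Model(Protocol):
    """The only thing the harness is allowed to know about a model."""
    def complete(self, prompt: str) -> str: ...

class StubModel:
    # Stand-in for a vendor adapter; a real one wraps an API client.
    def complete(self, prompt: str) -> str:
        return f"stub: {prompt}"

def run_step(model: Model, prompt: str) -> str:
    # The harness codes against the interface, so swapping vendors
    # means writing one adapter, not rewriting the runtime.
    return model.complete(prompt)
```

Everything above the `Model` line is the harness; everything below it is replaceable.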
The analog we keep returning to is the database. SQL the language commoditized decades ago. The interesting companies in data are not the ones that invented new query languages; they are the ones that built the runtime around SQL well — Snowflake, Databricks, Postgres-the-company, Supabase. The query is the commodity. The engine is the product.
We think the next decade of AI-native software will rhyme with that history. The model is the query. The harness is the engine.
What we're looking for
At Priors, we fund three flavors of this bet. Teams building context infrastructure: retrieval that actually works at the scale of a real user's life. Teams building memory: long-horizon state that earns trust through use. And teams building runtimes for specific verticals — where the harness is shaped by the domain (law, medicine, engineering) and the moat is the accumulated taste of a team that has watched a million runs.
We are less interested in wrappers and more interested in engines. We are less interested in new models and more interested in the scaffolding that makes any model useful. We are, to put it plainly, looking for founders who have understood that the question is not how smart is your model but what can a person now do.
That question is the only one that has ever mattered. The runtime is how it gets answered.