May 7, 2026

The Encoding Crisis

We can know more than we can tell. — Michael Polanyi, The Tacit Dimension, 1966

I. The senior engineer leaves a comment

A senior engineer at a bank reads a junior’s pull request. She sees a database call inside the service layer — the kind of thing that breaks the audit trail, lands you in front of a regulator, and costs the bank a week of remediation. She types one comment: “we don’t write SQL in services here. Repository pattern, parameterized at one boundary.”

The junior fixes it. Six months later, the junior catches the same mistake on someone else’s PR. Two years later, the junior is the senior, and the comment has become muscle memory.

That comment moved through hands. It traveled in moments. It was absorbed.

Now it’s 2026, and the work is done by an agent.

The agent reads Confluence — out of date by eighteen months. The agent does not read the senior’s mind. The agent writes SQL in the service layer. The senior corrects it. The next agent on the next PR makes the same mistake. The agent never absorbs anything.

It can’t.

II. Why this hurts now

Tacit knowledge has always been the bottleneck. Before agents, we had a way around it: humans absorbed it, slowly, through co-working. Confluence was a graveyard, and we knew it was a graveyard, and the system worked anyway because the senior engineer was in the room.

Agents removed the senior engineer from the room.

To me, this is the quiet shift no one has named. Everyone is talking about model capability. Everyone is benchmarking reasoning. The actual gap — the thing that makes a $300K-per-head engineer spend ten hours a week reviewing the same mistake — isn’t reasoning. It’s encoding.

The senior knows things that aren’t written down. The agent only knows things that are written down. Between those two facts is where every AI productivity gain gets eaten by a quality regression.

The CTO opens the board deck: “We spent five million on Cursor and Devin. Where’s the productivity?” The honest answer is — the productivity is there, but the agents don’t know our org, so the seniors are babysitting them, so the net is negative, so the math doesn’t work yet.

That math isn’t going to fix itself with bigger models. Bigger models give you better guesses. They don’t give you your bank’s audit pattern. They don’t give you the rule the senior has corrected three hundred times in five years.

This is the encoding crisis. Tacit knowledge worked when humans did the work. Agents need it explicit, machine-readable, present at runtime.

III. The cookbook problem, revisited

Cooking was once tacit. A grandmother knew when the dough was right because of how it felt. “Knead until smooth” meant something to her hands.

The cookbook arrived, and cooking became transferable. “Two cups flour, one teaspoon salt, bake at 350 for thirty minutes.” You could hand the cookbook to anyone and produce something edible. The recipe was the encoding.

But the cookbook still trusted the human. “Season to taste.” “Until golden brown.” These instructions assume a reader who can interpret. A reader with a grandmother somewhere upstream.

Now imagine a cooking robot.

The robot can’t taste. The robot can’t see “golden brown.” The cookbook fails. The robot needs: four grams salt, two grams pepper, mix at sixty rpm for ninety seconds, surface temperature one hundred sixty-five Celsius.

Same knowledge. Different encoding.

This is the move. The senior engineer is the grandmother. Confluence is the cookbook. The agent is the cooking robot. Confluence trusts the reader to interpret. The agent cannot interpret — it can only execute against rules it can read.

The thing your agents need is the cookbook the robot can follow.

IV. The schema as executable knowledge

Here is what one rule looks like — encoded:

- id: sk-001
  name: No raw SQL in service layer
  trigger: agent generates SQL outside /repository/ directory
  enforcement: reject + suggest repository pattern
  rationale: audit trail; parameterized query at single boundary
  bad:  "service/payment.go: db.Exec('INSERT INTO transactions...')"
  good: "service/payment.go: repository.Transactions.Insert(...)"

Seven fields. One rule. Machine-readable. The agent reads this before it writes the SQL, sees the trigger pattern, refuses, and suggests the repository call. The senior’s comment, encoded once, applied forever.
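To make “evaluate and enforce” concrete, here is a minimal sketch of a runtime check, in Python. The seven fields are the ones above; everything else, the check function, the crude SQL regex, the path test, is an illustrative assumption of mine, not the KERN spec.

    import re
    import yaml  # PyYAML

    # Illustrative only: the trigger in the schema is prose, so this
    # sketch hard-codes one matching strategy (path test + SQL regex).
    RULE = yaml.safe_load("""
    - id: sk-001
      name: No raw SQL in service layer
      enforcement: reject + suggest repository pattern
      rationale: audit trail; parameterized query at single boundary
    """)[0]

    SQL = re.compile(r"\b(db\.Exec|SELECT|INSERT|UPDATE|DELETE)\b")

    def check(path: str, code: str) -> tuple[bool, str]:
        """Evaluate a proposed file write before it lands."""
        if "/repository/" not in path and SQL.search(code):
            return False, f"{RULE['id']} ({RULE['name']}): {RULE['enforcement']}"
        return True, "ok"

    ok, msg = check("service/payment.go", "db.Exec('INSERT INTO transactions ...')")
    # ok is False: the agent rejects its own draft and proposes
    # repository.Transactions.Insert(...) instead.

A real runtime would compile the check from the rule itself rather than hard-coding it. The point is only that the rule is data the machine consults before acting, not prose a human has to remember.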

This is what I mean by executable knowledge. Not prose. Not policy decks. Not Confluence. Rules a machine can read, evaluate, enforce — at runtime, on every action, without anyone needing to be in the room.

The schema is the encoding. The schema is what the cookbook for cooking robots looks like.

V. The compound

Here’s the part that’s easy to miss.

Cookbooks compound. McDonald’s didn’t scale because the burger was good — the burger is fine. McDonald’s scaled because the operations manual scaled. Six hundred pages: patty thickness, grill temperature, salt grams, lettuce shred timing, sauce dispenser pressure. Improve the manual once and forty thousand restaurants get better at the same time.

Without the manual, McDonald’s is forty thousand separate restaurants with forty thousand separate chefs.

The schema is the manual.

When the first bank installs it, the schema starts at fifteen entries. When the third bank installs it, the schema is at sixty entries — and the third bank inherits the work of the first two. Their senior engineers didn’t have to encode the SQL-in-services rule. It was already there.

The thing about a schema is that the more orgs adopt it, the more the schema knows. It compounds across boundaries that data cannot cross. The bank’s source code stays in the bank. The patterns — anonymized, abstracted, encoded as rules — travel.
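Mechanically, the compounding is small. A sketch, assuming rule ids are globally unique and each org publishes only its anonymized rule file (the file names and merge policy here are hypothetical, not part of the spec):

    import yaml  # PyYAML

    def merge_rulesets(*paths: str) -> list[dict]:
        """Union rule files by id; earlier contributions win on collision."""
        merged: dict[str, dict] = {}
        for path in paths:
            with open(path) as f:
                for rule in yaml.safe_load(f):
                    merged.setdefault(rule["id"], rule)
        return list(merged.values())

    # The third bank starts from the union of everyone before it:
    baseline = merge_rulesets("bank-a.yaml", "bank-b.yaml", "bank-c.yaml")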

To me, this is the actual category. Not “AI governance.” Not “prompt engineering.” The compounding schema. The cookbook for cooking robots that gets smarter every time another kitchen adopts it.

VI. KERN (Knowledge Engineering for Runtime Norms)

Today I am publishing the first version of the schema spec on GitHub, under Apache 2.0.

It is called KERN (Knowledge Engineering for Runtime Norms). Three artifacts ship together — the format definition, a reference example drawn from my own work over the last eighteen months, and a contribution path for anyone who wants to add their organization’s rules to the public baseline.

The seed is kern-skills-v0.yaml. Fifteen entries, encoded from corrections I have issued to my own AI agents over a year and a half. Things like “file references must use resolvable paths,” “declare commit scope before any push,” and “never push to git without explicit instruction.” Boring rules. Specific rules. The kind a senior engineer would type into a comment and then type again three months later.

I have been running this seed on my own work for the last few days. It works. The corrections I used to issue ten times a day, I now issue twice. That is not a benchmark. That is lived behavior.

KERN is the schema. KernPath is the reference implementation we are building. The spec is open, the implementation will be open, and anyone who builds against the schema can build their own runtime, their own validators, their own dashboards. The schema, not any single implementation, is what we want to win the next ten years.

VII. What I am asking for

Read the spec. Tell me what is missing. Send a pull request with your industry’s rules.

Implement the schema. The format is open; the lift is small. Every agent invocation that conforms produces a record an auditor can read.
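What that record contains is for the spec to pin down; the shape below is my assumption, one JSON line per checked agent action, append-only:

    import json
    from datetime import datetime, timezone

    def record(rule_id: str, action: str, verdict: str,
               path: str = "kern-audit.jsonl") -> None:
        """Append one auditable line per checked agent action."""
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "rule": rule_id,
            "action": action,
            "verdict": verdict,  # "pass" or "reject"
        }
        with open(path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    record("sk-001", "write service/payment.go", "reject")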

The spec is at github.com/narenkatakam/kern-spec. The reference example is the first thing you will read. There is a contributing file. There is a changelog. There will be a v0.2 in two weeks.

This is not a product launch. This is a category opening.

VIII. A closing note

We can know more than we can tell. Polanyi was right in 1966, and he is still right today. But the gap between knowing and telling — the gap that used to be absorbed silently by humans co-working — is the gap that AI agents now have to cross explicitly, in writing, in machine-readable form.

The senior engineer’s comment was once a moment.

We are turning it into a permanent record.