Frameworks in motion — updated April 2026

Building

Four frameworks I keep coming back to when building AI-intensive 0→1 products. The first is the flagship: the six-pramana framework from the Vedantic AI paper. The second is emerging. The other two are supporting patterns that keep showing up in every 0→1 sprint I run.

Below each framework is how it's showing up in what I'm building right now — not as theory, but as the operating system underneath the work.

Flagship

The Six-Pramana Framework

How AI systems should know what they know

[Diagram] Six Pramanas, the Vedantic framework for means of valid knowledge: pratyakṣa (perception), anumāna (inference), upamāna (comparison), śabda (testimony), arthāpatti (postulation), anupalabdhi (non-apprehension) → valid knowledge.

Hallucination isn't a bug to patch. It's what happens when a system has no framework for validating its own outputs. Classical Indian epistemology (Vedanta) spent two thousand years solving exactly this question — by what means can we validly know anything? The answer was the six pramanas: perception, inference, comparison, testimony, postulation, and non-apprehension.

Each pramana maps cleanly onto an AI architecture concept:

  • Pratyakṣa (perception) — retrieval, direct observation, raw input.
  • Anumāna (inference) — reasoning chains, logical derivation.
  • Upamāna (comparison) — analogy, few-shot examples, similarity-based generalization.
  • Śabda (testimony) — source alignment, citations, trusted corpora.
  • Arthāpatti (postulation) — assumption-under-constraint, hypothesis formation when direct evidence is absent.
  • Anupalabdhi (non-apprehension) — the absence-check, the neti-neti, the knowledge that something is not there.

The practical rule: every high-stakes output should be tagged by which pramanas support it, and require at least two means of knowing. Single-pramana verification is Gettier-fragile: you can be right for the wrong reason. RAG without absence-checking is confidently wrong whenever the corpus has nothing to say. Reasoning without source alignment is plausible-sounding fiction. Source alignment without inference is parroting.

Anupalabdhi is the most-skipped pramana in AI today. It's the absence-check that catches hallucinations before they propagate. When a citation can't be located, when a fact has no grounding in the corpus, when a chain of reasoning depends on a step that isn't there — anupalabdhi is the structural primitive that flags it. Build it in first.
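The two-means rule and the absence-check can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Claim` shape, the pramana names, and the validation logic are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from enum import Enum

class Pramana(Enum):
    PRATYAKSHA = "perception"         # retrieval, direct observation
    ANUMANA = "inference"             # reasoning chains
    UPAMANA = "comparison"            # analogy, few-shot similarity
    SHABDA = "testimony"              # citations, trusted corpora
    ARTHAPATTI = "postulation"        # hypothesis under constraint
    ANUPALABDHI = "non-apprehension"  # the absence-check

@dataclass
class Claim:
    text: str
    support: set                      # which pramanas back this claim
    cited_sources: list = field(default_factory=list)

def absence_check(claim, corpus):
    """Anupalabdhi: return any cited sources that cannot be located."""
    return [s for s in claim.cited_sources if s not in corpus]

def validate(claim, corpus):
    """Run the absence-check first, then enforce the two-means rule."""
    if absence_check(claim, corpus):
        return False  # a citation isn't there: anupalabdhi fires
    return len(claim.support) >= 2
```

Note the ordering: the absence-check runs before the support count, so a claim with two pramanas behind it still fails if one of its citations has no grounding in the corpus.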

Together, the six pramanas turn hallucination from an engineering problem into an epistemological discipline. You're no longer asking "is this output wrong?" You're asking "by what means do I know this is right?" That second question is the one that compounds.

Why this is the product moat: the model is a commodity in 18 months. The epistemology is not. Whoever builds the production framework for multi-pramana verification — who tags every output by means of knowing, who treats anupalabdhi as a first-class primitive, who refuses to ship single-pramana confidence — owns the trust layer of every AI-intensive product downstream. That's where the real moat sits, and it has nothing to do with whose model you fine-tuned this quarter.

For technical founders building 0→1: don't compete on model quality. Compete on epistemological honesty. The first product in your category that can credibly say "here's how this output was validated, and here's what we don't know" — wins the trust battle. The pramanas are how you say it.

Emerging

KERN — Knowledge Engineering for Runtime Norms

An open schema for the operating rules AI agents read at runtime

Tacit knowledge has always been the bottleneck. Before AI agents, humans absorbed it slowly through co-working — a senior engineer's PR comment becoming the next engineer's muscle memory. Agents can't absorb. They read what's written; they miss what isn't. That gap — between what the senior knows and what the agent sees — is the encoding crisis.

KERN is the encoding layer for that gap. A machine-readable schema for the rules a senior engineer would teach a junior in code review — expressed in YAML so an agent can read, evaluate, and enforce them at runtime, and an auditor can read the trace after the fact.
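To make the shape concrete, here is a hypothetical sketch of a single KERN rule. Every field name below is an assumption made for illustration, not the published v0.1 spec:

```yaml
# Illustrative only: field names are assumptions, not the v0.1 spec.
rule:
  id: deploy-window-001
  intent: "No production deploys during month-end close"
  scope:
    environment: production
  condition: "calendar.is_month_end_close == true"
  action: deny
  severity: block
  provenance:
    author: senior-platform-engineer
    source: "code-review norm, written down 2025-11"
  audit: log_decision_trace
```

The point of the shape: an agent evaluates `condition` and enforces `action` at runtime, while `provenance` and `audit` are what make the decision traceable after the fact.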

Spec v0.1 is published under Apache 2.0. The reference implementation, KernPath, is in build. Every regulated organization running AI agents will eventually need an explicit operating-knowledge schema. KERN is the canonical format.

Supporting pattern

Thin-slicing for 0→1

Every founder asks the same question in different words: how do I validate the full vision without building the full thing? Thin-slicing means cutting end-to-end, not feature-by-feature. One input, one output, one stakeholder — everything in between has to work, even if only at depth-one. I've applied it across consumer, health-tech, fintech, and cross-border payments. The aviation case is where the stakes sharpened the method fastest — one expensive problem (hard landing prediction) carried the full platform vision and unlocked nine-figure funding.

Case study — Aviation Data Platform — Thin-Slicing for 0→1 ↗

Supporting pattern

Agentic architecture for personal sites

A site should speak two languages: one for humans, one for agents. narenkatakam.com does both — static pages for people, MCP server with RAG endpoint for machines. The pattern generalizes: any knowledge-heavy surface benefits from an agent interface. Cost is small (one Cloudflare Worker, one D1 database, one Vectorize index). The composability gain compounds — agents can query your knowledge as easily as humans can read your prose.

Case study — This site's MCP server ↗

In motion

These frameworks aren't theoretical — they're the operating system underneath what I'm building right now.

Work

Product Principal at ThoughtWorks

Currently delivering for a healthcare client. Building intelligent products at the intersection of data, AI, and human-centered design. Based in Amsterdam.

Building

Digital presence & open source

Shipping narenkatakam.com — an agentic personal site with live RAG, an MCP server, and an edge-native eval pipeline. Running a small legal-AI citation-validation experiment, launching soon. Maintaining the UX Audit Skill (design eye for AI coding assistants) and FX Export (WebGL visual tool).

Thinking

AI product patterns

Developing frameworks for building intelligent products. Documenting the honest journey from ideas to shipped artifacts. Writing a periodic build log on Thoughts.

Learning

Code as a creative medium

Building coding skills through Claude Code. Bridging product vision to working software. Strong design and systems instincts, growing engineering muscle.

Principles

Systems over features — architect feedback loops, not screens.

First principles over patterns — question before you build.

Pramana over assertion — know by what means you know.