Preprint · SSRN · PhilPapers · Zenodo DOI 10.5281/zenodo.19410556

Published 2026-04-11

Vedantic AI: a six-pramana framework for hallucination

Abstract

Large language models hallucinate because they have no framework for distinguishing valid knowledge from plausible-sounding output. Every fix so far — RAG, reflection, chain-of-thought, fine-tuning — treats hallucination as a bug to patch. This paper argues that framing is a category error. Hallucination is not an engineering failure; it's an epistemological vacuum that engineering keeps trying to paper over.

Classical Indian epistemology (Vedanta) spent two thousand years asking the same question AI is now asking: by what means can we validly know anything? The answer was a six-fold framework — the pramanas — covering perception, inference, comparison, testimony, postulation, and non-apprehension. Each pramana maps cleanly onto AI architecture: retrieval is pratyakṣa (direct observation), reasoning chains are anumāna (inference), few-shot generalization is upamāna (comparison), source alignment is śabda (testimony), constrained hypothesis generation is arthāpatti (postulation), and — most under-used — anupalabdhi (non-apprehension) is the check that notices when a claim has no grounding.
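To make the mapping concrete, here is a minimal sketch in Python. The enum and its comments are illustrative, not taken from any implementation described in the paper:

from enum import Enum

class Pramana(Enum):
    PRATYAKSHA = "perception"          # retrieval: direct observation of sources
    ANUMANA = "inference"              # reasoning chains
    UPAMANA = "comparison"             # few-shot generalization
    SHABDA = "testimony"               # alignment with trusted sources
    ARTHAPATTI = "postulation"         # constrained hypothesis generation
    ANUPALABDHI = "non_apprehension"   # absence check: no grounding found

A claim tagged with members of such an enum carries its own provenance, which is what the rest of the framework builds on.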

Single-pramana verification is Gettier-fragile: a system can be right for the wrong reason. The paper proposes that AI outputs should be tagged by which pramanas support them, and that multi-pramana verification — at least two independent means of knowing — is what turns confident hallucination into honest uncertainty. Anupalabdhi is the missing primitive in production AI today; it's the absence-check that catches hallucinations before they propagate.
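As a sketch of the absence check, assuming a hypothetical evidence index that maps each claim to the set of pramanas supporting it (both names are assumptions for illustration):

def anupalabdhi_check(claim: str, evidence_index: dict) -> str:
    """Non-apprehension: notice when a claim has no grounding at all,
    and downgrade it before it propagates."""
    supporting = evidence_index.get(claim, set())
    if not supporting:
        return "ungrounded: withhold, or emit with an explicit uncertainty marker"
    return f"grounded by: {', '.join(sorted(supporting))}"

# One claim has a retrieval hit plus a reasoning trace behind it; the other
# has nothing, and the check catches it before emission.
index = {"The statute was amended in 2019": {"pratyaksha", "anumana"}}
print(anupalabdhi_check("The statute was amended in 2019", index))
print(anupalabdhi_check("The amendment was overturned in 2021", index))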

The implications extend beyond LLMs: to citation systems where each cite must hold under multiple pramanas, to agent memory where recall must be tagged by how the agent knows, and to any system that wants to know how it knows.

Written for technical founders building AI-intensive products 0→1 — because the product moat lives in the epistemology, not the model.

The argument

  1. Hallucination isn't a bug — it's what happens when a system has no framework for validating its own outputs.
  2. Vedantic epistemology spent 2,000 years solving this problem; the six pramanas map cleanly onto LLM architecture.
  3. Production AI should tag claims by pramana and require ≥ 2 means of knowing per high-stakes output (a minimal sketch follows this list).
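A hedged sketch of that gate, assuming claims arrive already tagged with their supporting pramanas (the tagging pipeline itself is the harder engineering problem and is not shown):

def passes_gate(pramanas: set, high_stakes: bool = True) -> bool:
    """Require >= 2 independent means of knowing for high-stakes output.
    Independence matters: two tags derived from the same retrieval hit
    should count once, or the gate is Gettier-fragile again."""
    required = 2 if high_stakes else 1
    return len(pramanas) >= required

claim = {"text": "The statute was amended in 2019",
         "pramanas": {"pratyaksha", "shabda"}}  # retrieval hit + trusted source
verdict = "emit as fact" if passes_gate(claim["pramanas"]) else "emit as uncertain"
print(verdict)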

Framework

Six Pramanas — the Vedantic framework for means of valid knowledge:

  1. pratyakṣa (Perception)
  2. anumāna (Inference)
  3. upamāna (Comparison)
  4. śabda (Testimony)
  5. arthāpatti (Postulation)
  6. anupalabdhi (Non-apprehension)

[Figure: the six pramanas converging on valid knowledge]

Applied in

TrustCitation

Production citation system using pramana-based verification for legal briefs.

This site's MCP server

POST /api/ask — six-pramana tagging on every answer; queryable by any agent.
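The endpoint's schema isn't documented on this page; purely as an illustration, a call might look like the following, where the host and field names are assumptions:

import requests

# Hypothetical request/response shape; the real /api/ask contract may differ.
resp = requests.post(
    "https://example.com/api/ask",  # substitute the site's actual host
    json={"question": "Which pramanas support this answer?"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # assumed shape: {"answer": "...", "pramanas": ["pratyaksha", ...]}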

Memoria (local-only) — private infra

Agent memory graph with pramana-tagged recall. Not public infrastructure.
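Memoria itself is private, so nothing below reflects its actual code; as an illustrative sketch under that caveat, pramana-tagged recall might look like:

from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    content: str
    pramanas: set = field(default_factory=set)  # how the agent knows this

def recall_as_fact(nodes: list, min_pramanas: int = 2) -> list:
    """Recall only memories whose provenance clears the multi-pramana bar;
    everything else comes back as hypothesis, not fact."""
    return [n for n in nodes if len(n.pramanas) >= min_pramanas]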

Read & cite

BibTeX

@misc{katakam2026vedantic,
  title  = {Vedantic AI: A Six-Pramana Framework for Hallucination in Large Language Models},
  author = {Katakam, Naren},
  year   = {2026},
  month  = {April},
  doi    = {10.5281/zenodo.19410556},
  url    = {https://doi.org/10.5281/zenodo.19410556},
  note   = {Also available on SSRN (abstract 6518640) and PhilPapers (KATTSM-2). arXiv preprint cs.CL under moderation.},
}

Follow

Stay close to the work

Future papers, build logs, and updates on the pramana framework — delivered at low frequency.

Double opt-in. Unsubscribe any time. No third-party trackers.