March 1, 2018

Thin-Slicing an Aviation Data Platform — How One Use Case Validated a 90-Airline Roadmap

For 0→1 technical founders

Every founder I talk to asks the same question in different words: “How do I validate the full vision without building the full thing?” The answer is thin-slicing — but most people do it wrong. They slice by feature, not by end-to-end flow. The slice has to prove the full architecture works, even if it only handles one customer, one input, one outcome.

I’ve applied this across consumer, health-tech, fintech, cross-border payments, and government. The aviation case is the one where the stakes made the methodology sharpen fastest. If you’re building AI-intensive products 0→1, this is the pattern worth stealing — and the failure modes worth knowing before you find them yourself.

What follows is the long-form case study. Skip to "What to steal" at the end if you want the operating principles.


The Problem

A global aviation manufacturer had a data lab with a bold thesis: combine sensor data, weather patterns, and historical flight records to predict safety incidents before they happen. The ambition was real. The execution clarity was not.

Their immediate customer was one major Asian airline, but the roadmap called for scaling to 90+ airlines globally. The core technical challenge — predicting and preventing hard landings, which cost airlines millions per incident — was well understood. What wasn’t understood, and the source of the ambiguity, was what to build first, who owned the data, and how to carve a viable product from a sprawling data ecosystem.

This is a pattern I see constantly in data platform projects. Organizations invest heavily in data infrastructure (and AI capability now), then stall because nobody applied product thinking to the problem. They have data scientists, engineers, and executives — but no clear answer to: “What’s the smallest thing we can build that proves the value of this entire platform?” That was the question I was brought in to answer.

It was also the question that taught me thin-slicing — the methodology that has shaped every 0-to-1 project I’ve done since.

The Approach

I started where I always start with ambiguous 0-to-1 problems: with the people, not the technology. The data lab had extensive sensor feeds and ML models in development, but nobody had mapped how safety officers and flight operations managers actually made decisions day-to-day. Technology without workflow context is just expensive infrastructure.

Hero’s Journey Workshops came first. Structured sessions with Safety and Flight Operations departments to map their actual process — friction points, decision moments, information gaps. This wasn’t about what they said they needed. It was about watching where they hesitated, where they switched between systems, where they relied on gut feel because the data wasn’t accessible. From these sessions, I built detailed personas grounded in observed behavior, not assumed needs.

Value Matrix Development came next. I generated and mapped use cases against two axes: business impact and data feasibility. This is where most data platform projects go wrong — they prioritize by technical impressiveness rather than business value. The value matrix forced hard conversations with stakeholders about what mattered most. Hard landing prediction emerged clearly as the highest-value, most-feasible starting point. It had quantifiable cost impact, available data streams, and a user base that would adopt it immediately if it worked.
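The two-axis prioritization described above can be sketched in a few lines. This is a minimal illustration, not the actual tool used on the project; the use-case names are from the case study, but the scores are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_impact: float   # 0-10, from stakeholder scoring sessions
    data_feasibility: float  # 0-10, from an audit of available data streams

def rank(use_cases):
    # Highest-value, most-feasible first; the product of the two axes
    # is one simple way to collapse the matrix into an ordering.
    return sorted(
        use_cases,
        key=lambda u: u.business_impact * u.data_feasibility,
        reverse=True,
    )

candidates = [
    UseCase("Hard landing prediction", 9, 8),
    UseCase("Data marketplace", 8, 3),
    UseCase("Conversational queries", 6, 2),
]

for u in rank(candidates):
    print(u.name, u.business_impact * u.data_feasibility)
```

The point of the exercise isn't the arithmetic; it's that a shared, explicit scoring forces the "what matters most" conversation that technically driven prioritization avoids.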

Then thin-slicing into a data platform MVP. Rather than prototyping the entire platform vision, I built quick prototypes of the hard landing prediction workflow only — enough to validate critical assumptions with real users, get stakeholder alignment, and estimate development effort. The full platform vision (marketplace, cross-source analytics, conversational queries) became the strategic roadmap — a vision pitch to acquire budget, not the launch plan. Each subsequent release was mapped to specific value milestones, with the multi-airline expansion gated on proven adoption metrics from the first airline.

The Antifragile Core

Aviation safety has an underappreciated antifragile property — in Nassim Nicholas Taleb’s sense (Antifragile): every hard landing, every near-miss, every incident generates data that makes the prediction model more accurate. The system learns from failure by design. The platform was built to exploit this — each incident fed back into the prediction models, reducing the probability of the next one.

This inverted the usual relationship between incidents and value. Most products degrade under stress. A hard landing prediction system gets better under stress. The worse the safety record, the stronger the data foundation for preventing the next incident. This meant the platform wasn’t just solving a problem — it was building a compounding asset. Every flight, every anomaly, every data point reduced the entropy of the system, converting noise into signal.
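The feedback loop can be made concrete with a toy online learner. This is an illustrative sketch, not the production model: a single-observation logistic-regression update, where every recorded landing outcome nudges the weights, so the system literally gets more accurate with each incident:

```python
import math

class HardLandingModel:
    """Toy online model: each landing outcome is a gradient step,
    so every incident improves the next prediction (illustrative only)."""

    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        # Probability of a hard landing given feature vector x
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, hard_landing):
        # Standard logistic-regression gradient step on one observation
        err = self.predict_proba(x) - (1.0 if hard_landing else 0.0)
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err
```

In use, each completed flight calls `update` with its features (descent rate, crosswind, and so on) and its outcome; the antifragile property is exactly that the worst flights contribute the most informative gradient steps.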

The multi-airline expansion roadmap amplified this. Each airline that joins the platform adds its flight data to the collective model, making predictions more accurate for all airlines. The 91st airline gets a better product than the 1st airline did, because it inherits the intelligence accumulated from 90 others. Classic network effect — and the reason the first beachhead airline formed a partnership: to reap the benefits of platform economics as the network grew.

What We Deliberately Didn’t Build

Thin-slicing is fundamentally an act of subtraction. The value matrix forced us to remove features from the roadmap that were technically exciting but low on business value. What we chose not to build defined the MVP as much as what we included:

  • Didn’t build the data marketplace in v1. The most exciting feature in the room — third-party analytical products, cross-airline benchmarks, monetizable data assets. This belonged in the vision pitch to acquire budget. The marketplace depended on problems we hadn’t solved yet: data governance, multi-airline onboarding, pricing models.
  • Didn’t build cross-source analytics before proving single-source value. The vision called for integrating weather data, maintenance logs, sensor feeds, and airport metrics into a unified analytical layer. We sequenced the ambition: prove that sensor data alone could predict hard landings, then layer in additional sources release by release. A strategic call to reduce noise and increase prediction accuracy.
  • Didn’t let technical impressiveness drive prioritization. The data scientists wanted to showcase the most sophisticated models. The value matrix forced business-value-first decisions. Some technically elegant use cases ranked low on business impact and got deprioritized.
  • Didn’t build the conversational interface for launch. The vision was right — natural language as the primary way to interrogate aviation safety data. But the foundation had to come first: clean data pipelines, proven prediction accuracy, adopted dashboards. You earn the right to build the elegant interface by proving the underlying intelligence works.

What We Built

The MVP centered on one specific, expensive problem: hard landing prediction for the Asian airline. The platform ingested sensor data from aircraft engines, weather patterns, historical landing records, location-specific variables, and airport-specific metrics to generate G-force predictions and risk alerts — delivered through a custom-built tablet app for safety officers.
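The alerting step at the end of that pipeline is simple to sketch. The thresholds below are invented for illustration; real hard-landing limits are aircraft-type- and operator-specific:

```python
def landing_risk_alert(predicted_g, caution_g=1.8, alert_g=2.1):
    """Map a predicted touchdown G-force to an operational alert level.

    Thresholds are illustrative placeholders, not real certification limits.
    Returns (level, recommended_action) for the safety officer's tablet view.
    """
    if predicted_g >= alert_g:
        return "ALERT", "Review approach profile; brief crew on go-around criteria"
    if predicted_g >= caution_g:
        return "CAUTION", "Monitor descent rate and wind conditions on final"
    return "NORMAL", "No action required"
```

The design point is that the prediction never ships alone: every alert carries a mitigation recommendation, because a number without a next step doesn't change a safety officer's behavior.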

The interface was designed around three core tasks that mapped directly to the personas: monitoring safety metrics through a real-time analytics dashboard with heatmaps and trend visualizations, conducting deep-dive analysis on specific incidents and patterns, and receiving actionable predictive alerts with mitigation recommendations integrated into existing operational workflows.

Beyond the MVP, the product vision included an AI-assisted conversational interface for natural language data queries (this was 2018, pre-LLM — we called it a chatbot), a data marketplace for third-party analytical products, cross-source data integration layers, and customizable role-based dashboards. This vision document secured funding and strategic buy-in. The roadmap structured these capabilities as value releases, each one unlocked by adoption and outcome metrics from the previous phase.
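The gating logic behind "each release unlocked by adoption and outcome metrics" is worth making explicit. A minimal sketch, with hypothetical metric names and gate values:

```python
def phase_unlocked(metrics, gates):
    """The next roadmap phase unlocks only when every adoption/outcome
    gate from the previous phase is met. Returns (ok, missed_gates).
    Metric names and floors below are illustrative, not the real gates."""
    misses = [name for name, floor in gates.items() if metrics.get(name, 0) < floor]
    return (not misses, misses)

gates = {
    "weekly_active_safety_officers": 20,
    "alerts_actioned_pct": 60,
}
ok, misses = phase_unlocked(
    {"weekly_active_safety_officers": 25, "alerts_actioned_pct": 48},
    gates,
)
# Phase stays locked: adoption is there, but alert follow-through isn't.
```

Writing the gates down as data rather than judgment calls is what made the "re-selling of patience" (discussed below) tractable: the conversation becomes "which gate are we missing," not "why aren't we building the marketplace yet."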

Results

  • 23% projected reduction in hard landing incidents, per initial simulations. As an external consultant, my involvement ended after the build; we handed control back to the client team with clarity on what to measure and how. The exact production number wasn’t shared, but the team reported strong results, and the next phase was funded.
  • 4% reduction in unplanned aircraft downtime via predictive maintenance insights — substantial cost savings per aircraft.
  • Strategic roadmap secured $100M+ in funding for multi-phase expansion to 90+ airlines.
  • Data governance framework established — ownership agreements across stakeholders that had been blocking progress for months.

What I’d Do Differently Today

Two things stand out. First, I’d tackle data governance on day one, not as a parallel workstream. We ran product discovery and data ownership negotiations simultaneously, which created unnecessary drag. Today I’d make data ownership agreements a prerequisite gate before any product workshop. No clear data ownership, no product to build. Sequence matters.

Second, the conversational interface we designed in 2018 was limited by the technology of the era. Today, with actual LLMs and AI agents, I’d design the entire platform interaction model around natural language as the primary interface — not a feature bolted onto a dashboard, but the core way users interrogate their data. The dashboard becomes the visualization layer for AI-driven analysis, not the other way around. The fundamental interaction design was right; the technology has finally caught up.

The Methodology That Compounded

This project is where thin-slicing crystallized for me as a methodology: carving the smallest viable piece from an ambitious vision that proves the full value thesis.

Thin-slicing became my go-to methodology for every subsequent 0-to-1 project. I used it in pharmaceutical R&D (where I thin-sliced the product but missed the prerequisites), in government digital transformation (where I thin-sliced the agency scope), and in every platform project since. The methodology itself compounds: each application teaches a new failure mode.

But thin-slicing has a known failure mode: organizational impatience. When leadership funded the full vision, they expected the full vision or a substantial part of it. The phased approach, gated on adoption metrics, required continuous re-selling of patience and showcasing value at each phase. And the other failure mode I discovered in the next project: you can thin-slice the product but forget to thin-slice the prerequisites. The methodology is only as strong as the honesty about what else needs to be true for the slice to work.


What to steal

If you’re a 0→1 technical founder building an AI-intensive product, here are the operating principles:

  1. Slice end-to-end, not feature-by-feature. One input, one output, one stakeholder. Everything in between has to work, even if only at depth-1.
  2. Pick the slice where failure teaches most, not where failure is safest. The slice should compress your learning rate. In aviation it was hard landings — observable, consequential, bounded.
  3. Ship to one real user with real stakes. Internal dogfooding is a different game with different feedback loops. Real customers with real downside surface the failure modes that matter.
  4. Design the full vision to secure funding; ruthlessly thin-slice for delivery. The vision is the story you tell stakeholders. The MVP is the bet you actually make. Both are real artifacts. They serve different purposes.
  5. Thin-slice the prerequisites, not just the product. Data governance, regulatory clearance, distribution access — if these aren’t earned, the slice can’t ship. Sequence the unlocks.
  6. Pick problems with antifragile structure. Look for systems that get better under stress — where each failure improves the next prediction. Network-effect data products are the canonical case.

The methodology compounds — but only if you stay honest about what the slice actually proved versus what you’d hoped it proved. Most people skip that audit. Don’t.