MVP: Stop Shipping Features, Start Killing Risk

Most teams don’t fail because they ship too slowly. Speed is rarely the real villain. In fact, many teams move fast, hit deadlines, close sprints, and still end up nowhere meaningful.

They fail because they ship the wrong thing with high confidence.

That’s the dangerous part. Not ignorance, but confident ignorance. Polished features. Clean architecture. Well-run ceremonies. And underneath it all, an assumption that was never truly tested.

An MVP is not “version 0.1 of your full product.” It’s not a lighter UI. It’s not fewer features. It’s not your roadmap, compressed.

An MVP is the cheapest test that can validate or kill your riskiest assumption.

Notice the word kill. If your “MVP” can only confirm your idea but cannot clearly invalidate it, then it’s not an experiment. It’s a performance. You’re acting like you’re learning, but you’re actually protecting the idea.

A real MVP creates tension. It forces a binary moment. Did we get the signal we defined in advance, or not?

If your MVP cannot produce a clear decision (build, pivot, or stop), then it isn’t an MVP.

It’s just a smaller backlog wearing experimental clothing.

The only definition that matters

Use this sentence as your quality gate:

An MVP is the minimum effort needed to trigger a high-signal learning event.

It is like playing Marco Polo: with your eyes closed, you shout “Marco!” (your experiment) and try to tag the other players as fast as possible. Each answering “Polo!” (your decision metric) tells you how far you are from the goal and which way to move.

High-signal means you can clearly decide what to do next:

  • Persevere (Double Down): The hypothesis is validated. Focus on reducing friction in the core loop, improving retention, and preparing for the next stage of growth or complexity. You are getting closer to the goal and you need to move in that direction.
  • Pivot (Change Direction): The core value is there, but the delivery or target is off. This could be a Customer Segment Pivot (right product, wrong user) or a Value Prop Pivot (right user, wrong problem). You are getting further from the goal, so you need to change direction.
  • Stop (Kill the Bet): The signal is clear: the market doesn’t need this, or the economics don’t work. Celebrate the “fail-fast” and redirect resources to a more promising hypothesis.

Use the following 5-step execution loop to design and run your MVP:

  1. Name the single riskiest assumption
    Example: “Ops managers will pay €99/month to cut weekly reporting time by 30%.”

  2. Pick the cheapest credible experiment
    Don’t code by default. Start with the lightest test that can prove behavior.

  3. Define one decision metric
    Not pageviews. Use behavior tied to value: activation, retention intent, willingness to pay.

  4. Set a threshold before running
    Example: “If <25% of qualified users complete the core action, we pivot.”

  5. Make the call fast
    No post-rationalization. Data beats opinion.
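The loop above can be sketched as a tiny decision rule. The function name and numbers below are illustrative, borrowed from the examples in steps 1 and 4; this is a sketch of the discipline, not a real framework:

```python
# Minimal sketch of the 5-step loop as code.
# All names and numbers are illustrative, not a real framework.

def decide(qualified_users: int, completed_core_action: int,
           pivot_threshold: float = 0.25) -> str:
    """Apply the pre-committed threshold (step 4) to the experiment's result."""
    if qualified_users == 0:
        return "invalid: recruit qualified users first"
    rate = completed_core_action / qualified_users
    # The threshold was fixed BEFORE running the test: no post-rationalization.
    return "persevere" if rate >= pivot_threshold else "pivot"

# Example: 40 qualified ops managers saw the offer, 6 completed the core action.
print(decide(qualified_users=40, completed_core_action=6))  # 6/40 = 15% < 25% -> "pivot"
```

The point is not the code; it is the discipline. The threshold is a constant fixed before the experiment runs, so the result cannot be argued away afterwards.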

Types of MVP (and when to use each)

There are different types of MVP, each suited to a different question and a different risk. Here are the most common:

Smoke Test MVP (landing page + CTA)

Best for: demand validation, message-market fit
Signal: click/signup/waitlist from the right ICP
Risk: false positives if traffic is untargeted

A smoke test MVP is a pre-development validation technique used to measure market demand before building a product. You create a landing page, ad campaign, or prototype that simulates the product’s value proposition, then gauge user interest through clicks, sign-ups, or “buy now” interactions.

Concierge MVP

Best for: understanding workflow + willingness to pay
Signal: repeat usage and explicit payment intent
Risk: doesn’t test scalability (by design)

A Concierge MVP is a strategy where you deliver a service manually to a small group of users, simulating an automated, high-tech solution to validate a business idea without upfront software development. By providing a highly personalized, hands-on experience, you learn directly from customer feedback and prove demand before investing in automation.

You deliver value manually behind the scenes.

A well-known example is Food on the Table: the founder manually created grocery lists and meal plans based on user preferences before building a system.

Wizard of Oz MVP

Best for: testing UX and perceived value before heavy engineering
Signal: completion of core flow, return behavior
Risk: operational overhead if prolonged

A Wizard of Oz MVP is a product strategy where a functional, user-facing interface is presented, but all backend processes are performed manually by humans, not software. This allows startups to test, validate, and gather data on new ideas without investing in expensive, full-scale automation.

Looks automated, but part of the system is manual.

A well-known example is Zappos: founder Nick Swinmurn took pictures of shoes in local stores, posted them online, and bought/shipped them manually when orders arrived.

Single-Feature MVP

Best for: validating core product utility
Signal: activation + repeated use of that one feature
Risk: teams sneak in “just one more feature”

A single-feature MVP consists of launching with only one core, high-impact feature that solves a specific problem, allowing startups to validate the business idea, cut initial development cost sharply, and gather targeted user feedback. It tests viability and desirability by delivering immediate value to early adopters, rather than building a complex, fully featured app.

One painful job-to-be-done solved end-to-end.

A well-known example is Foursquare, which began solely as a check-in application.

Pre-sale MVP

Best for: validating commercial intent early
Signal: money, signed commitment, pilot start date
Risk: requires clear scoping and trust

A pre-sale MVP (Minimum Viable Product) is a strategy to validate market demand by selling a product, or the promise of one, before it is fully built. Using techniques like crowdfunding, landing pages, or demo-based sales, it tests willingness to pay, minimizes financial risk, and gathers crucial feedback to guide development.

Sell before building fully (paid pilot, LOI, deposit).

While both are low-fidelity validation techniques used before building a full product, a Pre-sale MVP is a “high-commitment” version of a Smoke Test MVP. A Pre-sale MVP takes the validation a step further by actually collecting money. It proves that customers don’t just like the idea—they are willing to pay for it now to get it later.

The Smoke Test MVP acts like a “lure” to see if anyone is interested. You might run an ad to a landing page for a non-existent product. If they click “Buy Now,” they get a message saying, “We’re not launched yet, join the waitlist!”

No-Code / Prototype MVP

Best for: flow validation and usability testing
Signal: task completion and friction points
Risk: can mask real performance constraints

No-code MVPs allow entrepreneurs to rapidly build and launch functional prototypes—often in days rather than months—to validate business ideas without writing code. By using drag-and-drop tools like Bubble, Adalo, or Softr/Airtable, you can create cost-effective, iterative MVPs, significantly reducing the risk of failure and allowing for quick user feedback.

Figma, no-code stack, or scripted backend for fast iteration.

Steps to Build a No-Code MVP

  1. Define the Core Value: Identify the single most important problem your product solves.
  2. Map User Journey: Sketch the essential screens and actions.
  3. Select Tools: Choose tools that match your technical comfort level and functional needs (e.g., Airtable for data, Bubble for logic).
  4. Build & Launch: Assemble the MVP, focusing on functionality over design.
  5. Iterate: Gather user data, test, and improve the product continuously.

Content MVP

Best for: thought leadership + problem resonance in niche markets
Signal: qualified inbound, replies, discovery calls
Risk: engagement vanity if audience is too broad

A Content Minimum Viable Product (MVP) is the simplest, most essential version of a content strategy or asset designed to test audience engagement with minimum effort and investment. It helps creators validate topics, formats, and value propositions through quick, iterative feedback loops before investing heavily in production.

Teach the solution before building it (article, demo video, teardown, webinar).

Quick selector: choose the MVP type by risk

Use this table to decide which type of MVP to build, based on the question you need answered, the signal that would answer it, and the risk each test carries.

| Question | MVP Type | Signal | Risk |
| --- | --- | --- | --- |
| “Do they care?” | Smoke test / Content MVP | Qualified inbound, replies, discovery calls | Engagement vanity if audience is too broad |
| “Will they pay?” | Concierge / Pre-sale MVP | Money, signed commitment, pilot start date | Requires clear scoping and trust |
| “Can they use it?” | Prototype / Wizard of Oz | Task completion and friction points | Can mask real performance constraints |
| “Will they come back?” | Single-feature thin slice | Activation + repeated use of that one feature | Teams sneak in “just one more feature” |
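For teams that like their checklists executable, the same selector can be expressed as a simple lookup. The entries just mirror the rows above; this is a sketch, not a framework:

```python
# The selector table as a lookup: riskiest open question -> recommended test.
# Entries mirror the table above.
MVP_SELECTOR = {
    "Do they care?": {
        "mvp_type": "Smoke test / Content MVP",
        "signal": "Qualified inbound, replies, discovery calls",
        "risk": "Engagement vanity if audience is too broad",
    },
    "Will they pay?": {
        "mvp_type": "Concierge / Pre-sale MVP",
        "signal": "Money, signed commitment, pilot start date",
        "risk": "Requires clear scoping and trust",
    },
    "Can they use it?": {
        "mvp_type": "Prototype / Wizard of Oz",
        "signal": "Task completion and friction points",
        "risk": "Can mask real performance constraints",
    },
    "Will they come back?": {
        "mvp_type": "Single-feature thin slice",
        "signal": "Activation + repeated use of that one feature",
        "risk": "Teams sneak in 'just one more feature'",
    },
}

# Pick your riskiest open question, then read off the test and its failure mode.
print(MVP_SELECTOR["Will they pay?"]["mvp_type"])  # Concierge / Pre-sale MVP
```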

Anti-patterns to avoid

Even an MVP can be done wrong. Here are the most common anti-patterns:

  • “MVP” that includes half the roadmap
  • Success criteria defined after seeing results
  • Vanity metrics (impressions, generic traffic)
  • No explicit kill criteria
  • Team emotionally attached to solution, not problem

Practical example

Let’s make this less abstract and more real.

Imagine you wake up with the idea: “I want to build an AI reporting assistant for e-commerce operators.” In your head it’s beautiful. Clean dashboards. Smart insights. Maybe even predictive suggestions. It feels like a SaaS already.

Now, there are two very different roads you can take.

The seductive one — the “I am building a real company” path — looks like this:

You start by setting up authentication. Then a proper dashboard. Then Stripe billing. Then user roles. Then admin panels. Then integrations with Shopify, WooCommerce, Meta Ads, Google Ads. You spend weeks polishing infrastructure. It feels productive. It looks serious.

But here’s the uncomfortable truth: none of that proves anyone actually wants the thing.

You’ve bought complexity before buying certainty.

The alternative path is humbler. Almost embarrassingly simple.

Recruit roughly ten qualified e-commerce operators. Not random people. Real operators with real revenue and real reporting pain.

For two weeks, don’t build the product. Run a concierge workflow. You manually pull their data. You manually generate insights. Maybe you use AI tools behind the scenes. But to them, it feels like a service.

At the end of the two weeks, you ask a brutally clear question:

“Would you continue next month on a paid plan?”

You define a threshold in advance. For example: at least 4 out of 10 must commit to paying.

If the answer is yes, you don’t automate everything. You automate the single highest-friction step first — the one that consumed most of your manual time.

Notice what happened here.

You didn’t start by building a product. You started by testing a market.

You didn’t optimize code. You optimized for signal.

The goal isn’t to look like a startup. It’s to reduce uncertainty. Every early decision should be judged by one question: “Does this increase my confidence that people will pay?”

This is how you earn the right to build complexity. Not by faith. Not by vibes. By evidence.

And there’s something almost philosophical here: complexity is cheap for engineers. Certainty is expensive. So the disciplined move is to buy certainty first.

Once someone is willing to pull out a credit card, architecture suddenly becomes a very good problem to have.

Final pre-build checklist

Before you open your editor. Before you sketch the system diagram. Before you convince yourself that “this time it’s different.”

Pause.

Most early-stage mistakes don’t come from bad code. They come from unexamined assumptions. We fall in love with an idea, then we quietly start protecting it. We replace questions with features. We replace doubt with architecture.

An MVP isn’t a smaller product. It’s a structured experiment.

So before building anything, run through this checklist with almost uncomfortable honesty:

  • What assumption are we testing right now?
  • What metric decides the outcome?
  • What threshold triggers pivot/kill?
  • What is the cheapest valid test?
  • How fast can we run the next loop?

If these answers are fuzzy, you’re not doing MVP. You’re doing hope-driven development.

And hope, while emotionally satisfying, has a terrible conversion rate.


References:

  • Eric Ries, The Lean Startup
  • Don McGreal, Ralph Jocham, The Professional Product Owner