Somewhere along the way, the phrase “minimum viable product” (MVP) got corrupted to mean “what’s the crummiest thing we can get away with taking to market,” usually in order to be there first or beat some other arbitrary deadline. And wow, that’s too bad, because the MVP concept is actually super-useful in refining designs (and reducing risk) as part of an overall product roadmap.
Design is all hypothesis. You have ideas and assumptions about how a product can meet a goal, and you execute against those assumptions. In good hands, those assumptions are backed by research and experience, but even then, the design hypothesis won’t be proven out until it’s actually put to the test with real users. The best thing you can do is to find ways to prove/disprove that hypothesis as early as possible, and hopefully with a minimum of expense or effort.
As my pal Josh Seiden likes to put it: “‘What’s the smallest thing I can do or make to test this hypothesis?’ The answer to this question is your minimum viable product, or MVP.” The MVP process, in other words, is not about the take-to-market product but rather about low-fidelity prototypes that let you test assumptions and make adjustments. Depending on your challenge, a minimum viable product could consist of prototypes as basic as:
- A placeholder landing page describing the product (will people buy this?)
- An SMS text interaction (would a bot be useful for this?)
- Price experiments (what will people pay?)
- A manual/analog service, run before building the fully automated system (will the service meet real demand?)
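The price-experiment idea above can even be reduced to back-of-the-envelope arithmetic. This is a minimal sketch with entirely hypothetical visitor and purchase counts: show two prices to separate groups of visitors, then compare revenue per visitor to see which pricing hypothesis holds up.

```python
def revenue_per_visitor(price, visitors, purchases):
    """Expected revenue per visitor at a given price point."""
    return price * purchases / visitors

# Hypothetical results from showing each price to 1,000 visitors.
low = revenue_per_visitor(price=9.0, visitors=1000, purchases=80)    # 8% convert
high = revenue_per_visitor(price=19.0, visitors=1000, purchases=50)  # 5% convert

print(f"$9 price:  ${low:.2f} per visitor")
print(f"$19 price: ${high:.2f} per visitor")
# Fewer people convert at $19, but each visitor is worth more,
# so the higher-price hypothesis wins in this made-up sample.
```

In a real experiment you’d want enough traffic for the difference to be meaningful, but even a comparison this cheap is an MVP in the proper sense: a small test of the “what will people pay?” hypothesis.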
All to say: the MVP is not the minimum thing you need for sales, but the minimum thing you need for learning. Iteration and revision are implicit in the whole concept. The MVP should be only a stepping stone to the final product, yet development too often stops at the MVP as an end in itself.
Lars Damgaard recently shared his thoughts about why so many organizations embrace the term MVP without embracing its iterative process:
> This is not how most large corporate organisations work. Like it or not, a lot of corporate organisations work in waterfalls with upfront feature specs, fully-fledged design and harsh deadlines. Also known as building a spaceship.
>
> What often happens in these situations is that the spaceship is heavily reduced as the deadline approaches, and a flawed product goes live with no budget for further iterations or for establishing a learning cycle. Or alternatively it’s implemented in phases, where the first launch is whatever happened to be finished by the deadline. Both of which are also known as pissing the team off. If this is the case, sprinkling some half-baked MVP on top of that process won’t get you anywhere. On the contrary, it might give a false sense of control when essentially the entire organisational structure and product development processes are what need to be adjusted.
>
> What needs to be in place, though, are clear, measurable objectives and KPIs, and a clear definition of viability, which of course depends on the business plan.
It’s nuts to me how many design projects don’t have clearly stated goals and design principles at the outset. In our projects, the first step is to establish clear consensus on the main goals for the business, the user needs the product must address, the assumptions we have about how to achieve them, and how we’ll test and correct for those assumptions. That’s the foundational work that makes the rest of the project move quickly and reliably.
Admitting that you have only a hypothesis and not (yet) a solution is a scary and vulnerable thing to do. But it’s not nearly as scary as baking that “solution” into a product before testing the hypothesis behind it. That’s the real and legitimate value of the MVP.