But What's the MVP of the MVP?
You keep using that word. I do not think it means what you think it means.
Inigo Montoya, The Princess Bride
If you’ve been anywhere near product development, you’ve either heard or muttered (probably both!) something along the lines of “What’s the MVP of this?”, “We need this feature in the MVP.”, or “We can’t go live without that!”
When MVP is used this way, the perspective is backwards: the cart is in front of the horse. The problem is that we cannot objectively explain how we know we are right. How do we know that this feature or that functionality needs to exist at all, let alone in the first release? How do we know what will make the product viable? And even this line of questioning misapplies MVP.
The MVP misunderstanding
MVP, despite the name, is not about creating minimal products.
Yes, product is the third word in the acronym, which may be why we tend to think about MVP in the narrow context of a marketable or releasable solution. Even much of what the Lean Startup community talks about is built on this assumption.
Using, or mis-using, MVP in this way invites jumping to a solution based on the perspective of one or a few individuals (who are probably not the end user).
Along the way to the perceived MVP solution, we miss several key opportunities to minimize risk, conserve resources, and build a customer base while figuring out what the solution really should be. Yes, we can build the airplane while flying it.
When we minimize the time we spend on exploring the problem and jump to a solution, even one we think is an MVP, we increase the risk of not actually solving the problem.
- The viability of the solution isn’t tested until the end of the effort, once you ship or launch it.
- The resulting product is built on untested, unproven assumptions.
- The end result is usually a product that exceeds the bare minimum requirements (aka viability) to achieve the objective.
When we use MVP in this manner, we do the concept, and those we work with, a disservice. The acronym is thought of with disdain and people stop using it. (Which, if you’re going to mis-use it, may not be a bad thing.) Then the entire concept gets dropped, and you’re back to waterfall processes launching expensive, unproven solutions to problems that may or may not exist.
Minimum Viable MVP (a definition)
It is vain to do with more what can be done with less.
William of Occam, Occam’s Razor
To better understand MVP, let’s define the term by starting with the end.
Product. This is the output of our work. Frequently this will be tangible - a landing page, a prototype, designs or mockups, even a working solution - but can also be observation time or interviews. This is the least restrictive part of MVP.
Viable. The threshold by which something is either useful for a given purpose, or it is not. This is binary. Your product is or is not viable.
Minimum. While viability is binary, solutions span a spectrum, from barely useful to completely over-engineered. And since we all suffer from Complexity Bias, we need to bound how far past viable our solution goes. By including minimum we are aiming at something that just barely meets our needs.
In practice, MVP should be used exclusively in the context of a Build-Measure-Learn cycle (from The Lean Startup methodology). And this cycle is focused on testing hypotheses. Thus, an MVP is the least amount of work we need to do to empirically disprove our hypothesis. We say MVP, but only because it’s much easier to say and remember than MVPTCBUTDaH (minimum viable product that can be used to disprove a hypothesis).
MVP to MVMP
The output of a series of MVPs, if you persevere, is an MVMP — a minimally viable marketable product. But how do you go from MVP to MVMP?
Focus on Disprovability
No amount of experimentation can ever prove me right; a single experiment can prove me wrong.
Albert Einstein (paraphrased)
In empiricism, theories and hypotheses must be falsifiable. In other words, if you observe something that contradicts your hypothesis, you’ve proven that your hypothesis is false. And this lets us revise our hypothesis and run new tests!
When testing, your efforts should be spent (and your tests structured) to disprove your hypothesis. We focus on disprovability because the level of effort to falsify a hypothesis is far less than the effort to prove it true.
The discovery of black swans is commonly used to illustrate the concept of disprovability. Prior to 1697, Europeans believed that all swans were white. (Swans in Europe were white.) We can use this to make a falsifiable statement — “all swans are white.” If we focus our efforts on proving this statement is true, we need to observe all swans. However, if we focus on disprovability, we only need to find one non-white swan. (And in 1697, Willem de Vlamingh did just that while exploring parts of Australia.)
Disprovability allows us to explain, with confidence, how we know we are right: because we tried to prove ourselves wrong and could not. Another benefit of focusing on disprovability is that it helps us avoid confirmation bias.
Therefore, your Minimum Viable Product should be the least amount of work needed to disprove your hypothesis.1
Problem validation
If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about the solution.
Attributed to Albert Einstein
With a mindset focused on disprovability, the next step, and the one most frequently skipped, is understanding the problem. Our partners bring us a request to build a widget that they think will solve their problem. Or, being the problem-solving product people we are, we quickly come up with ideas to fix this manual process or that workaround.
By skipping problem exploration, we end up with a sitcom solution: the business a sitcom’s supporting character is struggling to launch, which sounds plausible but would fail if anyone actually tried it. That’s because no effort was made to validate the problem the character’s business is meant to solve. (Barely paraphrased from Paul Graham.)
You make money in real estate when you buy the property. In product development, you build successful products by relentlessly exploring the problem. In organizations where solutions are brought to us, we need to be very adept at guiding our partners to step back and search for root causes.
We need to constantly be asking ourselves:
- Is this really a problem?
- How do we know?
- Where did the identification of the problem come from?
- Is the problem we are solving for actually a symptom of something else?
- Is this problem worth solving?
- How do we know that?
- Can we quantify the impact of this problem in terms of lost time, money spent, or even customer frustration?
- Are current work-arounds painful enough that customers will embrace (i.e., pay for) a solution?
So how does MVP fit with problem exploration?
First, take a Gemba (or genba, if you will) walk. Personally observe the situation. Take time to take it in. Spending a few hours, a day, even a week observing is a minimally viable product that can be used to disprove a hypothesis. During our observation time, did we see the problem occur? If not, this may be all you need to decide it’s not really a problem, or not worth solving if the occurrence is that rare. And the only MVP you had to build was the effort to get to the genba and the time spent observing.
Second, as quickly as possible, quantify the problem. Can you determine frequency, lost time, or lost money? Ultimately, you want a rough estimate of the scope of the problem. Your MVP to quantify if this is really a problem is only the time spent looking at the data.
Third, as quickly as possible, quantify the risk of not addressing the problem. How much could, or will, we lose if we don’t pursue a solution? Our effort to solve the problem will cost us something (time and money); will this be more or less than the cost of letting the problem continue? Your MVP to determine if this is a problem worth solving is simply the time spent looking at the data.
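To show how cheap this quantification can be, here is a back-of-envelope sketch. Every number in it (occurrence rate, time lost, hourly rate, fix estimate) is hypothetical, stood in purely for illustration; the point is that the whole "MVP" is a few minutes of arithmetic over data you already have.

```python
# Back-of-envelope: cost of the problem vs. cost of a fix.
# All numbers below are hypothetical, for illustration only.

occurrences_per_week = 12         # how often the problem happens
hours_lost_each_time = 0.5        # time burned per occurrence
blended_hourly_rate = 85.0        # fully loaded cost of an hour
weeks_per_year = 48               # working weeks in a year

# Annualized cost of letting the problem continue.
annual_cost_of_problem = (
    occurrences_per_week * hours_lost_each_time
    * blended_hourly_rate * weeks_per_year
)

estimated_cost_of_fix = 30_000.0  # rough build estimate

print(f"Annual cost of problem: ${annual_cost_of_problem:,.0f}")
print(f"Estimated cost of fix:  ${estimated_cost_of_fix:,.0f}")
print("Worth pursuing" if annual_cost_of_problem > estimated_cost_of_fix
      else "Let it be")
```

With these particular numbers the problem costs less per year than the fix, so the data says to walk away; change the inputs and the verdict flips, which is exactly the conversation you want to have before building anything.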
These are a few suggestions to highlight how little work (an MVP) is needed to determine whether the problem exists and whether it’s worth solving.
A skeptical and data-driven approach to problem exploration is invaluable. Within this context, take greater pride in work not done.
Solution validation (incremental)
If you’re not embarrassed by the first version of your product, you’ve launched too late.
Reid Hoffman
Using MVPs to incrementally build and validate solutions is the more familiar application. When exploring solutions, minimally viable products initially need only test a hypothesis — not be a viable marketable product.
Consider the following approaches.
- Build a landing page for a product that does not exist, but act like it does. Then drive traffic to the site with Google Ads (or another ad platform) and track how many people click the ‘Buy Now’ button. (This method is spelled out in great detail in The 4-Hour Workweek by Tim Ferriss.)
- Introduce new elements to existing solutions. Use non-functional buttons or links that appear to solve a problem (or lead to a solution) and track how many people click on them.
- Do things that don’t scale. Make the fix appear to be automated by doing the behind-the-scenes processes manually. Personally set up new users. Use a spreadsheet as your database — and manually manage it! Each of these is an MVP that provides learning and will guide your future iterations. (For the second time in this post, paraphrased from Paul Graham.)
- Build (ugly) prototypes with off-the-shelf parts. For electronics, leverage Arduinos and/or Raspberry Pis to build disposable proofs of concept. Then deploy them and observe how they do, or don’t, move you closer to a solution.
- Create mock-ups that simulate the experience. If the solution needs to have specific dimensions (size and weight), tape together cardboard (from your Amazon orders) and fill it with weights. Then ask your customer to hold it as if it were a finished product. You’ll learn whether this is an acceptable form factor.
Keep in mind that if one of these MVP tests fails, that’s a good thing! Now you know what doesn’t work. By persevering, pivoting to a different approach, and stringing together a series of Build-Measure-Learn cycles, the end result is a Minimum Viable Marketable Product.
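To make the "measure" half of a test like the landing-page or fake-door experiments concrete, here is a minimal sketch of scoring one. The visitor count, click count, and the 2% viability threshold are all assumptions for illustration; the one real rule embedded here is that the success criterion is set before the test runs.

```python
# Score a fake-door / landing-page MVP test.
# Numbers and the 2% threshold are hypothetical; define your own
# success criterion *before* running the test, not after.

visitors = 1_500             # unique visitors driven to the page
buy_now_clicks = 18          # clicks on the non-functional 'Buy Now'
viability_threshold = 0.02   # hypothesized minimum conversion rate

conversion_rate = buy_now_clicks / visitors

if conversion_rate < viability_threshold:
    # Hypothesis disproved: measured demand fell below the bar.
    print(f"Disproved: {conversion_rate:.1%} < {viability_threshold:.0%}")
else:
    # Failed to disprove: carry the learning into the next
    # Build-Measure-Learn cycle.
    print(f"Not disproved: {conversion_rate:.1%} >= {viability_threshold:.0%}")
```

Either branch is a win: a disproved hypothesis tells you what not to build, and a surviving one earns another cycle.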
Conclusion
While the disdain is more likely in larger organizations, mis-using MVP crosses companies of all sizes and stages, from startups to the Fortune 100. This mis-use leads to risky and costly solutions to problems that may not even exist — or may not be worth the effort. Instead, we owe it to ourselves and our customers to reclaim the MVP approach to making our world better.
FOOTNOTES
1 There’s probably an entire blog post on balancing statistical confidence in the outcome of your tests against the expense and the need to move forward to the next Build-Measure-Learn cycle.