This is a quick little ramble about the scope of minimum viable products (MVPs), the schedule for creating and delivering them, and finally how those two things play out for developers.
The entire purpose of an MVP is to create the simplest, cheapest expression of a business idea possible, to test whether the market wants the thing you’re making.
Ways to succeed:
Ways to screw up:
Arguably, if you aren’t actually testing a value proposition, you’re making a prototype and not an MVP. I’m unsure if the distinction is worth working into the daily lexicon of your org, but I suspect it may well be.
There are three parts to an MVP, right there in the name: minimum, viable, and product.
Note, dear reader, that if you are an engineer, the only part of that you really have control over is the “product” bit. “Minimal” is the job of your product folks (hopefully with input from y’all!) and “viable” is entirely in the hands of the users (a sad fact bemoaned by both product and engineering).
Anyways, the scope of the MVP should contain:
Things that commonly get included in scope but ideally shouldn’t be:
It’s been said that you should be a little embarrassed about the state of an MVP—and that’s true. A good MVP follows the same sort of domain rule, I’ve found, as courting somebody:
If they’ve decided to like you or give you a chance, you probably can’t do anything wrong. If they’ve decided not to, you probably can’t do anything right.
That being the case, we build the cheapest, simplest MVPs we can and see if we connect with our users on a fundamental level—and if we don’t, it ain’t gonna matter how many hours we spend making a perfect design or days we spend shaving time off a database query.
So, having decided to embark on making an MVP of a particular scope, your org needs to schedule time to work on it.
In theory, this should just be the time required to implement the ideas of the MVP modulo any time constraints imposed by the domain you’re testing. If there’s a big trade show or something coming up, you might want to have the MVP available for testing. If there’s a personnel change anticipated soon, you might want that MVP done before that person leaves (if, say, they’re the only dev that can implement it quickly or whatever).
My hunch is that it probably shouldn’t take more than a month of calendar time to create an MVP good enough to get customer feedback on.
(If you point at some kind of special-sauce AI service or robot thing, I’ll just answer that you are testing the business value to the customer and not the implementation—so mechanical-turk it, hide the operator, or whatever, so that the user gets a functionally-equivalent version of what the engineers will eventually scale.)
Unfortunately, even after taking those things into account, something can still go wrong.
There is great temptation, especially in orgs that have some heavy political investment in product design and “quality”, to refuse to ship an MVP until it meets the standards of the org.
This seems to manifest as:
All of these reduce the predictability of the ship date for the MVP. Further, they increase the expense of the MVP (in terms of engineering and design man-hours, and also in friction as people argue over things like dialog color and whatnot).
This undermines the intent of opting for an MVP instead of a prototype.
If you ask for more features or more flexibility in your MVP process, you need more time.
If you ask for reliable and predictable delivery schedules, you need to stamp out variability due to changing scope.
There is no way around this.
You can ask your developers to take up the slack by working extra hours and putting aside the cognitive dissonance (often non-trivial) of observing a changing situation while being required to assert that the schedule predictability is not affected.
Doing this will antagonize your engineers, not least because seemingly simple variations/additions can absolutely balloon the implementation complexity of an MVP. Given the current state of the art in computing, we can’t—for example—just add a text-box to “search for the right thing” without a pretty good idea of what the business means by the right thing. We similarly—from a previous gig—can’t just “add a sparkline” without considering where that data comes from, where that data is stored, what process is used to summarize the data, and how we should handle cases where the data hasn’t been collected.
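To make the search-box point concrete, here’s a minimal sketch (hypothetical—the function name, the `items` shape, and the `"name"` field are all assumptions, not anything from a real system) showing how even the most naive implementation quietly bakes in product decisions:

```python
# Hypothetical sketch: even a "trivial" search box forces decisions
# that belong to product, not engineering.
def naive_search(items, query):
    # Decision 1: case sensitivity? Here we lowercase both sides.
    q = query.lower()
    # Decision 2: which fields count as "the right thing"?
    #             We assume each item has a "name" field and nothing else matters.
    # Decision 3: substring match or relevance ranking? We do unranked substring.
    return [item for item in items if q in item["name"].lower()]
```

Each of those comments is a question someone has to answer before the feature is actually “done,” which is exactly how a one-line request balloons.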
In healthy orgs, this is inconvenient. In unhealthy orgs, this is indistinguishable from normal sandbagging and can trigger secondary and tertiary political problems, perhaps leading to irreversible personnel changes.
MVPs are meant to be concise, cheap tests of an idea, delivered on time. Changing their scope makes them more expensive and puts any promised timelines at risk.
You can ignore this, but then your developers will pay the price.