Read enough product post-mortems and the same pattern starts showing up.
The vocabulary changes — "ahead of its time", "wrong execution", "team didn't sell it well", "bad market timing", "wrong audience". The underlying cause doesn't. Almost every product that fails fails for the same reason: the team built something users didn't want enough to change their behaviour for.
Everything else is downstream of that.
What "didn't want enough" actually means
A product is usually in one of three states:
Users don't know they want it. Solvable with marketing and onboarding. Not the cause of most flops.
Users want it a little. They sign up, click around, abandon. The product solves a real but minor problem. It loses to the existing alternative because switching costs more than the gain. This is the largest flop category and the hardest to admit to.
Users actually want it. They activate. They retain. They tell other people. The growth shape is fundamentally different and obvious from the data within weeks.
The teams that flop are almost always in state two and convince themselves they're in state one. They believe more marketing, better positioning, a slicker onboarding will move the metric. Sometimes it does, marginally. Almost never enough to change the trajectory.
Why teams keep getting this wrong
Three habits that reliably produce the same outcome:
Validation shopping. Talking to users to confirm what the team already wants to build. Cherry-picking the encouraging signals. Discounting the lukewarm ones. The result is research that produces a unanimous yes for a product that the market eventually rejects unanimously. The information was there. The team didn't want to see it.
Confusing intent with behaviour. Users say they'd pay for it. They sign up to a waitlist. They give the demo positive feedback. Then nothing. Stated intent is one of the weakest signals in product. Behaviour is the only thing that actually predicts whether a product will work — and behaviour can only come from a product that's real enough to use.
Optimising the wrong loop. Once a product is shipping, the team has data. The data has problems. The team optimises against the problems and the metrics inch up. That feels like progress. Underneath, the fundamental answer hasn't changed — users still don't want the product enough — and the optimisation work is sandcastle work. You can polish a step-three drop-off forever and never fix the fact that the people getting to step three were never going to stay anyway.
What an honest post-mortem looks like
The useful question isn't "what went wrong?" It's "did users want this?"
Look at the smallest, earliest cohort. Not the headline numbers — the first 50 users who actually used the product. What did they do? How long did they stay? What did they tell other people about it? If that cohort wasn't excited, the product was always going to be in trouble. The later cohorts only ever look like the early ones, slightly worse.
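A minimal sketch of that early-cohort check, assuming an events log with hypothetical columns (user_id, event, timestamp); the file name and the 28-day cutoff are placeholders, not a prescription:

```python
# Sketch of an early-cohort check against a raw events log.
# Column names and file path are hypothetical stand-ins.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# The first 50 users, ordered by the time of their first event.
first_seen = events.groupby("user_id")["timestamp"].min().sort_values()
early_cohort = first_seen.head(50).index

cohort_events = events[events["user_id"].isin(early_cohort)]

# Days each early user stayed active, first event to last.
lifespan = cohort_events.groupby("user_id")["timestamp"].agg(
    lambda ts: (ts.max() - ts.min()).days
)

print(f"Median days active: {lifespan.median():.1f}")
print(f"Still active after 28 days: {(lifespan >= 28).mean():.0%}")
```

If the median is measured in hours and the 28-day number rounds to zero, no later cohort is going to rescue it.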
Look at the qualitative, not just the quantitative. The metrics tell you whether something worked. They don't tell you why. The why is in the conversations: the support tickets, the user interviews, the cancellation reasons. Read them. They're almost always saying the same thing in different words: this didn't solve a problem big enough to change what I do.
Be willing to call the actual cause. The team narrative will gravitate toward execution failures, market conditions, competitor moves. Some of that is real. Most of it is comfort. The comfort version of the post-mortem doesn't help the next product.
What to do about it
Two practical shifts that prevent the same outcome next time.
Build less, ship faster, kill quicker. The longer the team builds before shipping, the more committed everyone gets to the answer being yes. Shorter cycles produce more honest post-mortems because there's less ego and money tied up in any individual bet.
Define want-it-enough before you start. What does behaviour look like if users genuinely want this? Activation rate, retention curve, share rate, the specific actions that would prove the love. Lock those in pre-launch. Then if the data doesn't get there, you have a contract with yourself that says: this isn't the one. Without a pre-defined contract, every result can be rationalised as "promising".
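One way to make that contract literal, with hypothetical metric names and threshold values; the only point that matters is that the numbers are written down before any data exists:

```python
# A pre-launch "want-it-enough" contract. Thresholds here are
# hypothetical; pick yours before launch and don't move them after.
WANT_IT_ENOUGH = {
    "activation_rate": 0.40,   # completed the core action in week 1
    "week4_retention": 0.20,   # still active four weeks in
    "share_rate": 0.05,        # invited or referred someone
}

def verdict(observed: dict) -> bool:
    """True only if every pre-registered threshold is met."""
    misses = {
        metric: (observed.get(metric, 0.0), target)
        for metric, target in WANT_IT_ENOUGH.items()
        if observed.get(metric, 0.0) < target
    }
    for metric, (got, target) in misses.items():
        print(f"MISS {metric}: {got:.0%} observed vs {target:.0%} required")
    return not misses

# After launch: feed in the real numbers and accept the answer.
print(verdict({"activation_rate": 0.31,
               "week4_retention": 0.12,
               "share_rate": 0.02}))
```

Run it once against the real post-launch numbers and take the boolean at face value; rerunning it with friendlier thresholds is the rationalisation the contract exists to prevent.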
The shift
Most flops are the same flop. Different domain, same cause: users didn't want it enough to change their behaviour.
The teams that get fewer flops aren't smarter. They're better at noticing when the answer is no and having the discipline to act on it before they've spent another year building.
If you're shipping an MVP that's a real test, not a pre-cooked answer, you've already won most of this fight. And discovery as a habit is what catches the no early enough to do something about it.
