Most prioritisation meetings are won by whoever cares the most.

The exec who has a strong feeling about feature X. The senior engineer who finds feature Y intellectually interesting. The sales rep who closed two deals on the promise of feature Z. Each makes the case for their thing, and the loudest, most senior, or most recently frustrated voice usually wins.

That's not prioritisation. That's politics with a backlog.

The teams that ship the right features have a different default — they let the data make the case, and treat the loudest voice as one input among many.

Why opinion-driven prioritisation fails

Three predictable failure modes:

The voices in the room aren't a representative sample of users. The exec is one user (often not even that). Sales has selection bias toward the deals they almost closed. Engineering has bias toward problems that are technically interesting. None of these correlate well with what the actual user base needs. The loudest voice is rarely the most representative.

Recency dominates. The thing someone heard about last week has more weight in the meeting than the pattern that's been quietly true for six months. Memory is short. Data is long.

Conviction beats evidence. A confident assertion sounds more compelling than "we don't have enough data to be sure". So the team consistently picks confident-but-wrong over uncertain-but-better-supported. That's how teams end up shipping the favoured idea instead of the right idea.

What "let the data pick" actually means

It doesn't mean the dashboard makes the decision. Data doesn't have intent. People do.

It means the prioritisation conversation starts from the data, not from opinions. Then opinions are weighed against what the data says, instead of the data being mined for evidence to support a pre-formed opinion.

A few practices that make this concrete:

Start every prioritisation meeting with the user evidence. What's been happening in the last 30 days? Which features got used? Which ones got abandoned? What patterns showed up in support tickets? Twenty minutes of this changes the next two hours of conversation.
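
A minimal sketch of what that 30-day pull might look like, assuming a flat event log in events.csv with user_id, feature, and timestamp columns. The file name, the schema, and the 30-day windows are all illustrative, not a prescribed setup:

```python
import pandas as pd

# Illustrative event log: one row per user action.
# Columns assumed: user_id, feature, timestamp.
events = pd.read_csv("events.csv", parse_dates=["timestamp"])

cutoff = events["timestamp"].max() - pd.Timedelta(days=30)
recent = events[events["timestamp"] >= cutoff]
prior = events[
    (events["timestamp"] < cutoff)
    & (events["timestamp"] >= cutoff - pd.Timedelta(days=30))
]

# Which features got used, and by how many distinct users?
usage = recent.groupby("feature")["user_id"].nunique().sort_values(ascending=False)

# Crude abandonment signal: distinct-user count versus the previous
# 30-day window. Negative numbers are features losing users.
previous = prior.groupby("feature")["user_id"].nunique()
all_features = usage.index.union(previous.index)
delta = usage.reindex(all_features, fill_value=0) - previous.reindex(
    all_features, fill_value=0
)

print(usage.head(10))                   # most-used features, last 30 days
print(delta.sort_values().head(10))     # biggest declines vs the window before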

Quantify the problem before you discuss the solution. "Activation dropped 8% over the last quarter" is a different conversation than "activation feels low". The first invites a precise discussion. The second invites speculation.
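
Getting from "feels low" to a number can be a few lines. A hedged sketch, assuming a signups.csv export with a signup_date and a boolean activated flag; the file, the columns, and the definition of "activated" are your team's to choose:

```python
import pandas as pd

# Illustrative signup table: one row per new user.
# "activated" is whatever your team's definition is
# (e.g. completed onboarding within 7 days).
signups = pd.read_csv("signups.csv", parse_dates=["signup_date"])
signups["quarter"] = signups["signup_date"].dt.to_period("Q")

activation = signups.groupby("quarter")["activated"].mean()

print(activation)                   # activation rate per quarter
print(activation.diff().iloc[-1])   # change vs the previous quarter
```

Now the meeting argues about an 8-point drop and which cohort drove it, instead of arguing about vibes.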

Make people show their evidence. When someone advocates for a feature, ask: what's the user signal that this matters? If the answer is a hunch, that's allowed — but flag it as a hunch and weight it accordingly. Hunches from experienced people are useful. Hunches dressed up as evidence are expensive.

Where opinion still matters

Data-led prioritisation isn't data-only. The data has limits.

It can't tell you what to build that doesn't exist yet. By definition, no user has ever used the feature you haven't built. You're going to use judgment about new bets, and that judgment includes opinions about where the market is going, what users will eventually want, what competitors will do. That's not vanity — that's strategy.

It can't tell you which problem matters more. You can have data on two different problems and still need a value call about which one to solve first. The call is informed by data. It's not made by data.

It can't tell you when to make a long bet. Some things — platform investments, infrastructure, brand — won't show up in next month's metrics. The data on these is always weak. That's where senior judgment is supposed to step in. The mistake is treating every decision as a long bet because the data is inconvenient.

The pattern: data-led for execution decisions, judgment-informed-by-data for strategic ones. Both have a role. Confusing which is which is where teams go wrong.

How to actually run this

Three habits that move the team from opinion-led to data-led:

Build a default that the data leads. The default — what happens when nobody's pushing — should be that the prioritisation reflects the evidence. Override that default deliberately when the strategic case is strong, and document why. Without a default, every decision is a free-for-all.

Make the user evidence accessible. If the support tickets are buried in a tool nobody opens, they don't inform decisions. Pull the patterns into the prioritisation meeting. Same for analytics. Same for user interviews. The team that has the evidence at hand uses it.
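
Even a small script run before the meeting can turn a buried ticket export into a one-pager. A sketch assuming a tickets.csv export with a semicolon-separated tags column; the file and schema are hypothetical, and the point is the shape of the output, not the tool it comes from:

```python
import csv
from collections import Counter

# Hypothetical support export: tickets.csv with a "tags" column
# like "billing;export;slow-load".
counts = Counter()
with open("tickets.csv", newline="") as f:
    for row in csv.DictReader(f):
        for tag in row.get("tags", "").split(";"):
            tag = tag.strip()
            if tag:
                counts[tag] += 1

# The one-pager for the meeting: top patterns by ticket volume.
for tag, n in counts.most_common(10):
    print(f"{n:4d}  {tag}")
```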

Reward changing your mind. Teams that punish people for being wrong end up with people defending their initial position regardless of the evidence. Teams that reward "I changed my mind because of this data" end up with sharper decisions. That cultural piece is the highest-leverage shift here.

The shift

The loudest voice is the cheapest signal you have. The data is more expensive to gather and more accurate. Run prioritisation against the more accurate signal whenever you can.

The senior person in the room still gets to override. They just have to do it with their eyes open — knowing that the evidence said one thing and they decided another, for reasons they can name.

That's a different team than one that just defers. It's also one that ships the right features.

If you're building a stakeholder communication system, data-led prioritisation is what makes the conversation about evidence instead of egos. And vanity metrics are how data-led prioritisation goes wrong when the data itself is the lie.