📚 "The Mom Test"
29 cards · shared by Product Management
Most customer interviews feel productive. They're not. Someone nods, says "that sounds interesting," and you walk away thinking you've validated something. Rob Fitzpatrick has a name for that. The Mom Test is the book that explains why founders keep building the wrong thing with total certainty. Short enough to read in an afternoon. Hard enough to actually apply that most people don't. This deck is 29 cards covering every idea in the book that actually matters in the room. The three rules. How to separate a fact from a polite opinion. Why bad interviews manufacture false confidence in ideas that haven't earned it yet, rather than just missing the signal. Commitment signals get ranked weakest to strongest. (Because "I'd probably use something like that" is nearly worthless, and most people forget this the moment a warm conversation ends.) Anti-patterns get their own section: the compliment sandwich, premature convergence, talking to the right job title but the wrong person in that role. These are the traps that survive a first reading of the book. Cards are tagged by theme. Principles, tactics, anti-patterns, signals. For product managers, founders, and UX researchers who've run bad discovery and would prefer fewer of those.
How do you identify a truly important problem in a customer interview?
Customers bring it up without prompting, speak about it with emotion or specific detail, and have already tried to solve it. An unprompted mention is the strongest single shortcut to signal.
How many customer conversations does The Mom Test suggest before drawing conclusions about a segment?
Enough to reach thematic saturation — typically 5–10 interviews with similar customers in the same segment. Single conversations are anecdotes; patterns across a segment are evidence.
What are the three rules of The Mom Test?
Talk about their life, not your idea. Ask about specifics in the past, not hypotheticals. Talk less, listen more.
What does 'talking to the wrong people' look like in practice — and why does it corrupt strategy?
Interviewing champions (enthusiasts who love your idea) instead of decision-makers or typical users produces skewed data — you get enthusiasm without budget authority, or edge-case needs that don't generalise.
What does The Mom Test say about note-taking during interviews — and who should do it?
Have a dedicated note-taker so the interviewer can stay present and follow threads. Notes should capture exact quotes and specific facts — not summaries or interpretations — to preserve the raw signal for later analysis.
What does it mean to 'anchor on a segment' before starting customer interviews?
Define who you're talking to before you start — same role, same context, same problem domain. Mixing segments makes it impossible to distinguish signal from noise because different people have different problems.
What four common confusions cause teams to misread customer feedback?
Politeness mistaken for demand, feature requests mistaken for buying intent, curiosity mistaken for urgency, and stated preferences mistaken for actual behaviour.
What is 'premature convergence' in customer discovery — and why is it a risk?
Stopping interviews too early because the first few conversations seem consistent — before you've stress-tested across the full segment or found disconfirming cases. It mistakes early pattern-matching for validated insight.
What is a 'compliment sandwich' in customer interviews, and why is it dangerous?
When customers soften criticism with praise — 'I love the concept, but...' — founders remember the praise and discount the criticism. The compliment is noise; the 'but' is the signal.
What is the 'Mom Test' — and where does the name come from?
The Mom Test is a set of rules for asking questions that even your mom can't lie to you about — by making conversations about the customer's life rather than your idea, you remove the social pressure to be encouraging.
What is the 'facts vs opinions' filter in Mom Test interviews?
Facts = observable past behaviour (what they did, paid, used). Opinions = speculation (what they think, want, or might do). Only facts should influence product decisions.
What is the 'false confidence' trap in bad customer interviews?
Bad conversations don't just fail to give signal — they actively manufacture reassurance. Compliments and vague enthusiasm feel like validation but are usually social niceties masking the absence of real pain.
What is the 'three whys' technique, and how does it complement Mom Test interviewing?
Asking 'why' three times in response to a stated problem surfaces the root cause beneath the symptom — each layer moves you closer to the real job-to-be-done and further from the surface complaint.
What is the 'zoom in' move in customer interviews, and when do you use it?
When an answer is vague, follow up with a question about a specific past instance. 'It's a big problem' → 'What happened the last time it came up?' 'We do it manually' → 'Walk me through that step by step.' Vagueness is a prompt to go deeper, not a signal to move on.
What is the correct response when a customer says 'I would definitely pay for that'?
Treat it as weak signal only. Follow up with: 'Have you tried to solve this before?' and 'What did that cost?' Stated willingness to pay is cheap; prior spending is expensive and therefore credible.
What is the danger of over-indexing on a single passionate customer?
One outlier can distort strategy. Valid signal requires repeated patterns across a segment — similar pain, similar language, similar failed attempts, similar buying triggers.
What is the difference between a 'learning goal' and a 'validation goal' in customer conversations — and which is correct?
A learning goal means you enter the conversation to understand their world. A validation goal means you enter to confirm your hypothesis. Only learning goals produce reliable data — validation goals introduce confirmation bias before the interview starts.
What is the difference between a 'warm' and 'cold' customer conversation — and which produces better data?
Warm conversations (via intros) reduce the pressure to pitch because context is pre-established. Cold outreach creates a sales dynamic by default. Warm intros consistently produce more honest, exploratory conversations.
What makes a question 'strong' in a Mom Test-style interview?
It forces concrete recall: 'When did this last happen?' 'What did you do?' 'What did it cost you?' If they answer vaguely, keep digging — vagueness is a prompt to zoom in, not a signal to move on.
What makes past behaviour more reliable than future intent in customer interviews?
People accurately recall what they've done but poorly predict what they'll do — so asking 'what did you try?' surfaces real evidence; asking 'would you use this?' surfaces social performance.
What question reveals whether a problem is urgent, not just real?
Ask what they've already spent — time, money, or effort — trying to solve it. A problem with no prior attempts is probably not painful enough to buy a solution for.
What reframe does The Mom Test apply to customer discovery?
It repositions discovery from a research activity into a bias-control mechanism — the goal is not to hear encouraging things, but to design conversations that make self-deception harder.
What should you capture after every customer interview to treat it as decision input?
What you learned, whether the problem is real, how severe it is, whether the segment is worth pursuing, and which assumption changed.
What signals constitute genuine commitment in a customer interview, ranked weakest to strongest?
Time (agreeing to a trial or follow-up meeting), Effort (sharing data, process detail, or workflow access), Reputation (introducing you to others), Money (pre-ordering or allocating budget). Enthusiasm without any of these is noise.
What three strategic benefits does applying The Mom Test deliver for a product leader?
Sharper prioritisation (kills ideas with no real pain), better team alignment (anchors decisions in observable behaviour), and stronger system learning (repeatable assumption-testing over time).
Why do customers 'lie' in customer interviews — and why does it matter?
Not from deception, but from politeness, desire to be helpful, and inability to predict their own future behaviour. This is why self-reported intent is structurally unreliable regardless of how honest the customer seems.
Why does introducing your solution — even without a hard pitch — corrupt interview data?
The moment you mention your solution, the conversation shifts from truth-seeking to validation-seeking. People react to your idea rather than describing their reality, and every subsequent answer is filtered through politeness. Keep discovery and selling in separate conversations.
Why is a feature request not a buying signal?
Feature requests describe a customer's imagined solution, not the underlying pain. The pain may be real, but the feature they're requesting may be the wrong fix — and their willingness to request it doesn't mean willingness to pay.
Why should you avoid asking customers to rank or rate your ideas?
Ranking and rating prompts people to engage with your framing rather than their own reality — it's still a hypothetical, just dressed up as quantitative. The result is structured noise.