Core frameworks Ryan uses in product work.
For a deep dive, see RICE: Simple Prioritization for Product Managers (Intercom).
Score = (Reach × Impact × Confidence) / Effort
Reach: Number of customers affected per time period (usually per quarter)
- Count unique customers, not page views or sessions
- Use actual data when possible, estimates when necessary
Impact: How much this improves the customer experience
- 3 = Massive impact
- 2 = High impact
- 1 = Medium impact
- 0.5 = Low impact
- 0.25 = Minimal impact
Confidence: How sure are we about Reach, Impact, and Effort?
- 100% = High confidence (strong data)
- 80% = Medium confidence (some data, some assumptions)
- 50% = Low confidence (mostly assumptions)
Effort: Person-months of work
- Include all team members (PM, Eng, Design, QA)
- Account for complexity and unknowns
- Minimum Effort value: 0.5 (anything under two weeks still counts as 0.5)
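The formula above can be computed directly. A minimal Python sketch, applying the 0.5 effort floor from the rubric; the numbers in the example call are hypothetical:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach:      customers affected per quarter
    impact:     0.25 (minimal) to 3 (massive)
    confidence: a fraction, e.g. 0.8 for medium confidence
    effort:     person-months, floored at 0.5 per the rubric
    """
    effort = max(effort, 0.5)
    return (reach * impact * confidence) / effort

# Hypothetical example: 500 customers/quarter, high impact,
# medium confidence, 4 person-months of effort.
print(rice_score(500, 2, 0.8, 4))  # → 200.0
```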
Proposed by Lenny Rachitsky for complex environments. See Introducing DRICE.
Score = (Reach × Impact × Confidence) / (Effort × Dependencies × Risk)
Dependencies: External dependencies that could block or delay
- 0 = No external dependencies
- 1 = 1-2 dependencies, low risk
- 2 = 3-5 dependencies, medium risk
- 3 = 6+ dependencies or high-risk dependencies
Risk: Technical, product, or operational risk
- 1 = Low risk (proven approach, reversible)
- 2 = Medium risk (some unknowns, mostly reversible)
- 3 = High risk (many unknowns, hard to reverse)
- 4 = Critical risk (could cause major issues)
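A matching Python sketch for DRICE. Note that a Dependencies score of 0 would zero the divisor, so this sketch floors that multiplier at 1 so "no dependencies" means "no penalty"; that handling is an assumption, not part of the rubric:

```python
def drice_score(reach, impact, confidence, effort, dependencies, risk):
    """DRICE = (Reach x Impact x Confidence) / (Effort x Dependencies x Risk)."""
    effort = max(effort, 0.5)   # same 0.5 effort floor as RICE
    dep = max(dependencies, 1)  # assumption: a 0 score applies no penalty
    return (reach * impact * confidence) / (effort * dep * risk)

# Hypothetical example: same feature as above, but with a dependency
# score of 2 and medium (2) risk -- the score drops from 200 to 50.
print(drice_score(500, 2, 0.8, 4, 2, 2))  # → 50.0
```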
| Scenario | Use | Reason |
|---|---|---|
| Simple, low-risk feature | RICE | Keep it simple |
| Cross-team collaboration | DRICE | Make dependencies visible |
| High uncertainty | DRICE | Surface risks explicitly |
| Hard to reverse | DRICE | Account for commitment |
| Proven, incremental work | RICE | Avoid overhead |
For a foundational intro, see Martin Fowler's DomainDrivenDesign bliki entry; the approach itself originates with Eric Evans.
A bounded context is a boundary within which a particular domain model is defined and applicable.
Key Principles:
- Each context has its own ubiquitous language
- Explicit boundaries prevent model confusion
- Clear ownership for each context
- Contexts communicate through well-defined interfaces
Identifying Bounded Contexts:
- Look for different meanings of the same term
- Find natural organizational boundaries
- Identify different data ownership
- Notice where language changes
Example:
- "Order" in Sales context = customer purchase
- "Order" in Fulfillment context = picking and shipping
- "Order" in Finance context = revenue recognition
Ubiquitous language: use the same terms in code, docs, and conversations.
Rules:
- Terms must be precise and unambiguous within a context
- Avoid technical jargon when domain terms exist
- Update language when the model evolves
- Document terms in context documentation
The Consortium for Service Innovation's standard for knowledge management (Knowledge-Centered Service). See the KCS v6 Methodology.
Solve and Capture
- Capture knowledge in the flow of work
- Don't wait for "perfect" documentation
- Make it easy to contribute
Structure and Reuse
- Organize knowledge for findability
- Link related content
- Update based on usage
Improve and Evolve
- Knowledge is never "done"
- Update based on feedback
- Retire outdated content
High-Quality Signal:
- Reproducible (can be verified)
- Specific (concrete examples)
- Frequent (pattern, not one-off)
- Severe (meaningful impact)
- Evidence-based (data, not opinion)
Low-Quality Signal:
- One-off occurrence
- Vague or general
- Opinion without evidence
- Cannot be reproduced
- Contradicts other signals
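One way to operationalize the checklist above is a scorer that counts how many high-quality criteria a signal meets. The criteria come from the list; the 4+/2+ thresholds are illustrative assumptions, not part of the framework:

```python
HIGH_QUALITY_CRITERIA = (
    "reproducible", "specific", "frequent", "severe", "evidence_based",
)

def signal_quality(signal):
    """Rate a signal by counting high-quality criteria it meets.

    `signal` maps each criterion name to True/False.
    The thresholds (4+ = high, 2+ = medium) are illustrative.
    """
    met = sum(bool(signal.get(c)) for c in HIGH_QUALITY_CRITERIA)
    if met >= 4:
        return "high"
    if met >= 2:
        return "medium"
    return "low"
```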
A foundational tool for empathy. See Journey Mapping 101 by Nielsen Norman Group.
- Awareness: Customer discovers the product
- Evaluation: Customer assesses fit
- Onboarding: Customer gets started
- Adoption: Customer uses core features
- Expansion: Customer uses advanced features
- Renewal: Customer decides to continue
- Advocacy: Customer recommends to others
For each stage, identify:
- What's the customer trying to do?
- What's getting in their way?
- How severe is the friction?
- How often does this happen?
- What's the impact if we fix it?
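The per-stage questions map naturally onto a small record for each friction point. Field names and the 1-3 severity scale are hypothetical:

```python
from dataclasses import dataclass

STAGES = ["Awareness", "Evaluation", "Onboarding", "Adoption",
          "Expansion", "Renewal", "Advocacy"]

@dataclass
class FrictionPoint:
    stage: str                  # one of STAGES
    customer_goal: str          # what the customer is trying to do
    obstacle: str               # what's getting in their way
    severity: int               # 1 (minor) to 3 (blocking) -- illustrative scale
    frequency_per_quarter: int  # how often this happens
    affects_trust: bool = False # flags the trust-sensitive friction below
```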
Some friction points affect trust more than others:
- Data accuracy issues
- Unexpected behavior
- Security or privacy concerns
- Reliability problems
- Unclear or misleading communication
This is not a branded framework. It is the working pattern I use when an AI feature moves from demo mode into live use.
- Real inputs: Use live or representative conversations, requests, and edge cases
- Escalation behavior: Define when the system should hand off instead of improvising
- Failure visibility: Make sure silent failure is visible to users and operators
- Quality thresholds: Agree upfront on what "good enough" looks like
- Fallback paths: Keep a reviewable, reversible path when the model is wrong
- Learning loop: Feed edge cases, incident patterns, and docs gaps back into the system
- Do not evaluate only on happy-path prompts
- Keep routing, handoff, and refusal behavior explicit
- Track the difference between an impressive answer and a useful answer
- Treat adoption as a trust problem as much as a feature problem
- Update prompts, policies, and knowledge artifacts together
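The escalation, refusal, and failure-visibility points above can be sketched as one explicit routing function. The threshold value and messages are illustrative assumptions:

```python
CONFIDENCE_THRESHOLD = 0.7  # the agreed "good enough" bar (illustrative)

def route_response(answer, confidence, policy_allows):
    """Decide whether to answer, escalate, or refuse.

    Keeps handoff and refusal behavior explicit instead of letting
    the model improvise, and makes low-confidence failures visible.
    """
    if not policy_allows:
        return ("refuse", "This request is outside what the assistant handles.")
    if confidence < CONFIDENCE_THRESHOLD:
        # Hand off instead of improvising; the user and the operators
        # both see that a fallback path was taken.
        return ("escalate", "Routing to a human reviewer.")
    return ("answer", answer)
```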
This pattern guards against:
- Over-confident outputs that should have escalated
- Demos that collapse in live traffic
- Policy drift between teams
- AI features that feel clever but never become dependable
From Marty Cagan (SVPG). See The Four Big Risks.
Every product faces four types of risk:
Value Risk: Will customers buy it or choose to use it?
- Validate: Customer interviews, prototypes, usage data
Usability Risk: Can customers figure out how to use it?
- Validate: Usability testing, onboarding metrics, support tickets
Feasibility Risk: Can we build it with available technology and resources?
- Validate: Technical spikes, prototypes, architecture review
Business Viability Risk: Does this work for our business?
- Validate: Financial modeling, strategic alignment, operational assessment