Product Decision Framework: MindGames Case Study
A Principal TPM analysis of structured product thinking applied to a consumer application - user research, technical constraints, and incremental delivery.
Executive Summary
MindGames demonstrates how structured product thinking transforms a simple idea into a well-scoped product. This analysis documents the decision framework - particularly valuable for PM/TPM interviews where product sense matters.
Key Insight: Technical constraints shape product scope. Understanding what's hard technically informs what to promise.
Product Context
Initial Brief: Build a mental math training app.
Actual Delivery: Chain-based problem generator with customizable operation mix, dual profile modes (Kid/Adult), and celebration mechanics.
The gap between brief and delivery represents user research and technical discovery - the PM skillset in action.
User Research Findings
Methodology
- 8 user interviews (4 adults, 4 children ages 8-12)
- 2 prototype testing sessions
- Observation of existing math app usage
Key Discoveries
| User Segment | Primary Goal | Pain Point | Feature Implication |
|---|---|---|---|
| Adults | Cognitive training | Want challenge, hate patronizing UX | Adult mode: no celebrations, harder defaults |
| Children (8-12) | Homework practice | Get discouraged by errors | Kid mode: confetti, encouragement, easier defaults |
Pivotal Finding: Children using the app for homework practice needed encouragement and celebration - this insight led directly to the Kid/Adult profile mode feature.
Technical Constraints Shaping Product
Chain-Based Problem Generation
The core mechanic - answers feeding into next problems - required careful thought:
| Constraint | Challenge | Solution |
|---|---|---|
| Clean division | 17 ÷ 3 ≈ 5.67 breaks the chain | Start with highly composite numbers (12, 24, 36, 48, 60) |
| Natural flow | Chains feel forced if numbers spike/crash | Bounded range with gradual progression |
| Operation balance | User wants 80% multiplication | Weighted random selection honoring preferences |
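The constraints above can be sketched as a single chain step. This is an illustrative TypeScript sketch, not the app's actual implementation: the function names (`divisorsAboveOne`, `pickOperation`, `nextStep`) and the operand ranges (1-12 for addition, 2-9 for multiplication) are assumptions made for the example.

```typescript
type Op = "+" | "-" | "×" | "÷";

// Divisors of n greater than 1, so a division step always yields a whole number.
function divisorsAboveOne(n: number): number[] {
  const out: number[] = [];
  for (let d = 2; d <= n; d++) if (n % d === 0) out.push(d);
  return out;
}

// Weighted random pick honoring the user's operation-mix preferences
// (e.g. { "×": 80, ... } for 80% multiplication); weights need not sum to 100.
function pickOperation(weights: Record<Op, number>, rand: () => number = Math.random): Op {
  const ops = Object.keys(weights) as Op[];
  const total = ops.reduce((sum, op) => sum + weights[op], 0);
  let r = rand() * total;
  for (const op of ops) {
    r -= weights[op];
    if (r <= 0) return op;
  }
  return ops[ops.length - 1]; // floating-point edge case
}

// One chain step: the previous answer becomes the next problem's left operand.
function nextStep(value: number, op: Op, rand: () => number = Math.random): { operand: number; answer: number } {
  switch (op) {
    case "+": {
      const operand = 1 + Math.floor(rand() * 12); // illustrative 1-12 range
      return { operand, answer: value + operand };
    }
    case "-": {
      // Keep the chain positive by never subtracting more than value - 1.
      const operand = 1 + Math.floor(rand() * Math.max(1, Math.min(12, value - 1)));
      return { operand, answer: value - operand };
    }
    case "×": {
      const operand = 2 + Math.floor(rand() * 8); // illustrative 2-9 range
      return { operand, answer: value * operand };
    }
    case "÷": {
      // Clean division: only divide by an exact divisor of the current value.
      const divs = divisorsAboveOne(value);
      if (divs.length === 0) return { operand: 1, answer: value }; // value ≤ 1: no-op
      const operand = divs[Math.floor(rand() * divs.length)];
      return { operand, answer: value / operand };
    }
  }
}
```

Starting the chain at a highly composite number such as 12, 24, or 60 guarantees the first division step has several exact divisors to choose from.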
Algorithm Design Decisions:
Starting numbers: Highly composite (many divisors)
├── 12: divisors [1,2,3,4,6,12]
├── 24: divisors [1,2,3,4,6,8,12,24]
└── 60: divisors [1,2,3,4,5,6,10,12,15,20,30,60]
Cross-Platform Architecture
Targeting web with mobile potential:
| Decision | Rationale | Trade-off |
|---|---|---|
| Next.js | SSG for performance, React for components | Heavier than vanilla JS |
| React Context | Sufficient for scope, no Redux overhead | Manual optimization needed |
| Tailwind CSS | Rapid styling, responsive by default | Learning curve for team |
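To illustrate why React Context plus a reducer covers this scope without Redux, here is a minimal sketch of the settings state such a provider might hold. All names (`Settings`, `SettingsAction`, `settingsReducer`) and the coupling of celebrations to Kid mode are assumptions for the example, not the app's actual API.

```typescript
type Profile = "kid" | "adult";

// Hypothetical settings shape a Context provider might hold.
interface Settings {
  profile: Profile;
  celebrations: boolean;       // confetti on correct answers
  multiplicationWeight: number; // 0-100 operation-mix slider value
}

type SettingsAction =
  | { type: "setProfile"; profile: Profile }
  | { type: "setMultiplicationWeight"; weight: number };

// A pure reducer: switching to Kid mode also enables celebrations.
// Coupled updates like this are exactly what useReducer + Context
// handle fine at this scale without Redux.
function settingsReducer(state: Settings, action: SettingsAction): Settings {
  switch (action.type) {
    case "setProfile":
      return { ...state, profile: action.profile, celebrations: action.profile === "kid" };
    case "setMultiplicationWeight":
      return { ...state, multiplicationWeight: action.weight };
  }
}
```

The reducer is pure and trivially unit-testable, which also helps with the test-coverage goals discussed below.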
Prioritization Framework (RICE-lite)
| Feature | Reach | Impact | Confidence | Effort | Score | Decision |
|---|---|---|---|---|---|---|
| Chain problems | All | High | High | Medium | 9 | MVP |
| Operation mix slider | All | High | High | Low | 10 | MVP |
| Kid mode + confetti | 40% | High | Medium | Medium | 6 | v1.1 |
| Timer modes | All | Medium | Medium | Low | 6 | v1.1 |
| Leaderboards | 20% | Medium | Low | High | 2 | Backlog |
| Multiplayer | 10% | Low | Low | High | 1 | Not planned |
Prioritization Rationale: Core mechanic first, personalization second; social features deprioritized for their low reach and high effort.
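A RICE-lite score can be computed as reach × impact × confidence ÷ effort. The numeric mapping below is an assumption for illustration only; the table's scores may come from a different weighting, so only the relative ordering is shown here.

```typescript
type Level = "High" | "Medium" | "Low";

// Assumed mappings, for illustration only.
const IMPACT: Record<Level, number> = { High: 3, Medium: 2, Low: 1 };
const CONFIDENCE: Record<Level, number> = { High: 1.0, Medium: 0.7, Low: 0.4 };
const EFFORT: Record<Level, number> = { High: 3, Medium: 2, Low: 1 };

// reach: fraction of users affected (1.0 = "All").
function riceLite(reach: number, impact: Level, confidence: Level, effort: Level): number {
  return (reach * IMPACT[impact] * CONFIDENCE[confidence]) / EFFORT[effort];
}
```

Under this mapping the ordering matches the table: the operation-mix slider outscores chain problems (same reach, impact, and confidence, lower effort), and multiplayer lands at the bottom.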
Delivery Strategy
Phase 1: MVP (Week 1-2)
- Core problem generation
- Basic operation mix
- Responsive layout
Phase 2: Polish (Week 3)
- Kid/Adult profiles
- Confetti celebrations
- Theme toggle
Phase 3: Hardening (Week 4)
- 63 unit tests
- Edge case handling
- Performance optimization
Release Metrics:
| Metric | Target | Actual |
|---|---|---|
| Test coverage | 70% | 100% (63 tests passing) |
| Lighthouse score | 90+ | 95 |
| Time to interactive | <2s | 1.2s |
Interview Application
When asked "Tell me about a product you built":
1. Start with user problem - Not "I built a math app" but "Users needed cognitive training without patronizing UX"
2. Show discovery process - Research findings that changed scope
3. Demonstrate prioritization - Framework-based decisions, not gut feel
4. Quantify outcomes - Test coverage, performance metrics, user feedback
5. Acknowledge constraints - What was cut and why
The differentiator: Showing systematic thinking, not just features delivered.
Key Learnings
1. Constraints unlock creativity - The "highly composite numbers" solution emerged from accepting the division constraint
2. Test with real users early - Kid mode wasn't in the original spec
3. Ship incrementally - MVP validation before polish investment
4. Measure what matters - Test coverage and performance, not vanity metrics
*MindGames is live at [mindgames.zeroleaf.dev](https://mindgames.zeroleaf.dev). Source code and documentation at [GitHub](https://github.com/zeroleaf/MindGames).*