RICE Scoring: The Data-Driven Prioritization Framework
Master the RICE scoring model to prioritize features based on Reach, Impact, Confidence, and Effort. Complete guide with calculator and examples.

What is RICE Scoring?
RICE is a quantitative prioritization framework developed by Intercom to help product teams make objective decisions about what to build next. It scores each initiative based on four factors:
- Reach: How many users will this impact?
- Impact: How much will it impact each user?
- Confidence: How certain are we about our estimates?
- Effort: How much work will this take?
The formula: RICE Score = (Reach × Impact × Confidence) / Effort
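To make the arithmetic concrete, here is a minimal sketch of the formula as a Python function. The function name, type hints, and the convention of expressing Confidence as a decimal are our assumptions, not part of Intercom's definition:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Return (Reach x Impact x Confidence) / Effort.

    reach      -- users affected in the chosen time period (e.g. per quarter)
    impact     -- 0.25, 0.5, 1, 2, or 3 on the standard impact scale
    confidence -- a decimal, e.g. 0.8 for 80%
    effort     -- total work in person-months (must be positive)
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort
```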
The Four RICE Factors
Reach
Reach measures how many people will be affected by your project within a specific time period. Use actual data when possible.
How to measure Reach:
- Number of users per quarter/month
- Percentage of user base affected
- Transaction or event count
Example: A checkout flow improvement might reach 10,000 users per month who complete purchases.
Tips:
- Be specific about the timeframe
- Use historical data from analytics
- Count unique users, not total interactions
Impact
Impact measures how much the project will affect each user. Use a standardized scale:
| Score | Description | Example |
|---|---|---|
| 3 | Massive impact | Core workflow transformation |
| 2 | High impact | Significant improvement |
| 1 | Medium impact | Noticeable improvement |
| 0.5 | Low impact | Minor enhancement |
| 0.25 | Minimal impact | Barely noticeable |
Tip: Be conservative. Most features have medium (1) or low (0.5) impact.
Confidence
Confidence reflects how certain you are about your Reach and Impact estimates. Express as a percentage:
| Confidence | Description | Evidence Required |
|---|---|---|
| 100% | High | Strong data, proven patterns |
| 80% | Medium | Some data, reasonable assumptions |
| 50% | Low | Gut feel, limited data |
Rule of thumb: If you're guessing, use 50%. If you have data, you can go higher.
Effort
Effort is the total amount of work required, typically measured in person-months or story points.
Include in your estimate:
- Product/design work
- Engineering effort
- QA and testing
- Documentation
Example: A feature requiring 2 weeks of design and 4 weeks of engineering totals 6 weeks of work, or roughly 1.5 person-months.
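A small sketch of that conversion, assuming roughly 4 working weeks per person-month (adjust the constant to your team's calendar):

```python
# Hypothetical effort estimate: sum per-discipline weeks, then convert to person-months.
WEEKS_PER_PERSON_MONTH = 4  # assumption; some teams use 4.33

effort_weeks = {"design": 2, "engineering": 4}  # add QA, docs, etc. as needed
effort_person_months = sum(effort_weeks.values()) / WEEKS_PER_PERSON_MONTH
print(effort_person_months)  # 1.5
```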
Calculating RICE Scores
The Formula
RICE Score = (Reach × Impact × Confidence) / Effort
Example Calculation
Feature: Auto-save for documents
| Factor | Value | Reasoning |
|---|---|---|
| Reach | 5,000 users/month | Based on active editor users |
| Impact | 2 (High) | Prevents data loss frustration |
| Confidence | 80% | Good usage data, some assumptions |
| Effort | 0.5 person-months | Well-scoped, clear requirements |
RICE Score = (5,000 × 2 × 0.8) / 0.5 = 16,000
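The same numbers, checked in code. This is a plain-arithmetic sketch mirroring the formula above; the variable names are illustrative:

```python
# Auto-save example from the table above.
reach = 5_000       # active editor users per month
impact = 2          # "High" on the impact scale
confidence = 0.8    # 80% expressed as a decimal
effort = 0.5        # person-months

rice = (reach * impact * confidence) / effort
print(rice)  # 16000.0
```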
Step-by-Step Implementation
Step 1: Create Your Scoring Spreadsheet
Set up columns for the following (a code sketch of the same structure follows the list):
- Initiative name
- Reach (number)
- Impact (0.25-3)
- Confidence (50-100%)
- Effort (person-months)
- RICE Score (calculated)
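If you would rather script this than maintain a spreadsheet, the same columns can be modeled as plain records with a computed score column. The rows below are illustrative placeholders (only Auto-save mirrors the worked example above), and the field names are our assumptions:

```python
# Each dict mirrors one spreadsheet row; "rice" is the calculated column.
initiatives = [
    {"name": "Auto-save",       "reach": 5_000,  "impact": 2,   "confidence": 0.8, "effort": 0.5},
    {"name": "Bulk export",     "reach": 1_200,  "impact": 1,   "confidence": 1.0, "effort": 1.0},
    {"name": "Checkout revamp", "reach": 10_000, "impact": 0.5, "confidence": 0.5, "effort": 3.0},
]

for row in initiatives:
    row["rice"] = (row["reach"] * row["impact"] * row["confidence"]) / row["effort"]
```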
Step 2: Gather Data
For each initiative:
- Pull analytics for Reach estimates
- Reference past projects for Impact calibration
- Document your Confidence assumptions
- Get engineering estimates for Effort
Step 3: Score All Initiatives
Score your entire backlog using consistent criteria. Don't skip items—the goal is relative comparison.
Step 4: Rank and Discuss
Sort by RICE score (see the sketch after these questions), but use judgment:
- Are top-scored items strategically aligned?
- Any critical bugs that need immediate attention?
- Dependencies between initiatives?
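Ranking itself is a one-line sort. The sketch below assumes the `initiatives` list from the Step 1 sketch and leaves the judgment questions above to the humans in the room:

```python
# Highest RICE score first; the ordering is a starting point for discussion, not a decision.
ranked = sorted(initiatives, key=lambda row: row["rice"], reverse=True)

for position, row in enumerate(ranked, start=1):
    print(f"{position}. {row['name']}: RICE {row['rice']:,.0f}")
```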
Step 5: Iterate
RICE works best when you:
- Review scores quarterly
- Update Confidence after learning
- Calibrate based on actual outcomes
Best Practices
1. Standardize Across Teams
Create a shared rubric for Impact and Confidence. Different interpretations undermine the framework.
2. Document Your Assumptions
Record why you chose each value. This enables learning and recalibration.
3. Don't Optimize for RICE Alone
High RICE scores don't guarantee strategic fit. Use RICE as input, not gospel.
4. Compare Similar Initiatives
RICE works best for comparing like with like. Don't compare infrastructure work with UX improvements.
Common Mistakes to Avoid
- Inflating Impact - Be honest about incremental improvements
- 100% Confidence everywhere - That's overconfidence, not data
- Underestimating Effort - Include all work, not just coding
- Ignoring small Reach items - They might have massive Impact
RICE vs Other Frameworks
| Aspect | RICE | MoSCoW | ICE | Kano |
|---|---|---|---|---|
| Quantitative | ✅ Yes | ❌ No | ✅ Yes | ❌ No |
| Includes Confidence | ✅ Yes | ❌ No | ✅ Yes | ❌ No |
| Effort consideration | ✅ Yes | ❌ No | ✅ Yes (as Ease) | ❌ No |
| Best for | Backlog scoring | Requirements | Quick scoring | Feature types |
When to Use RICE
Ideal scenarios:
- Quarterly roadmap planning
- Backlog grooming sessions
- Resource allocation decisions
- Stakeholder communication
Less ideal for:
- Emergency bug fixes
- Strategic bets with unknown reach
- Early-stage discovery
Template: RICE Scoring Session
Duration: 2 hours
- Prep (before meeting): Gather analytics data, engineering estimates
- Calibration (20 min): Align on Impact and Confidence scales
- Individual scoring (30 min): Each PM scores their initiatives
- Review & discuss (50 min): Compare scores, resolve outliers
- Finalize (20 min): Lock in priorities, document decisions
Conclusion
RICE scoring brings objectivity to product prioritization by forcing teams to quantify their assumptions. The Confidence factor is particularly valuable—it penalizes wishful thinking and rewards data-driven decisions.
Remember: RICE is a tool for better conversations, not a replacement for product judgment. Use it to surface trade-offs and facilitate alignment, then apply strategic context to make final decisions.
Want to practice RICE scoring with real scenarios? Join Product Leader Academy for hands-on frameworks training.