Customer feedback is one of the most valuable tools we have for improving our products. Without leveraging our customers’ input to guide our product decisions and roadmap, it’s impossible to build a truly customer-centric product. As useful as those nuggets of user feedback can be, you have to be careful not to react immediately to any single piece of feedback. If you do, there’s a good chance you’ll wind up chasing red herrings and spending time on the wrong things. Regardless of who it comes from or where, feedback should not be treated as truth or fact; it’s simply one piece of evidence to inform future decisions. To avoid overreacting to feedback, we first need to make sure we’ve thoroughly validated it. In this article, we’ll explore the key questions your team should ask itself when deciding whether a piece of user feedback is worth acting on.

Questions to ask when reviewing feedback 

Who does it come from?

Everybody has opinions, but the reality is that not every user’s opinion carries equal weight. Often, who the feedback comes from matters more than what it says. To begin evaluating the feedback, consider the type of user who submitted it and the potential associated revenue impact. In general, feedback from paid users should take priority over feedback from free users. Similarly, a recommendation from one of your highest-paying customers typically takes precedence over one from a customer in your entry-level tier. These are general rules of thumb, though, and it’s important to balance them against the potential impact of each suggestion. For example, feedback from a collection of free users that would drive a sizable increase in conversion from free to paid accounts might ultimately be more valuable than feedback from a single existing paid customer.

How many users does it impact? 

One of the easiest ways to understand the potential impact of a feature request is to track how frequently it is requested. If a majority of your users share the same need, that’s often a good sign the feedback deserves your attention. However, when it comes to prioritizing feedback, volume doesn’t always indicate priority. Keep your eyes on the prize: if an update doesn’t improve your product’s key success metrics or business goals, it might not be worth your time, no matter how many people want it. As a general rule of thumb, treat volume as an indicator that something is worth exploring, rather than a trigger for immediate action.

Is it a “nice to have” or “need to have”?

So you’ve decided that some feedback you received justifies your attention. Now the tough part: deciding whether this work should be prioritized for development now or whether it’s simply “nice to have” and can be addressed later. Ask yourself about the nature of the feedback. Is it a quality-of-life improvement? A request for something brand new? An experience-breaker or complete blocker for the user? Understanding the urgency of a piece of feedback is critical to deciding how it should be prioritized.

How much effort will it take to address? 

Estimating the cost of successfully addressing a piece of user feedback is one of the most difficult and important steps in the prioritization process, because it often determines whether a project fits into your current product strategy at all. How many full-time employee hours will be required to satisfy the user’s need? What’s the opportunity cost of working on this item instead of another? Even when every other variable indicates that a project should be prioritized, a high time and cost requirement can single-handedly prevent your team from addressing the feedback.

Does it align with our current product priorities? 

A good product team will maintain a product roadmap that is flexible enough to adapt to the various changes it encounters, but focused enough to execute on the market opportunity the product is designed to capture. Some feedback will be urgent, pervasive, and impactful enough to merit a change of plans. Before clearing space on your roadmap, though, carefully consider your company’s current priorities to determine the opportunity cost of prioritizing user feedback over the items already in your plans. One of the biggest problems with overreacting to product feedback is the whiplash teams experience when they jump back and forth between their roadmap and newly prioritized work that came out of a single piece of user feedback. It’s natural to want to prioritize a feature the sales team says it needs to close a specific deal, but keep in mind that there are repercussions beyond simply working on one thing instead of another. A team that is constantly chasing each shiny new object is one full of frustrated, exhausted members.

Prioritization Frameworks to Consider

For newer products, asking the questions above is often enough to jumpstart the prioritization process. However, as teams grow and products mature, you’ll generally see them formalize this prioritization process into a codified, repeatable system. Thankfully, there are a handful of frameworks to help teams prioritize user and product feedback in a strategic way. While the ideal system for prioritizing projects may vary across teams, all of them are generally trying to balance the following four core concepts:

  • Impact
  • Pervasiveness
  • Urgency
  • Effort

Although there are several popular frameworks that use these concepts, let’s take a look at one in particular – the RICE Scoring Model, a prioritization framework popularized by Intercom.

The RICE Scoring Model

(Source: Intercom)

RICE is a scoring system for prioritizing a set of feature ideas based on actual product metrics, all while taking specific company goals into account. RICE is an acronym for Reach, Impact, Confidence, and Effort, the four factors used to estimate the priority of a feature. Each feature’s RICE score is then compared against the others under consideration to determine relative priority. Let’s explore how each factor is calculated:

Reach   

Reach estimates how many users may be affected by a specific feature within a given period. It is measured in people or events per time period (for example, per month or quarter) and should be based on actual product metrics rather than gut estimates whenever possible. Let’s look at an example scenario in which a team is trying to prioritize among three different projects:

Project 1: An update to a feature that is used by an average of 1,000 users per month would result in a reach of 1,000 x 3 = 3,000 users per quarter.

Project 2: An update to a feature that only 40% of your 2,200 MAU use regularly would result in a reach of 0.4 x 2,200 x 3 = 2,640 users per quarter.

Project 3: A new feature that will have a one-time effect on all 2,200 MAU would result in a reach of 2,200 users for the quarter (a one-time effect touches each user only once, so there’s no monthly multiplier).
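
If it helps to see the conversion spelled out, here is a minimal Python sketch of those reach estimates. The monthly figures and the 40% adoption rate come from the examples above; the three-month multiplier simply converts a monthly count into a quarterly one:

    MONTHS_PER_QUARTER = 3

    def quarterly_reach(monthly_users, adoption_rate=1.0):
        """Estimate quarterly reach from a monthly active-user count."""
        return monthly_users * adoption_rate * MONTHS_PER_QUARTER

    print(quarterly_reach(1_000))       # Project 1: 3,000 users per quarter
    print(quarterly_reach(2_200, 0.4))  # Project 2: 2,640 users per quarter
    print(2_200)                        # Project 3: one-time effect, so just the 2,200 MAU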

Impact

Impact measures how much a product change supports a company- or product-specific goal like “increase customer satisfaction” or “reduce churn”. That goal will vary from one team to another and might change over time. Impact is notoriously difficult to measure precisely, which is why Intercom developed a common-sense scale to estimate the potential impact a feature can have on a specific goal: 3 for “maximal impact”, 2 for “high impact”, 1 for “medium impact”, 0.5 for “low impact”, and 0.25 for “minimal impact”. These numbers are eventually multiplied into the final score to scale it up or down for potential impact.

Confidence

The Confidence score evaluates your confidence in the other estimates in your RICE calculations, allowing you to account for the balance of data-driven accuracy and gut intuition that many project estimates rely on. Confidence is a percentage and uses a multiple-choice scale to ease the decision-making process.

  • 100% is considered “high confidence”. For example, when data exists to support reach and impact, and engineering is confident in their estimate of the effort required.
  • 80% is “medium confidence”. For example, when data exists to support reach and effort scores, and the potential impact is based on anecdotal user feedback, but not quantitative data. 
  • 50% is “low confidence”. For example, when effort may be higher than estimated, reach cannot be accurately determined, and the impact score is not supported by quantitative data.
  • Anything below 50% is a “moonshot” and probably requires additional research before further consideration.

Effort

The effort score captures the amount of time required from all the involved members of your team (generally product, engineering, and design). Effort is estimated as the total number of “person-months” required to complete the project, where a person-month is the amount of work one team member can do in a month. If you think of RICE as a cost-benefit analysis, effort is the associated cost, which you’d prefer to keep minimal. Effort is generally a rough estimate, so use whole numbers (or 0.5 for anything well under a month).

Example: A project estimated at one week of product planning, 1-2 weeks of design, and 2-4 weeks of engineering (roughly 4-7 person-weeks in total) would receive an effort score of 2 person-months.

Using RICE to Prioritize Effectively

Once you have generated a score for each of the areas, you can calculate your RICE score using this simple formula: (Reach x Impact x Confidence) / Effort. The kind folks at Intercom built this spreadsheet to automatically calculate your scores for you. Once you have a score for each of the projects you are considering, compare them against each other to understand which ones to prioritize, which ones to defer to a later date, and which ones simply aren’t worth the effort.
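
To make the formula concrete, here is a minimal Python sketch of the calculation. The reach figures come from the earlier project examples, while the impact, confidence, and effort values are hypothetical placeholders chosen only to illustrate the math, not real estimates:

    def rice_score(reach, impact, confidence, effort):
        """RICE = (Reach x Impact x Confidence) / Effort."""
        return (reach * impact * confidence) / effort

    # Reach values from the earlier examples; impact, confidence, and effort
    # are illustrative assumptions, not measured data.
    projects = {
        "Project 1": dict(reach=3000, impact=1.0, confidence=0.8, effort=2),
        "Project 2": dict(reach=2640, impact=2.0, confidence=1.0, effort=3),
        "Project 3": dict(reach=2200, impact=0.5, confidence=0.5, effort=1),
    }

    # Print the projects from highest to lowest RICE score.
    for name, p in sorted(projects.items(), key=lambda kv: rice_score(**kv[1]), reverse=True):
        print(f"{name}: RICE = {rice_score(**p):,.0f}")

Under those assumed numbers, Project 2 comes out on top even though Project 1 has the larger reach, which is exactly the kind of trade-off the score is designed to surface.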

RICE scores should not serve as the be-all and end-all; certain situations will arise in which gut instinct prevails or when priorities must shift to meet the needs of a specific customer. RICE should simply serve as one of the frameworks you use to more objectively prioritize your product roadmap and support the decision-making process. RICE requires existing metrics in order to work effectively, so it may not be the best tool for MVPs or early-stage products. 

Wrap Up

Product teams should be grateful for any user feedback they receive, but that doesn’t mean it should be acted on immediately, or even at all. Any solid feedback collection process must be followed by a thorough feedback analysis cycle before any action is taken. The methods you choose will vary with industry, growth stage, and company size, so take the time to establish a process that works best for you. The RICE Scoring Model tends to work better for more mature products, so be sure to consider other frameworks until you find the best fit. Remember to stay flexible within whatever process you use. These frameworks provide objectivity in your prioritization decisions, but no process is valuable unless it’s paired with the intuition of an empathetic product leader.