Release day: It’s what Product Managers live for – a chance to finally launch the features you’ve worked so hard on for what feels like forever. You’ve worked through all the steps (ideation, design, prototyping, development, and QA), and it’s now time to launch and announce them to your users. You write the announcement, and your engineers push to production. But now what?

Most product teams will spend the next few hours staring somewhat mindlessly at their analytics platform (Fullstory is our guilty pleasure) or screenshotting the ‘thumbs up’ and smiley faces that come in through Intercom (why does that feel so good?). Yet as ubiquitous as these tools are, neither actually tells you how critical the new features are to your users. We struggled with this issue for years, so we developed a solution, and it only takes us about 10 seconds to execute.

Feature Fit Index: A One Question Approach to Gauging New Feature Success

What’s the real mark of a feature’s success? Is it some volume of early engagement? An increase in weekly or monthly active users? For us, the test is far simpler: Would users be disappointed if the feature were removed?

Here at Parlor, we call this the Feature Fit Index, and it has become integral to our product team. We use it not only to determine the success of recently released features, but also to identify which legacy functionality we no longer need to support or maintain. Here’s how it works:

Immediately after launching a new feature, we schedule an in-app survey consisting of a single question: “Would you be disappointed if [new feature/functionality] disappeared?”

We schedule the study to be delivered to users in-app 30-90 days after the new feature has been released. Why so long? For most functionality, it takes time for users to become aware of it, learn to use it, try it, and then engage with it enough to form an opinion. The more nuanced the feature or the smaller the user population for whom it is relevant, the longer we wait to deliver this Feature Fit Index survey, but we never wait longer than 90 days.

Understanding Your Results (i.e., What’s a Good Score?)

The industry gold standard for Feature Fit Index is Slack, with a product-wide FFI score of ~60% (i.e. 60% of respondents would be disappointed if Slack disappeared). Unlike Slack, we recommend keeping FFI specific to a single feature or bit of functionality, aiming for a goal score of 40%. If your result isn’t great, don’t overreact; the feature might be fine, but you could simply have done a poor job of making users aware of it or driving engagement. You can always run the same FFI survey more than once to see how successfully you’ve improved the feature (or the education around the feature) over time.
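The math here is deliberately simple: the FFI score is just the percentage of respondents who answer “yes, I’d be disappointed.” A minimal sketch of that calculation, assuming hypothetical survey data (the responses and the 40% goal are illustrative, not Parlor’s actual tooling):

```python
# Hypothetical survey responses: True = "yes, I'd be disappointed
# if this feature disappeared", False = "no, I wouldn't"
responses = [True, True, False, True, False, False, True, False, True, False]

def ffi_score(responses):
    """Feature Fit Index: the percentage of respondents who would be
    disappointed if the feature disappeared."""
    if not responses:
        return 0.0
    return 100 * sum(responses) / len(responses)

score = ffi_score(responses)
print(f"FFI score: {score:.0f}%")  # 5 of 10 respondents -> 50%
print("Meets the 40% goal" if score >= 40 else "Below the 40% goal")
```

Rerunning the same calculation on a later survey wave is how you track whether awareness or education efforts moved the score over time.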

An important note:

Determining a feature’s success or failure shouldn’t rest on a single percentage alone. An equally important factor is which users would or would not be disappointed.

For example, how would you respond if your 15 highest-paying customers said they would be disappointed if a specific feature disappeared, but the vast majority of your free tier users would not be disappointed? You would obviously call that feature a success, even though a larger number of users didn’t view it as critical to their experience. And that’s okay; not all users are worth the same to your company or product.
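The scenario above amounts to computing FFI per user segment rather than in aggregate. A small sketch of that breakdown, with hypothetical segment names and data for illustration:

```python
# Hypothetical responses tagged with each respondent's plan tier.
responses = [
    ("enterprise", True), ("enterprise", True), ("enterprise", True),
    ("free", False), ("free", False), ("free", True), ("free", False),
]

def ffi_by_segment(responses):
    """Return {segment: FFI %} so a modest overall score can be read
    against who, specifically, would miss the feature."""
    totals, disappointed = {}, {}
    for segment, would_miss in responses:
        totals[segment] = totals.get(segment, 0) + 1
        disappointed[segment] = disappointed.get(segment, 0) + int(would_miss)
    return {s: 100 * disappointed[s] / totals[s] for s in totals}

print(ffi_by_segment(responses))
# Enterprise respondents score far higher than free-tier respondents,
# so the feature can be judged a success despite a low overall FFI.
```

The same breakdown works for any segmentation you care about: plan tier, contract value, or tenure.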

Build Smarter with Feature Fit Index

When applied correctly, Feature Fit Index is an incredibly time-efficient way to determine which features or services resonate with your users.

Not only does it help determine how successful new features were, it also shows where you can safely remove legacy functionality that users no longer value, yielding a lean product full of features that users love.