Despite months of planning, design, and development, there’s no getting around it – your newly released feature will never be perfect.

In fact, studies show that the initial cost of developing a feature is typically only one-third of the long-term cost of maintaining it.

Given that every new feature is likely to evolve considerably over time, your team should be prepared to collect user feedback from day one to ensure you don’t miss any insights you’ll need to successfully iterate.

Unfortunately, most teams do not have a structured plan for collecting and reviewing feedback immediately following their releases. As a result, the feedback and insights required to drive future iterations are instead collected randomly and anecdotally over time. Teams that fail to plan feedback collection in this way unnecessarily prolong the time it takes to iterate on and improve their features.

How Most Teams Collect Feedback Post-Launch

To better understand how product leaders collect user feedback, we’ve gathered insights from product teams, including those at some of the biggest companies. Today, we see post-release feedback collection take three forms:

1. Watching screen recordings of user interactions with new features.

Everyone’s guilty pleasure. At the beginning of the day, you log into a session playback tool (FullStory is our tool of choice) and patiently wait for individual recordings of users navigating the product. At best, you walk away with a few acute usability fixes – adding a ‘Back’ link, introducing an error message, or making specific fields mandatory. Although these fixes are valuable in their own way, you’re far less likely to walk away from the session with any actionable ideas on how to fundamentally improve the feature for your users.

Screen recordings provide a very granular look at how users navigate your UX decisions, helping you see where to spend short-term cycles addressing specific micro-interactions. These insights can certainly prove useful, but in the grand scheme of things, such changes don’t add much meaningful value – they focus primarily on barrier reduction.

2. Monitoring analytics platforms to find trends in user interaction.

Nothing makes a PM feel better than discovering some behavioral trend of their users. Thankfully, for non-technical PMs like myself, the abundance of available analytics tools makes this task as simple as pulling up a dashboard pre-populated with tracked events. With the click of a button, we can see where the cliffs are, where conversion drops off, and how the user population as a whole interacts with a new feature – without even breaking a sweat. Unfortunately, analytics aren’t a silver bullet.

Analytics platforms allow us to quantify trends in user behavior through multiple product steps and identify potential areas of friction at aggregate scale. As such, they help us identify areas of opportunity for improving our product. However, they do not – in and of themselves – provide insight into how to improve those experiences. After a trend is discovered, we almost always have to zoom back to a more granular level to determine next steps. Analytics provide evidence, but they do not provide the solution. 

3. Receiving and screenshotting inbound user messages or reactions.

We’re all familiar with this scenario: after a new feature announcement, a small handful of users send us brief “Yay!” messages or thumbs-up emojis. We screenshot them, post them to our internal Slack channels, and impatiently await our team’s celebratory “100” or “clap” reactions. These “atta-boys” certainly are nice, but they’re anecdotal at best and don’t provide any insight into why users are responding positively or how we can improve the feature for those who aren’t. Qualitative, unsolicited feedback like this – whether collected through your live chat or ticketing tool – simply does not tell you what to do next.

A Framework for Post-Release Feedback Collection 

In user research, a methodology known as concept validation shows design concepts or prototypes to users early on to collect feedback and determine how well the concepts address users’ needs. Typically, concept validation involves a two-step feedback collection process: 1) an initial sentiment input, followed by 2) a post-sentiment survey that explores why users feel the way they do about the concepts in question. The same format works well for collecting feedback from users following a new feature release. Here’s how we recommend doing that:

Alongside your new feature announcement, include a single survey question designed to capture your users’ general sentiment, understanding that not every user will be willing to provide in-depth feedback. A “How do you feel about this?” question with a simple Positive/Neutral/Negative response is often plenty. We’ve seen teams collect initial sentiment in a number of different ways, but we haven’t seen any compelling evidence that additional inputs or different input methodologies work better than this simple 3-point system.
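For teams that deliver this prompt in-product, here’s a minimal sketch of how the first step might be modeled in TypeScript. Every type, field, and ID below is a hypothetical assumption for illustration, not the API of any particular survey tool:

```typescript
// Hypothetical model of the single-question sentiment prompt.
// All names here are illustrative, not from a real survey tool's API.
type Sentiment = "positive" | "neutral" | "negative";

interface SentimentPrompt {
  featureId: string; // the release this prompt is attached to
  question: string;  // kept deliberately short and low-effort
  options: Sentiment[];
}

// Example prompt attached to an assumed feature announcement.
const prompt: SentimentPrompt = {
  featureId: "example-new-feature", // placeholder ID
  question: "How do you feel about this?",
  options: ["positive", "neutral", "negative"],
};
```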

After collecting this initial user sentiment, follow up with a simple post-sentiment survey. What you’re trying to understand is “Why do you feel this way?” Our recommendation is to give users predefined answer options rather than an open text input. Although open text responses can surface more atypical, interesting answers, they are far less scalable than a predetermined set of responses. Depending on how advanced you’d like to be, you can either use a single set of predetermined answers regardless of sentiment, or provide different options based on each sentiment input.

For example, let’s say we launch a feature and a user is neutral about it. We want to know why they feel this way. Our post-sentiment survey question for neutral responses asks “Why do you feel this way?” and includes the following options:

  • “I don’t understand it.”
  • “It’s not relevant to me.”
  • “I wouldn’t use it frequently.”
  • “I want something else instead.”
  • “It could be better.”

We follow this question with an open text field in which users can provide additional thoughts, if they choose. Even without those additional thoughts, though, each of these answers provides valuable insight into how well our release is resonating. Maybe we have a positioning or education problem. Maybe the feature is only relevant to a small percentage of the user population. Maybe users overwhelmingly wish we had worked on something else instead. All of these are early indicators of potential next steps, both for this feature and for our roadmap more generally.
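To make the branching version concrete, here’s a minimal sketch of sentiment-specific follow-up options and a simple response tally, again in TypeScript. The positive and negative option lists, along with every type and function name, are our own illustrative assumptions:

```typescript
// Hypothetical sketch: branch the follow-up options on the initial
// sentiment, then tally predefined answers. Names are illustrative only.
type Sentiment = "positive" | "neutral" | "negative";

const followUpOptions: Record<Sentiment, string[]> = {
  neutral: [
    "I don't understand it.",
    "It's not relevant to me.",
    "I wouldn't use it frequently.",
    "I want something else instead.",
    "It could be better.",
  ],
  // Assumed examples; tailor these to your own feature and users.
  positive: ["It solves a real problem for me.", "It saves me time."],
  negative: ["It's confusing.", "It's missing something I need."],
};

interface FeedbackResponse {
  sentiment: Sentiment;
  reason: string;   // one of the predefined options above
  comment?: string; // the optional open text field
}

// Count how often each (sentiment, reason) pair is chosen so the
// dominant "why" behind each sentiment is easy to spot.
function tallyReasons(responses: FeedbackResponse[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const { sentiment, reason } of responses) {
    const key = `${sentiment}: ${reason}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```

Because the answers are predefined, the resulting tallies are directly comparable across releases – which is exactly what makes this approach more scalable than open text alone.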

Asking How Users Feel & Why Is All It Takes

Two-thirds of the long-term cost of a feature is incurred after its initial release, meaning that any feature that survives the test of time will likely require serious iteration and improvement. If you don’t have a plan for collecting user feedback alongside each of your new feature launches, you’re passing up a critical opportunity to learn exactly how you’ll need to iterate. Implementing a solid feedback collection process will allow you to extract the most value from your feature releases, so you can enter the iteration process armed with all the knowledge you need.