
How does your company measure the success of its product over time? We’re all familiar with general engagement metrics such as Monthly Active Users and user satisfaction scores such as CSAT or NPS. While your company’s primary success metrics will vary based on your product’s unique characteristics, a product’s quality and a company’s bottom line are almost always impacted by one key factor: friction. Specifically, your team should consistently reduce or remove the areas of friction within your product experience that disrupt or otherwise inconvenience your users.

By identifying and taking steps to reduce the effort required for users to achieve their intended outcomes, you can dramatically improve your users’ engagement, satisfaction, and retention. We call this process friction analysis, and it’s arguably the most important effort product teams should undertake on a frequent and consistent basis. Fortunately, there’s a lightweight approach that can unlock the insights needed to address your product’s friction points: the Customer Effort Score. A Customer Effort Score (CES) is a single-question survey that quantifies the relative difficulty of each of the interactions and flows within your product. By understanding what CES is, why it matters, and how to implement it within your product, you can uncover surprising product improvements that have a profound impact on how your users interact with your product.

What is CES?

In essence, the goal of a Customer Effort Score is to measure the effort required to achieve an intended outcome with your product or service. 

Originally created by Gartner, CES was designed primarily as a customer experience survey for service organizations. According to Gartner, friction reduction was critically important for these organizations because they “…should strive to be low effort because effort is the driver with the strongest tie to customer loyalty.” Despite this services focus, though, CES can be equally valuable for SaaS organizations, since customer loyalty so strongly correlates with long-term customer retention and revenue growth. By tracking this metric and its drivers, a company can measure effort and identify points of friction in order to improve its users’ experience with its product.

Why Measure Effort?

According to Gartner’s research, 96% of customers who identify a product or service experience as high-effort become disloyal, compared to only 9% who have a low-effort experience. In a Harvard Business Review study, 94% of customers reporting low-effort CES scores said they would repurchase, while 84% reporting high-effort said they would speak negatively about the company to others. When compared against Customer Satisfaction (CSAT) and NPS, CES outperformed the other metrics in predicting customers’ intention to keep doing business with a company, increase the amount they spend, or spread positive word of mouth.

(Source: Harvard Business Review)

As product leaders, we’re well aware of the importance of increasing customer retention, which highlights the need to identify and improve high-effort experiences within our products. A few benefits of identifying and reducing the user effort required to achieve intended product outcomes include: 

    1. Increased customer loyalty. There is a direct correlation between effort and customer loyalty. By taking steps to decrease friction in your product, you create more loyal customers who are willing to share positive word-of-mouth reviews with others.
    2. Increased customer retention. Making your product easier to use makes it more “sticky”. A customer will prefer one product over another if it allows them to achieve the same goals with less effort. By decreasing the effort required to use your product, you make it less likely that customers will churn and choose a competitor’s product over yours. 
    3. Identify potential roadblocks. A negative CES isn’t all bad. When identified early in a feature’s lifecycle, negative CES scores allow your team to address glaring issues before they cause greater harm. Use this feedback to make improvements before more users are affected and your product experience or brand reputation suffers.

Understanding CES: The Traditional Approach

Typical Customer Effort Score surveys are 5- to 7-point Likert scales that ask a customer to what extent they agree or disagree that the company, product, or service made it easy for them to address the issue in question. The scale’s nomenclature may vary depending on the surveyor; commonly, the scale runs from “Strongly Disagree” to “Strongly Agree”, but it’s not uncommon to see something more colloquial, like “Effortless” to “Impossible”. 

The question format may also vary, but it always asks a user about the effort required to complete some task. In general, the nomenclature shouldn’t matter as long as it aligns with the question being asked. It is important, however, that companies remain consistent across all CES surveys so that results can be benchmarked against each other and aggregate trends can be understood.

(Source: HubSpot)

A numerical value is assigned to each of the options on the scale, allowing for the calculation of a score that can be tracked and measured against. For example, on a 5-point scale from “Very Difficult” to “Very Easy”, the former option would equal a 1 on the scale and the latter would equal a 5. A Customer Effort Score is then calculated by taking the number of respondents who at least “Somewhat Agree” with the statement or otherwise fall within the “positive” range of the scale (for example, a score of 4 or above on a 5-point scale) and dividing it by the total number of respondents.
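The calculation above can be sketched in a few lines of code. This is a minimal illustration, assuming a 5-point “easiness” scale (1 = “Very Difficult” … 5 = “Very Easy”) where 4 and above counts as the positive range, as described above; the function name and threshold parameter are our own.

```python
def customer_effort_score(responses, positive_threshold=4):
    """Share of respondents in the positive (low-effort) range.

    responses: list of integers on a 5-point scale, 1 = "Very Difficult"
    through 5 = "Very Easy". Scores >= positive_threshold count as positive.
    """
    if not responses:
        return 0.0
    positive = sum(1 for r in responses if r >= positive_threshold)
    return positive / len(responses)

# Example: 10 respondents, 7 of whom answered 4 or 5 -> CES of 0.7 (70%).
responses = [5, 4, 4, 3, 5, 4, 2, 5, 4, 1]
score = customer_effort_score(responses)
```

The same structure works for a 7-point scale; only the threshold for the “positive” range changes.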

Gartner originally recommended a 5-point approach based on the belief that the difference between any two responses within the low-effort range (e.g. “Easy” to “Very Easy”, or “Somewhat Agree” through “Strongly Agree”) is marginal, and that the main goal is to move customers out of the high-effort or neutral range. Today, though, there is no universal benchmark for a CES score. Generally, anything above a 5 on a 7-point scale is considered positive, and you should strive to convert high-effort respondents into low-effort respondents over time. A good goal is for a majority of survey respondents to agree that the task in question falls within the low-effort range. For instance, on a 7-point scale from “Strongly Disagree” to “Strongly Agree”, you want the majority of respondents answering “Somewhat Agree”, “Agree”, or “Strongly Agree”.

Making CES More Valuable for Product Teams

Most service companies that adopt CES take the traditional approach described above. They ask users a single question (e.g. “How difficult was it for you to complete this task?”) with answer choices ranging from “Very Difficult” to “Very Easy”. The primary problem with this approach for product teams is that it completely ignores the user’s expectation of difficulty when engaging with the feature or flow in question. Simply put: users don’t expect every feature in your product to be equally friction-free, nor do they expect the solution to each of their pain points to have the same degree of simplicity.

An example is helpful here: nobody expects TurboTax to be as simple to use as Tinder. Does that make users hate interacting with TurboTax because it is relatively more difficult? Absolutely not. In fact, TurboTax has been considered a best-in-class product for years because it transforms a process typically viewed as arduous and complex into a straightforward effort that most people can complete in roughly an hour. It’s not absolute simplicity that makes or breaks a product; it’s relative simplicity within the context of a user’s expectation of difficulty that ultimately matters.

The question, then, is how product teams can account for user expectation when setting up their own CES surveys. Fortunately, this can be accomplished with a slight adjustment to the answer options: introducing a new middle-most choice, “As Expected”. This option occupies the space generally reserved for a neutral answer, if you’ve traditionally provided one. 

By introducing an “As Expected” answer option, you can now identify which of the flows inside of your product have the greatest discrepancy between user expectation of difficulty and actual perceived user difficulty. That way, instead of always focusing on making everything inside of your product as effortless as possible, you can focus instead on addressing the friction points in your product that have the greatest perceived degree of difficulty for your users.

Our approach dramatically simplifies how CES is calculated and interpreted. Let’s break it down:

We’ve anchored the goal to a baseline (“As Expected”), so all we need to do is calculate the average CES and compare it to that baseline. Any score indicating “As Expected” or lower effort is great! Any score indicating more effort than “As Expected” probably requires your attention. 
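The baseline comparison above can be sketched as follows. This is a hedged illustration with an assumed mapping (not prescribed here): responses sit on a 5-point effort scale where 1 = “Much Easier Than Expected”, 3 = “As Expected”, and 5 = “Much Harder Than Expected”; the function and flow names are hypothetical.

```python
AS_EXPECTED = 3  # baseline: the middle-most "As Expected" option

def flag_friction(flow_responses):
    """Return flows whose average effort exceeds the "As Expected" baseline.

    flow_responses: dict mapping a flow name to a list of 1-5 effort
    responses (1 = much easier than expected, 5 = much harder).
    """
    flagged = {}
    for flow, responses in flow_responses.items():
        avg = sum(responses) / len(responses)
        if avg > AS_EXPECTED:  # more effort than users expected
            flagged[flow] = round(avg, 2)
    return flagged

results = {
    "onboarding": [3, 4, 5, 4, 3],  # average 3.8 -> needs attention
    "checkout":   [2, 3, 3, 2, 1],  # average 2.2 -> fine
}
flag_friction(results)
```

Flows that clear the baseline drop out of the result, so the output is exactly the list of friction points worth investigating first.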

(Image: Parlor CES results UI)

Use this interpretation to identify the points of friction within your product and take the necessary steps to improve the features or flows in question. After each iteration, continue to survey for CES to measure how these improvements change your users’ experience over time. 

The When and Where 

Most commonly, companies send CES surveys through email immediately after a user completes a specific flow or event. However, delivering CES surveys in-app is preferable, as it optimizes for accuracy and engagement. In general, we recommend running this survey for each primary task or behavior that your users must engage with in your product, such as onboarding. 

The best time to run this survey is often immediately after the task is completed, with repeated surveys of the same task over time as you make improvements. Keep in mind, however, that it may take more than one encounter for a user to accurately assess the amount of effort a particular part of your product experience requires. In that case, you may choose to delay the delivery of your CES survey until a user has engaged with the interaction in question a specific number of times. In general, the when and where will depend greatly on the specific product, service, or feature in question.

Wrap Up

Your product’s usability is one of the main factors driving customer retention and business growth. Thanks to CES, we have a lightweight tool to identify just how usable the features and flows in our products are. By measuring CES over time and using the findings to improve any points of friction, you turn your product into a fortress: one that protects against customer churn and turns users into loyal fans who are less likely to leave you for a competitor.