How does your company measure the success of its product over time? We’re all familiar with general engagement metrics such as Monthly Active Users and user satisfaction scores such as CSAT or NPS. While your company’s primary success metrics will vary based on your product’s unique characteristics, a product’s quality and company’s bottom line will almost always be impacted by a key factor: friction. Specifically, your team should consistently reduce or remove areas of friction within your product experience which disrupt or otherwise inconvenience your users.
By identifying and taking steps to reduce the effort required for users to achieve their intended outcomes, you can dramatically improve your users’ engagement, satisfaction, and retention. We call this process friction analysis, and it’s arguably the most important practice for product teams to revisit on a frequent, consistent basis. Fortunately, there’s a lightweight approach that can unlock the insights needed to address your product’s friction points: the Customer Effort Score. Customer Effort Score (CES) is a single-question survey that quantifies the relative difficulty of the different interactions and flows within your product. By understanding what CES is, why it matters, and how to implement it within your product, you can uncover opportunities for surprising product improvements that have a profound impact on how your users interact with your product.
What is CES?
In essence, the goal of a Customer Effort Score is to measure the effort required to achieve an intended outcome with your product or service.
Originally created by Gartner, CES was designed primarily as a customer experience survey for service organizations. According to Gartner, friction reduction was critically important for these service organizations because they “…should strive to be low effort because effort is the driver with the strongest tie to customer loyalty.” Despite this services focus, though, CES can be equally valuable for SaaS organizations, since customer loyalty so strongly correlates with long-term customer retention and revenue growth. By tracking this metric and its drivers, companies can measure effort, identify points of friction, and make targeted improvements to their users’ experience with their product.
Why Measure Effort?
According to Gartner’s research, 96% of customers who identify a product or service experience as high-effort become disloyal, compared to only 9% who have a low-effort experience. In a Harvard Business Review study, 94% of customers reporting low-effort CES scores said they would repurchase, while 84% reporting high-effort said they would speak negatively about the company to others. When compared against Customer Satisfaction (CSAT) and NPS, CES outperformed the other metrics in predicting customers’ intention to keep doing business with a company, increase the amount they spend, or spread positive word of mouth.

As product leaders, we’re well aware of the importance of increasing customer retention, which highlights the need to identify and improve high-effort experiences within our products. A few benefits of identifying and reducing the user effort required to achieve intended product outcomes include:
- Increased customer loyalty. Effort and customer loyalty are directly linked: the less effort your product requires, the more loyal your customers tend to be. By taking steps to decrease friction in your product, you create more loyal customers who are willing to share positive word-of-mouth reviews with others.
- Increased customer retention. Making your product easier to use makes it more “sticky”. A customer will prefer one product over another if it allows them to achieve the same goals with less effort. By decreasing the effort required to use your product, you make it less likely that customers will churn and choose a competitor’s product over yours.
- Identify potential roadblocks. A poor CES isn’t all bad. When identified early in a feature’s lifecycle, low scores allow your team to address glaring issues before they impact more users or harm your product experience and brand reputation. Use this feedback to make improvements early.
Understanding CES: The Traditional Approach
Typical Customer Effort Score surveys are 5- to 7-point Likert scales that ask a customer to what extent they agree or disagree that the company, product, or service made it easy for them to address the issue in question. The scale’s labels may vary depending on the surveyor: commonly they run from “Strongly Disagree” to “Strongly Agree”, but it’s not uncommon to see something more colloquial like “Effortless” to “Impossible”.
The question format may also vary, but it always asks a user about the effort required to complete some task. In general, the exact wording shouldn’t matter as long as it aligns with the question being asked. However, it is important that companies stay consistent across all of their CES surveys so that results can be compared against one another and benchmarked over time to better understand the aggregate picture.

A numerical value is assigned to each option on the scale, allowing you to calculate a score that can be tracked and measured against. For example, on a 5-point scale from “Very Difficult” to “Very Easy”, “Very Difficult” would equal a 1 and “Very Easy” would equal a 5. A Customer Effort Score is then calculated by taking the number of respondents who fall within the “positive” range on the scale (for example, a score of 4 or above on that 5-point scale, or at least “Somewhat Agree” on an agreement scale) and dividing it by the total number of respondents.
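To make the arithmetic concrete, here’s a minimal Python sketch of that calculation. The 4-or-above threshold follows the 5-point example above, and the function name and sample responses are purely illustrative, not a prescribed implementation.

```python
# Minimal sketch of the CES calculation described above.
# Assumes a 5-point scale where 1 = "Very Difficult" and 5 = "Very Easy",
# and treats a response of 4 or above as "positive" (low effort).

def customer_effort_score(responses, positive_threshold=4):
    """Return the share of respondents in the positive (low-effort) range."""
    if not responses:
        return 0.0
    positive = sum(1 for r in responses if r >= positive_threshold)
    return positive / len(responses)

# Example: 10 survey responses on the 1-5 scale
responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]
print(f"CES: {customer_effort_score(responses):.0%}")  # 7 of 10 positive -> "CES: 70%"
```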
Gartner originally recommended a 5-point approach based on the belief that the difference between any response within the low-effort range (e.g. “Easy” to “Very Easy”, or “Somewhat Agree”, “Agree”, and “Strongly Agree”) was marginal, and that the main goal is to move customers out of the high-effort or neutral range. At the moment, though, there is no universal benchmark CES score. Generally, a 5 or above on a 7-point scale is considered positive, and you should strive to convert high-effort respondents into low-effort respondents over time. A good goal is for a majority of survey respondents to agree that the task in question is low effort. For instance, on a 7-point scale from “Strongly Disagree” to “Strongly Agree”, you want the majority of respondents to answer “Somewhat Agree”, “Agree”, or “Strongly Agree”.
While there are a number of reasons to measure customer effort, this metric has limitations worth noting. Customer effort only applies to a single aspect of the product, not the business as a whole: although you can track the amount of effort required for a particular task or service, the score alone won’t tell you what exactly about the task was difficult or why the process was high effort. Along the same lines, factors such as cost, competition, and product quality are not taken into account by this metric.
CES: When and Where
Most commonly, companies send CES surveys through email immediately after a user completes a specific flow or event. However, delivering CES surveys in-app is preferable in order to optimize for accuracy and engagement. In general, we recommend running this survey for each primary task or behavior that your users must engage with in your product, such as onboarding.
The best time to run this survey is often immediately after the task is completed, with repeat surveys of the same task over time as you make improvements. Keep in mind, however, that a user may need more than one encounter to accurately assess the amount of effort a particular part of your product experience requires. In that case, you may choose to delay delivering your CES survey until a user has engaged with the interaction in question a certain number of times, as in the sketch below. In general, the when and where will depend greatly on the specific product, service, or feature in question.
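As a rough sketch of that delay rule, the snippet below only triggers the survey once a user has completed the task a minimum number of times. The threshold, function names, and in-memory data are assumptions for illustration, not any specific survey tool’s API.

```python
# Hypothetical sketch of an in-app CES trigger that waits until a user has
# completed a task a minimum number of times before showing the survey.
# The threshold, names, and data structures are illustrative assumptions.

MIN_COMPLETIONS = 3  # assumed threshold; tune per task

def should_show_ces_survey(user_id, task_name, event_counts, already_surveyed):
    """Return True once the user has enough experience with the task to rate its effort."""
    if (user_id, task_name) in already_surveyed:
        return False  # avoid re-surveying the same user for the same task
    return event_counts.get((user_id, task_name), 0) >= MIN_COMPLETIONS

# Example usage with in-memory stand-ins for your analytics data
event_counts = {("user_42", "onboarding"): 3}
already_surveyed = set()

if should_show_ces_survey("user_42", "onboarding", event_counts, already_surveyed):
    print("Show the onboarding CES survey to user_42")
    already_surveyed.add(("user_42", "onboarding"))
```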
Wrap Up
Your product’s usability is one of the main factors driving customer retention and business growth. Thanks to CES, we have a lightweight tool to identify just how usable the features and flows in our products are.
However, most service companies that adopt CES will take the traditional approach described above. They’ll ask users a single question (e.g. “How difficult was it for you to complete this task?”) with a collection of answer choices that range from “Very Difficult” to “Very Easy”. The primary problem with this approach for product teams is that it completely ignores the context of the user’s expectation of difficulty when engaging with the feature or flow in question. Simply put: users don’t expect every feature in your product to be equally friction free, nor do they expect the solution to each of their pain points to have the same degree of simplicity.
The question, then, is how product teams can account for user expectations when setting up their own CES surveys. Fortunately, this can be accomplished by slightly adjusting the format of the answer options. Our approach dramatically simplifies how CES is calculated and interpreted, so you can improve any points of friction that exist and protect against customer churn.
For a full, in-depth look into all things CES and turning your users into loyal fans, download our guide!
Yonas Dinkneh
Product Manager at Parlor. Enthusiastic about spicy foods and cargo pants.