6 Key Metrics to Prove the Impact of Design on Your Customer Experience

Written by Jules Prick, Partner
Dec 21, 2017 · 13 min read

At Koos, we help organisations improve their products and services through human-centered design. In our experience, companies increasingly recognise the business value of design. However, many still fail to measure their design efforts, let alone effectively implement the right metrics.

This creates a difficult tension – ‘We believe we designed a great product, but does it actually yield anything?’

After working alongside a diverse set of organisations, we have learned which metrics are essential to a good start. In this blog, we propose six Key Performance Indicators (KPIs for short) to measure the effect of your design efforts.

The 6 Key Metrics for Design Impact

1. Net Promoter Score (NPS)

The first KPI, the widely used Net Promoter Score (NPS), is a loyalty metric developed by Bain & Company.

NPS measures overall happiness with your services, not the latest interaction, nor does it give you detail at journey or feature level. It is a general indication of the loyalty performance of the company and should give C-level management an indication of customer experience strategy and needed investment.

NPS is loved because of its simplicity. With one question you get endless benchmarking opportunities and it often correlates strongly with business growth and retention.

On the other hand, it is criticised for oversimplifying the real business complexity, lack of actionable insight, cultural bias (responses can vary significantly across cultures) and its general misuse as a standalone performance indicator.

Airlines such as Transavia and tourism company TUI have found a strong relationship between an increase in NPS and net revenue, making a good case for C-level management to invest in design.

How does it work?

Surveys should be sent periodically to a random sample of your customer base – not only those with recent interactions – asking:

“On a scale of 0 to 10, how likely are you to recommend our product/service to a friend or colleague?”

Net Promoter Score is a single number, ranging from -100 to 100, that represents the sentiment surrounding your services. Customers fall into one of three categories – detractors, passives or promoters. NPS is calculated by subtracting the percentage of detractors from the percentage of promoters.

The higher your NPS is, the more loyal your customers will be.

  • Detractors are not satisfied with your service offering, grading your services with a 0–6, and are unlikely to do business with you again. Even worse, they can be so unsatisfied that they might impede your growth by spreading negative reviews.

  • Passives fall somewhere in the middle, grading a 7 or 8. They are quite satisfied but not happy enough to actively recommend your service to their peers.

  • Promoters are customers who grade you with a 9 or 10. They will actively share how great your services are with their peers and promote your service, and they are more likely to adopt more of your services or use them more frequently.
The three categories of the NPS score; detractors, passives and promoters
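The calculation above can be sketched in a few lines. This is a minimal illustration, not an official implementation; the category thresholds follow the definitions given here:

```python
def nps(scores):
    """Compute Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) only
    count toward the total. NPS = %promoters - %detractors,
    giving a number between -100 and 100.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # → 30
```

Note that a respectable-looking score can hide very different distributions: a 30 can come from mostly passives or from a polarised mix of promoters and detractors, which is one reason not to use NPS as a standalone indicator.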

2. Customer Satisfaction Score (CSAT)

Although many believe NPS measures satisfaction (it actually measures loyalty), CSAT is the KPI dedicated to satisfaction.

Another important difference is that CSAT is used at the interaction level, meaning CSAT questions can be asked after each service interaction (not after every interaction for each customer… please). Asking follow-up questions allows you to understand more about why your customers are (or are not) satisfied – especially when combined with explorative research.

You could say this metric is invaluable for product, service and journey teams to focus their initiatives where design efforts are most needed – while, of course, supporting the bigger strategy.

Ensure you are not improving service interactions in a vacuum. Research by McKinsey shows that improvement efforts at the journey level are better for increasing customer recommendation, differentiation and, in the end – cold hard cash!

How does it work?

To obtain your CSAT score, ask your customers how satisfied they are with their latest interaction. The rating scale can vary from a 5- or 7-point Likert scale to a 1–100% scale.

Within the CSAT survey, additional questions are asked to probe customers to tell why they are satisfied or not. This means that, unlike NPS, CSAT can give you great pointers to service improvement opportunities when asked after each service interaction.

An example of a CSAT survey question
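One common convention (an assumption here, as scoring practices differ per organisation) reports CSAT as the percentage of respondents choosing the top two boxes of the scale. A quick sketch:

```python
def csat(ratings, scale_max=5, top_boxes=2):
    """Percentage of respondents giving a 'satisfied' rating.

    A common convention counts the top two boxes of the scale
    (e.g. 4 and 5 on a 5-point Likert scale) as satisfied.
    """
    if not ratings:
        raise ValueError("no survey responses")
    threshold = scale_max - top_boxes + 1
    satisfied = sum(1 for r in ratings if r >= threshold)
    return round(100 * satisfied / len(ratings))

# Six of eight respondents rated a 4 or 5 on a 5-point scale
print(csat([5, 4, 4, 3, 5, 2, 4, 5]))  # → 75
```

Because CSAT is asked per interaction, tracking it per touchpoint (rather than as one blended number) is what makes it actionable for journey teams.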

3. Customer Effort Score (CES)

The CES score was introduced by CEB in 2010 and popularised by the 2013 book “The Effortless Experience”.

Where NPS and CSAT scores focus on creating a more enjoyable customer experience, CES focuses on reducing customer effort. Basically, we could say this is the amount of effort a customer needs to make to get their customer job done (like paying a bill, for example).

Although CES is often used only by customer service to measure issue handling, it can serve as an important business objective for other journeys (and supporting processes) as well – especially those where effort or time spent plays a critical role in the overall experience, like ordering something online, booking an appointment or returning your rental car.

 

If your team wants to start using CES, we have some advice from a design perspective:

  • Focus. You do not have to measure CES for all interactions. Define the journey phases in which CES can play a critical role.

  • Map it. Do not fix symptoms while the problem persists. CES will only be of value if you can identify specific points of friction and opportunities for improvement that lift the total experience – at a journey level.

  • Ask why. Traditionally, the CES survey does not ask customers why they find a certain interaction effortless or not. By asking the why question after the rating, you gain insight into the barriers that prevent a service interaction from being easy and effortless. Conduct some interviews when you have to.

  • Make it objective. While CES surveys measure customers’ perceived effort, you can also assess objective measures such as task success rate, time on task, or user error rate. These metrics objectively reveal where customers struggle to complete tasks or interactions and where they encounter barriers.
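The objective measures from the last point are straightforward to derive once you log task attempts. The log format below is hypothetical – each attempt records whether it was completed, the seconds spent, and the number of user errors:

```python
from statistics import mean

# Hypothetical task log: (completed, seconds_spent, errors) per attempt
attempts = [
    (True, 42.0, 0),
    (True, 65.5, 1),
    (False, 120.0, 3),
    (True, 38.2, 0),
]

# Task success rate: share of attempts that reached completion
success_rate = 100 * sum(1 for done, _, _ in attempts if done) / len(attempts)

# Time on task: usually reported over successful attempts only
avg_time = mean(t for done, t, _ in attempts if done)

# User error rate: average errors per attempt
error_rate = mean(e for _, _, e in attempts)

print(f"success rate: {success_rate:.0f}%")        # → success rate: 75%
print(f"avg time on task: {avg_time:.1f}s")
print(f"errors per attempt: {error_rate:.2f}")
```

Comparing these objective numbers against the perceived-effort scores from the survey is where the interesting friction points tend to show up.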

 

Research by Oracle showed that customer satisfaction increased from 61% to 93% when customers reported low effort. Additionally, a one-point decrease in CES (a 20% decrease in your score) shows a 14% increase in intent to repurchase.

Sounds nice right?

How does it work?

To determine the CES score, companies need to ask the following question after an interaction:

“On a scale of 1 to 5, with 5 being the highest effort, please indicate the total effort required by you to complete your [insert interaction here].”

 

The responses will provide a Customer Effort Score as follows:

Customer Effort Score = (sum of all customer effort scores) ÷ (total number of responses)
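In code, that formula is a plain average over the survey responses – a minimal sketch:

```python
def ces(responses):
    """Average Customer Effort Score: sum of all scores divided by
    the number of responses.

    On this 1-5 scale, 5 means highest effort, so lower is better.
    """
    if not responses:
        raise ValueError("no survey responses")
    return sum(responses) / len(responses)

print(ces([1, 2, 1, 3, 2, 1, 4]))  # → 2.0
```

Remember that, unlike NPS and CSAT, a falling CES is good news, so be careful when putting it on the same dashboard.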

4. Customer Lifetime Value (CLV)

Aligning design projects with more common financially focused metrics can help to prioritise and finance your design initiatives with management.

Take Customer Lifetime Value (CLV), a prediction of the net profit attributed to the entire future relationship with a customer. Many business KPIs contribute to a high Customer Lifetime Value, such as margin, cross-sell rate, upsell rate, acquisition cost and rate, retention rate and churn rate – all metrics that can have their value when combined with certain interactions and journey phases.

Research has shown that 55% of all customers are willing to pay more for a superior experience, making it possible to increase margins.


Customer retention is cheaper than customer acquisition, and a great customer experience is proven to drive loyalty and increase retention. A design-driven approach can also achieve better cross- and upsell, as demonstrated in our collaboration with JustLease. In this project, we applied service design methodologies and design sprints to increase customers’ financial value, aiming for an increase in service revenue.

How does it work?

The way CLV is calculated differs per organisation and can range from a simple crude heuristic to a complex calculation. Either way, it shows the impact of design efforts in cold hard cash.

5. Cost to Serve (CtS)

Another useful metric focuses on the costs incurred to serve a specific customer. This is especially helpful in organisations where management is not yet convinced of the value of customer experience.

At Koos, we have started many first design initiatives on this principle. Let’s face it – which organisation does not prioritise cost reduction?

 

Call Reduction

A typical client request related to CtS is “We need to reduce the number of calls to Customer Service”. Expenses might also include customer-facing staff, sales reps or internal system costs. Other requests, such as digitalisation or the design of self-help capabilities, are also ways to reduce serving costs by means of service design.

A powerful first approach is co-creation; we put people from different departments and expertise together in a room and start mapping the service as a whole. A great tool for this is our Customer Journey Ecosystem. This often results in departments realising that if they work together, they can offer the service more effectively and reduce costs by doing so.

Co-creation session of Design Doing trainings for the Municipality of Amsterdam

Service blueprint

Service Blueprint is the #1 tool to look into cost reductions from a service perspective. By mapping out the internal processes needed to serve the customer well, internal pain points, biggest expenses and opportunities for serving efficiency can be identified. 

While some designers argue that service design aims to enhance customer experience rather than directly reduce costs, improving the customer experience often translates into a reduction in service costs as a beneficial side effect.

The American telco Sprint has stated that by improving the customer experience, they’ve managed to reduce their customer care costs by as much as 33%, according to HBR.

6. Time to Market (TtM) and Innovation Success Rate (ISR)

Two design-driven KPIs that are often overlooked are Time to Market (TtM) and Innovation Success Rate (ISR).

With new products and services entering the market at an increasing pace, and competitors luring customers away by copying your services, the speed and success rate with which services are brought to market are often important differentiators.

First, by pinpointing unmet customer value, design enables companies to create new business value. Second, design accelerates and de-risks the innovation process through co-creation and experimentation.

In practice, Service Design contributes to a faster Time to Market and a better Innovation Success Rate in three different ways:

  • Pinpoints value. A global management consultancy states that 48% of the R&D budget is wasted due to weak insights, meaning what is developed is not what customers want. Design research methodologies can give you deep customer insights and pinpoint real customer value at the beginning of the innovation process.

 

  • Creates Alignment. Implementing service design strategically and looking at innovations or service propositions from a customer’s perspective fosters better alignment across departments. This enables all teams to know where to focus, delivering more targeted innovations to the market in a timelier manner.

 

  • Allows Early Validation. By creating quick prototypes, you can validate your biggest assumptions with real customers at the early stages of the design process. This not only makes it easier to adjust your course but also increases your success rate.

 

As David Kelley likes to state:

“Fail fast to succeed sooner.”

Start getting a grip on your Customer Experience

There are many ways to measure your design impact, and the right approach will differ per company. We have identified four key strategies to maximise your design and innovation ROI.

  • Take a holistic view. Data gains value when contextualised. Zoom out and align your metrics around your key journeys to maximise outcomes.
  • Ask why. Beyond numbers, delve into customer motivations. Qualitative insights drive more effective innovation than data alone.
  • Take Action. Insight is only valuable when you do something with it. Despite widespread feedback collection, only 29% of companies act on it. Integrate design and data teams for better execution.
  • Compare. In the end, you will need to compare the status quo with the new situation, so make sure you have existing data and KPIs to use as a baseline.

 

Looking for a prime example of this strategy in practice?

You can read about our partnership with the Dutch Railways (NS). Together, we implemented journey management connecting their most important metrics from all their macro journeys down to user flows.
