Customer Satisfaction Score (CSAT) is one of the oldest and most widely used customer satisfaction KPIs. It’s also one of the most popular north-star metrics amongst support teams.
As with all metrics, it’s not without its complications. One is how to put the score into perspective. Is a CSAT of 80% good or bad? Zendesk and ACSI have great benchmarking reports that can help with this.
But there’s a complication that’s often overlooked: CSAT is not always measured in the same way, and if you don’t know which approach is being taken, comparisons become difficult. It’s also possible that the method your tool uses is artificially inflating your CSAT score.
Fundamentally, everyone agrees on what to measure: CSAT is the proportion of responses to a CSAT survey that are positive. Where things diverge is in deciding which scores to include for a given timeframe.
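For illustration, here’s a minimal sketch of that shared definition in Python. The “good”/“bad” labels are just placeholders for however your tool records a positive or negative rating:

```python
def csat(ratings):
    """Return CSAT as a percentage: the share of ratings that are positive."""
    if not ratings:
        return None
    positive = sum(1 for rating in ratings if rating == "good")
    return 100 * positive / len(ratings)

print(csat(["good", "good", "bad", "good"]))  # 75.0
```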
CSAT is always quoted for a time period, e.g. today, this week, the past 7 days, this month, and so on. The ambiguity comes from which responses to include when defining ‘today’, ‘this week’ and so on. That’s because there are two timespans at play - the date the score is given (or modified) and the date the evaluated ticket was created.
Like most people, I was oblivious to this inconsistency until I came across discrepancies between systems myself and noticed people asking about similar mismatches.

Some systems define “past 7 days” as tickets that were both created in the past 7 days and given a score in the past 7 days. (There are also subtleties as to whether the past 7 days includes today or starts from yesterday, but that’s a different story!) Other systems, like Zendesk Explore, understand “past 7 days” as tickets that were both solved and given a score in the past 7 days. Finally, other systems, like Geckoboard, will include all tickets that received a score within the set time period, irrespective of when they were created or solved. In other words, when you select “past 7 days” you’ll see what’s been scored in the past 7 days, even if the evaluated tickets were created or solved outside of that period.
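To make the difference concrete, here’s a hypothetical sketch of the three interpretations applied to the same set of tickets. The field names (`created`, `solved`, `rated`) are assumptions for illustration, not any particular tool’s schema:

```python
from datetime import date, timedelta

# Hypothetical ticket records: when each was created, solved,
# and when its CSAT rating was given (or last modified).
tickets = [
    {"created": date(2024, 4, 25), "solved": date(2024, 5, 6), "rated": date(2024, 5, 6)},
    {"created": date(2024, 5, 5),  "solved": date(2024, 5, 6), "rated": date(2024, 5, 7)},
    {"created": date(2024, 5, 8),  "solved": date(2024, 5, 9), "rated": date(2024, 5, 9)},
]

today = date(2024, 5, 10)
start = today - timedelta(days=7)  # start of the "past 7 days"

def in_window(d):
    return start <= d <= today

# Interpretation 1: created AND rated in the past 7 days.
created_and_rated = [t for t in tickets if in_window(t["created"]) and in_window(t["rated"])]

# Interpretation 2: solved AND rated in the past 7 days (the Zendesk Explore reading described above).
solved_and_rated = [t for t in tickets if in_window(t["solved"]) and in_window(t["rated"])]

# Interpretation 3: rated in the past 7 days, regardless of when the ticket
# was created or solved (the Geckoboard reading described above).
rated_only = [t for t in tickets if in_window(t["rated"])]

print(len(created_and_rated), len(solved_and_rated), len(rated_only))  # 2 3 3
```

The first ticket was created before the window but rated inside it, so it only counts under the second and third interpretations - which is exactly where the reported scores start to drift apart.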
I’m of the opinion that using the date the score is given (or modified) as the reporting period, and not restricting the score to only tickets created within that time frame, is far better practice. There may be times when you want to compare cohorts of customers, and in that case filtering by when the ticket was created makes sense, but these are specialised reports, as opposed to the true measure of your current customer satisfaction.
Filtering to only include tickets raised in the timeframe you’re reporting on artificially inflates your CSAT score. It’s well known that there’s a strong relationship between the speed at which a ticket is resolved (Time To Resolution) and satisfaction. Tying the creation date of a ticket to its evaluation date risks hiding insights from dissatisfied customers. Tickets rated “bad” due to a sluggish resolution time will slip through the net.
If the method you’re using to calculate CSAT doesn’t include these slow-to-resolve tickets, you won’t see their effect on your score, and you’ll miss opportunities to turn things around. What if you could still convert that dissatisfied customer into a satisfied one?
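A hypothetical example shows how the inflation plays out: one slow ticket created before the reporting window but rated “bad” inside it, alongside two quick, “good”-rated tickets.

```python
from datetime import date

start, end = date(2024, 5, 4), date(2024, 5, 10)  # the "past 7 days"

# (created, rated, rating) -- the first ticket took weeks to resolve
ratings = [
    (date(2024, 4, 20), date(2024, 5, 6), "bad"),
    (date(2024, 5, 5),  date(2024, 5, 6), "good"),
    (date(2024, 5, 7),  date(2024, 5, 8), "good"),
]

def csat(rows):
    return round(100 * sum(r == "good" for _, _, r in rows) / len(rows), 1)

# Counting everything rated in the window keeps the unhappy customer visible.
rated_in_window = [row for row in ratings if start <= row[1] <= end]
print(csat(rated_in_window))        # 66.7

# Also requiring the ticket to be *created* in the window drops the "bad" rating.
created_and_rated = [row for row in ratings
                     if start <= row[0] <= end and start <= row[1] <= end]
print(csat(created_and_rated))      # 100.0 -- artificially inflated
```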
There’s no doubt that the CSAT metric is a powerful tool, but like most tools, it’s worth going through the manual to check exactly what it does. Do you know exactly what your support system uses as the reporting period for CSAT?