Every organization in the world claims to be “customer centric”.
“We love our customers!”, “We’re here to serve you!” & every other tag line you’ve ever heard.
But it’s also important (mandatory?) that the manner in which the company measures its own performance is “customer centric” as well.
Years ago, back in NY CitiPhone, we used “% of callers answered within 20 seconds” as the timeliness indicator for our call centers.
A “20-second wait” is considered the “maximum wait time” for a caller before they start to become dissatisfied. Yes, of course, it can be longer or shorter depending on the person, their reason for calling, how busy they are & other factors.
20 seconds happens to equate to 4 normal rings & was originally used by the Bell System & sales organizations as the upper threshold for a customer’s wait time.
So Citi adopted the “% in 20 seconds” as its timeliness indicator.
Many companies still use ASA (Average Speed of Answer) as their timeliness indicator. It simply averages out everyone’s wait time. But it doesn’t tell you “how many customers were theoretically satisfied with their actual wait time” & “how many customers weren’t”.
And while there is a mathematical correlation between Service Level (% of callers answered within X seconds) & Average Speed of Answer, I’m always a little leery about averages. Large deviations from the average carry too much weight & distort the overall number.
You can normally equate an 80% Service Level to an 11-12 second ASA, but I’ve always maintained: why measure an average when you can actually count the customers impacted?
Plus, we’re all familiar with the “the man with his head in the freezer & his feet in the oven is very comfortable – – on average” adage.
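To make the “head in the freezer” point concrete, here’s a minimal sketch (all wait times are made-up numbers, in seconds) showing how one or two badly delayed callers drag the average up while the “% within 20 seconds” count tells you exactly who was & wasn’t served in time:

```python
# Hypothetical wait times (seconds) for 9 callers; two were badly delayed.
waits = [5, 8, 12, 15, 18, 10, 7, 240, 300]

threshold = 20  # the "20-second" standard discussed above

# Service Level: actually count the customers answered in time.
service_level = sum(w <= threshold for w in waits) / len(waits)

# ASA: the two long waits distort the whole average.
asa = sum(waits) / len(waits)

print(f"Service Level (<= {threshold}s): {service_level:.0%}")  # 78%
print(f"ASA: {asa:.1f} seconds")                                # 68.3 seconds
```

Seven of the nine callers waited well under 20 seconds, yet the ASA alone (68 seconds!) would suggest a disaster for everyone — or, with different numbers, a healthy average could just as easily hide the two customers who waited 4-5 minutes.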
When I was with Citi, I fought long & hard with our Systems & Technology people on their performance indicators…especially the infamous “system availability time”, historically represented as a %.
When you think about 99% over the period of a month, then we’re talking about almost 7 1/2 hours of acceptable downtime (1% of the hours in a month).
But if those 7 1/2 hours occurred mostly on Mondays &/or during the weekday 9 AM-noon timeframe, I would consider that “terrible performance” since it could negatively impact 5-10% of the total # of customers we serve!
If that 1% downtime occurs during the graveyard shift when call volume is extremely low, then it’s no big deal.
I finally got our Systems partners to agree to keep the 99% standard, but to measure it in “forecasted customers impacted” instead of hours of downtime.
An hour of downtime on Monday from 9:30-10:30 AM can destroy a call center for a good part of the day & negatively affect the majority of callers with the residual queue effect it causes.
I’m not losing any sleep whatsoever if the system’s down from 3-4 AM.
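Here’s a rough sketch of what “forecasted customers impacted” looks like in practice — all the hourly call volumes below are hypothetical, but the idea is simply to weight each hour of downtime by the callers forecasted to hit it:

```python
# Hypothetical forecasted calls per hour for one weekday (hour 0 .. hour 23).
forecast = [40, 25, 15, 10, 10, 30, 120, 300, 600, 900,
            850, 700, 650, 600, 550, 500, 450, 400, 300,
            200, 150, 100, 80, 60]

daily_calls = sum(forecast)

def pct_customers_impacted(outage_hours):
    """% of the day's forecasted callers who hit the outage window."""
    return sum(forecast[h] for h in outage_hours) / daily_calls

# The same one hour of downtime, scored two very different ways:
print(f"9-10 AM outage: {pct_customers_impacted([9]):.1%} of the day's callers")
print(f"3-4 AM outage:  {pct_customers_impacted([3]):.1%} of the day's callers")
```

By the clock, both outages count identically against a “% uptime” standard; counted in customers, the 9 AM hour hits roughly a hundred times as many callers as the 3 AM hour — & that’s before the residual queue effect piles on.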
Take a close look at your business to see if your performance indicators are customer centric or not.
Do you measure the average time it takes to complete an investigation, process an application or deliver a package…or how many times (what %) you complete it within an acceptable timeframe?
If you promise a 3-day turnaround, what % of your customers received their answer within those 3 days? I really don’t care if your average was 2.4 days (although it’s still useful in engineering the process & calculating the resources needed).
If you claim to be all about the customer, then demonstrate it in how you measure your performance.
And it’s certainly fine to use “% of items completed within the promised timeframe” as well as “average turnaround time”, but never forget the customer!
Thanks for listening!
Oh, BTW, with a show of hands, how many found this piece to be helpful in…