Here are easy-to-understand explanations of top metrics the federal government can use to measure the success of its investment in CX
by Intelliworx
Customer experience (CX) is one of those trends that built gradually over time yet feels like it arrived overnight. The seemingly sudden interest is anything but sudden.
The government has seized this trend with vigor and for good reason: technology should enable things to run more efficiently and effectively.
For example, merely turning the tens of thousands of government paper forms into PDFs applies analog thinking to a digital world. The static nature of a PDF means every form has to accommodate all possibilities – even those that don’t apply to the person filling out the form. This is, in part, behind what OPM calls a “time tax.”
By contrast, dynamic forms ask only for the information needed – a recognition that a form is merely the beginning of a business process. As a result, everyone is spared tasks such as retyping information the government already has, which reduces the level of effort required of everyone involved – from constituents to civil servants.
Measure effort to justify CX projects and measure the returns
Effort levels have been used to make the business case for the government’s investment in CX. For example, as tech analyst Laura DiDio wrote of CX metrics in FedTech:
“The Food and Drug Administration gave the public and industry through July to comment on its Customer Experience Strategy for boosting satisfaction with its IT solutions by upping their accessibility, streamlining processes, easing adoption, and emphasizing engagement and feedback. The IRS uses negative feedback in particular to identify pain points and determine what is causing the problem.”
Importantly, this also very clearly benefits the government. As Martha Dorris, whom some endearingly call the Godmother of Government CX, pointed out to us in an interview, if the tax filing process were simplified, the U.S. Treasury would naturally see better cash flow.
That’s a good example of using effort to measure the return. This was a key point GAO Chief Information Officer Beth Killoran mentioned in a discussion with Dana Sukontarak, who was formerly an editor with the Federal News Network:
“The goal is to reduce that effort over time so that they are able to focus on accomplishing their mission and producing the work products they have and being able to build the legislation that helps the American people.”
We couldn’t agree more. In fact, that’s one of several predictions our leadership team has for government this year: precise and empirical measures of reduced effort with the same or better outcomes. In other words, the reduction in OPM’s “time tax” could become an effective standard measure of the returns gleaned from the government’s investment in CX.
From a technological perspective, it’s merely a matter of analytics. Conventional web analytics have long tracked time-on-page (ToP). This simply applies that measure to common government processes that are moving to digital formats – such as applications for permits, benefits or tax filing.
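As a rough illustration, here is a minimal sketch of that idea in Python. The event structure, field names and data are all made up for illustration – not drawn from any specific analytics product:

```python
from datetime import datetime

# Hypothetical analytics events for a digital application process.
events = [
    {"session": "a1", "step": "start",  "timestamp": "2024-03-01T09:00:00"},
    {"session": "a1", "step": "submit", "timestamp": "2024-03-01T09:14:30"},
    {"session": "b2", "step": "start",  "timestamp": "2024-03-01T10:05:00"},
    {"session": "b2", "step": "submit", "timestamp": "2024-03-01T10:41:00"},
]

def avg_time_to_completion(events):
    """Average minutes from 'start' to 'submit' across sessions."""
    starts, ends = {}, {}
    for e in events:
        ts = datetime.fromisoformat(e["timestamp"])
        if e["step"] == "start":
            starts[e["session"]] = ts
        elif e["step"] == "submit":
            ends[e["session"]] = ts
    minutes = [(ends[s] - starts[s]).total_seconds() / 60
               for s in starts if s in ends]
    return sum(minutes) / len(minutes)

print(f"Average time to completion: {avg_time_to_completion(events):.1f} minutes")
```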
5 CX metrics to prove value
Measuring time-to-completion is a relatively novel way to measure CX, but it is just one way. The private sector has used a number of established CX metrics for years, and all of them are worth exploring by government agencies – indeed, some agencies have already implemented several.
1. Return on investment (ROI)
Return on investment, or ROI, often gets conflated with benefits. The two are similar, but they are not the same. Benefits are typically illustrative or anecdotal. ROI, by contrast, is properly interpreted as a mathematical formula:
ROI = (Benefit of initiative – Cost of initiative) / Cost of initiative x 100
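For illustration, here is a minimal sketch of that formula in Python, using made-up numbers:

```python
def roi(benefit: float, cost: float) -> float:
    """ROI = (benefit - cost) / cost x 100, expressed as a percentage."""
    return (benefit - cost) / cost * 100

# Hypothetical example: a CX project costs $200,000 and avoids
# $350,000 in call-center and rework costs.
print(f"ROI: {roi(350_000, 200_000):.0f}%")  # ROI: 75%
```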
We believe there are two ways government agencies can use this formula:
- Cost avoidance or reduction, where a CX project reduces the amount of interaction or support the government needs to provide – for example, answering questions about applying for benefits.
- Cost to constituents, where an average economic value per unit of constituent time is developed – not unlike the “billable hour” used by lawyers and consultants. This would require sound economic diligence, but it could also provide a standardized measure for comparing CX returns across agencies.
Arguably, ROI is not a direct measure of CX from a user perspective, but it is absolutely essential from a taxpayer and a federal budget perspective.
2. Customer Satisfaction (CSAT)
A customer satisfaction score, or CSAT, is a fancy name for a simple survey – often with just one question:
- How satisfied are you with _____?
Responses are measured on a 5-point scale (i.e., a Likert scale) such as:
- Very unsatisfied
- Unsatisfied
- Neutral
- Satisfied
- Very satisfied
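To turn responses on this scale into a single number, a common private-sector convention – an assumption here, not a government standard – is “top-two-box” scoring: the share of respondents who answered satisfied or very satisfied. A minimal sketch with made-up data:

```python
# Hypothetical responses: 1 = very unsatisfied ... 5 = very satisfied.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

# Top-two-box CSAT: percentage of respondents answering 4 or 5.
csat = sum(1 for r in responses if r >= 4) / len(responses) * 100
print(f"CSAT: {csat:.0f}%")  # 7 of 10 responses are 4 or 5 -> 70%
```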
Some agencies use this effectively now; however, there are caveats. Primarily, be sure the CSAT question is clear and precise, otherwise you can wind up with misleading conclusions.
For example, a frustrated constituent might leave negative feedback after a support call. Leaders may be inclined to attribute the dissatisfaction to the individual support representative when, in the constituent’s mind, they are grading the overall experience.
3. Customer Effort Score (CES)
A customer effort score (CES) is often a similar one-question survey:
- Please state your agreement with the following: ABC agency makes it easy to handle my issues.
Answers are also measured on a five-point scale:
- Strongly agree
- Agree
- Neither agree nor disagree
- Disagree
- Strongly disagree
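CES is commonly reported as the average of the responses – here assuming a mapping of 1 for “strongly disagree” up to 5 for “strongly agree” (both the mapping and the data are illustrative):

```python
# Hypothetical CES responses: 1 = strongly disagree ... 5 = strongly agree.
responses = [4, 5, 3, 4, 2, 5, 4]

# Average agreement that the agency makes it easy to handle issues.
ces = sum(responses) / len(responses)
print(f"CES: {ces:.1f} out of 5")
```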
What is the difference between CES and the measure of effort described above (i.e., ToP)? ToP is behavioral data derived from an analytical measure of the time spent in a given platform to complete a process. CES, by contrast, is a measure of opinion or feeling.
It’s important to have both. People need to be heard – and yet they also have a tendency to give survey answers that are at odds with their behavior.
4. Net Promoter Score® (NPS)
The Net Promoter Score (NPS) was invented by Bain & Co. consultant Fred Reichheld, whose research found that “a 5% increase in customer retention produces more than a 25% increase in profit.” This is because returning customers tend to “buy more from a company over time.”
The government has products too, for which this could be an appropriate indicator of CX. For example, it would have been very useful in pilot programs for the new FAFSA form and the IRS Free File program.
An NPS survey is also a one-question survey:
- How likely are you to recommend this product to a friend or colleague?
The answers are scored on a 0-to-10 numerical scale, and respondents are then broken out into three distinct categories based on the score they give:
- Promoters are those who scored 9 or 10
- Passives are those who scored 7 or 8
- Detractors are those who scored 0 to 6
The score is then calculated with this formula:
- NPS = percentage of promoters – percentage of detractors
As a result, NPS can range from -100 to +100. Any positive number is generally considered “good.”
For example, if 60% of respondents are promoters, 30% are passives and 10% are detractors, the result looks like this:
- NPS = 60 – 10
- NPS = 50
An NPS score of 50 is solid.
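Here is a minimal sketch of the full calculation from raw 0-to-10 responses (the data is made up for illustration):

```python
# Hypothetical 0-10 responses to "How likely are you to recommend...?"
scores = [10, 9, 9, 8, 7, 10, 6, 9, 3, 10]

promoters = sum(1 for s in scores if s >= 9)    # scored 9 or 10
detractors = sum(1 for s in scores if s <= 6)   # scored 0 to 6
nps = (promoters - detractors) / len(scores) * 100

print(f"NPS: {nps:.0f}")  # 6 promoters - 2 detractors over 10 responses -> 40
```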
5. Average time to resolution (ART)
The average time to resolution was born from the helpdesk and support center. For example, when the network goes down and you submit a ticket, the clock starts running to measure how long it takes to resolve the issue.
ART is often measured in minutes, hours and days. Since some incidents can be more labor-intensive than others, it’s important to categorize them by type. This provides for the proverbial “apples-to-apples” comparison later.
This metric can easily be adapted to measure the average time to complete a process. This differs from the time-on-page example above, where the data is derived from time spent in an application – here, the measure is the elapsed time for a multi-stage process from start to finish.
For example, how much time elapses, on average, between the start of an application for a permit to a decision? Or how much time elapses, on average, between the start of an application for social security benefits and a decision?
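A minimal sketch of that measure, assuming hypothetical case records that pair each application with a category, a start date and a decision date:

```python
from collections import defaultdict
from datetime import date

# Hypothetical cases: (category, start date, decision date).
cases = [
    ("permit",   date(2024, 1, 2),  date(2024, 1, 30)),
    ("permit",   date(2024, 2, 5),  date(2024, 2, 19)),
    ("benefits", date(2024, 1, 10), date(2024, 3, 1)),
    ("benefits", date(2024, 2, 1),  date(2024, 3, 20)),
]

# Group elapsed days by category for an apples-to-apples comparison.
elapsed = defaultdict(list)
for category, start, decision in cases:
    elapsed[category].append((decision - start).days)

for category, days in elapsed.items():
    print(f"{category}: average {sum(days) / len(days):.0f} days to decision")
```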
Caveats to CX metrics
There are several caveats to consider when choosing metrics.
First, all the metrics above are fairly common, but that doesn’t mean every one of them is relevant to your agency. Make thoughtful decisions about metrics; be sure they give you as complete a picture as possible.
Second, as the management guru Peter Drucker is credited with saying, that which gets measured gets managed. Yet this cuts both ways, because what gets managed can be gamed – and gamed metrics undermine the purpose of having them in the first place.
Third, metrics can and should evolve. As user behavior and organizational goals change, metrics should be added, pruned and modified in tandem.
Finally, and most importantly, metrics are usually directional, not conclusive. It’s important to keep track of anecdotal evidence and consider the holistic impact of any given CX project.
* * *
Intelliworx serves federal agencies big and small with a range of solutions, including application management, government workflow and financial disclosure. We’d rather show you than tell you – you are welcome to request a no-obligation demo.