Lessons from 50 years of customer service automation: as AI adoption accelerates and AI agents are poised to function autonomously, human judgment and oversight are essential
by Intelliworx
When technology works as intended, it streamlines workflow, automates rote work and boosts productivity. The challenge is that it doesn’t always work as intended.
Sometimes this works out well. For example, the collaboration tool Slack was originally created as an internal communications platform for a gaming company. That focus didn’t take hold in the market, so the company pivoted and the rest is history.
At other times, it leads to unintended consequences. This is particularly true for anomalies and corner cases – the kinds of issues that don’t fit neatly in a spreadsheet and defy the binary logic of traditional computing processes.
Over-reliance on automation in customer service
Customer service is the quintessential example. This function has become highly automated, and while it optimizes productivity for a company, it often does so at the expense of its customers’ time.
For example, you are working at your desk and experience computer issues. You search the available help desk articles provided by the supplier, you ask your colleagues for help, and of course, you reboot the machine in an effort to resolve the matter.
When those options do not work, you are forced to call the customer support line. Inevitably, your call is met with lengthy phone trees. An automated voice suggests checking the help desk articles – the same ones you already read, which didn’t resolve the problem.
When you finally get a first-level support person on the line, their triage process is to start with the same basic steps you’ve already taken. It’s a frustrating experience for the customer – and it costs them a valuable resource they can never recover: time.
Despite adding more tools, customer issues grow
A recent survey of 1,000 Americans by Customer Care Measurement & Consulting (CCMC) and Arizona State University’s W. P. Carey School of Business places this into context:
- Support incidents have more than doubled. “77% of U.S. consumers reported experiencing a product or service problem in the past 12 months – a rate that has more than doubled since 1976.”
- Customer costs in wasted time. “59% of customers reported that their problem wasted time (an average of one full day).”
- Customer financial impact. “45% cited a financial loss” with “an average [cost] of $1,008.”
- Customers are increasingly angry. “64% of customers who experienced a problem felt rage about it, and 50% raising their voice to express displeasure, a record high.”
It’s not just customers that stand to lose out, either:
- Businesses risk revenue loss. “As a result, businesses are at risk of more than $596 billion in future revenue losses due to ineffective complaint handling and escalating customer dissatisfaction.”
It’s not that businesses don’t care; it’s that the technology systems and processes put in place obscure all but the tail-end of the customer experience (CX). In the struggle to apply finite resources for maximum benefit, they’ve become over-reliant on automation.
This didn’t happen suddenly. It evolved over the course of about 50 years. Businesses have steadily, and perhaps inadvertently, removed human judgment from the process. Those optimizations benefited companies in spreadsheets, but they have left many with blind spots. By the time a customer decides to cancel over the frustration, it’s too late to make up the lost ground.
This is the starting point we are at, collectively, with AI. It’s not alarmist to say the risks, depending on the industry, can be exponentially more consequential.
The ‘human in the loop’ imperative
Businesses have automated too many pieces of customer service. We should be wary of allowing this to happen to other business functions as AI is adopted. There’s more at stake, too, because the work that’s done by the federal government or the healthcare community, for example, often involves genuine, real-world life and death scenarios.
While there are many possible AI use cases in healthcare, reporting shows many of them are focused on eliminating rote work. This is a pragmatic approach to AI. It stages the technology to handle routine tasks to free up a provider’s time to focus on the things that really matter.
The federal government is also working hard to find viable AI use cases. A recent study of federal CIOs put AI as the top technology priority. There’s good reason for this, too: intelligence, national defense, permitting, and social safety benefits are far more significant priorities than routine customer service tickets, like those mentioned above.
AI is accelerating automation: it’s not just that change is happening fast, but that the rate of change itself is increasing. Where technology has traditionally been a tool to improve human productivity, AI is poised to do the production.
If pre-AI automation has points of failure for routine customer service matters, the severity of impact from AI agents acting autonomously on critical issues is exponentially higher. Keeping a human in the loop (HITL) seems like an obvious imperative.
Where humans should be kept in the loop
As a regulated industry, healthcare is taking a slow and methodical approach. The federal government is putting guardrails in place. Memorandums like M-25-21 and GAO-25-107933 are providing guidance – especially around “high-impact” use cases.
For example, the GAO articulates the necessity of human judgment for risk management:
“Ensure human oversight, intervention, and accountability suitable for high-impact use cases. When practicable and consistent with existing agency practices, agencies must ensure that the AI functionality has an appropriate fail-safe that minimizes the risk of significant harm.”
Even so, there remain elements of subjectivity. What is “practicable and consistent” to one agency may have a different interpretation at another. This is compounded by the fact that many of the use cases are nascent or experimental. They haven’t been pressure-tested against the asymmetrical and sometimes unpredictable characteristics of human behavior.
Take benefits determinations, for example. An AI agent designed to process disability claims will hit complicated cases: The medical documentation says one thing, the employment history suggests another, and the applicant’s narrative doesn’t cleanly match either of those. That’s not a binary decision. That’s a human judgment call with someone’s well-being at stake.
Principles for keeping a human in the loop
One possible risk mitigation strategy is to define a set of principles where human judgment is kept in the loop. These would encompass scenarios that include the following:
- Degrees of ambiguity;
- Ethical concerns that can’t be answered with a binary either/or decision;
- Conflicting data;
- Conflicting interpretations of data;
- Qualitative data sets;
- Judgment calls, particularly around judicial, security and surveillance issues.
This list is certainly subject to refinement and isn’t comprehensive. We cannot account for every “what if” that can be imagined. Yet we must strive to plan for those extraordinary cases and provide a simple and sustainable path for escalation when AI goes awry.
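As an illustration, principles like these could be encoded as a simple escalation check that routes a case to a human reviewer whenever any trigger fires. This is a minimal sketch, not an implementation from the article; every name, signal, and threshold below is a hypothetical assumption that an agency would define for itself.

```python
from dataclasses import dataclass, field

# Hypothetical domains where judgment calls always require a human.
SENSITIVE_DOMAINS = {"judicial", "security", "surveillance"}

# Assumed threshold; in practice this would be set by agency policy.
AMBIGUITY_THRESHOLD = 0.3


@dataclass
class CaseSignals:
    """Hypothetical signals an AI agent might attach to a case."""
    ambiguity_score: float = 0.0          # 0.0 (clear-cut) to 1.0 (highly ambiguous)
    has_ethical_concern: bool = False     # can't be resolved as a binary either/or
    sources_conflict: bool = False        # the underlying data disagrees
    interpretations_conflict: bool = False  # readings of the same data disagree
    is_qualitative: bool = False          # narrative or other qualitative data
    domains: set = field(default_factory=set)  # e.g. {"judicial"}


def requires_human_review(signals: CaseSignals) -> bool:
    """Escalate to a human when any of the principles above is triggered."""
    return (
        signals.ambiguity_score > AMBIGUITY_THRESHOLD
        or signals.has_ethical_concern
        or signals.sources_conflict
        or signals.interpretations_conflict
        or signals.is_qualitative
        or bool(signals.domains & SENSITIVE_DOMAINS)
    )
```

Under this sketch, the disability-claim example above – medical documentation, employment history, and the applicant’s narrative all pointing different ways – would set `sources_conflict` and escalate rather than letting the agent decide autonomously.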
AI is an incredibly valuable tool in the hands of subject matter experts. The U.S. should absolutely strive to maintain a leading role in the development and application of AI. Yet we should also take a lesson from the over-automation in customer service.
The survey above notes that the rate of customer service incidents has increased dramatically over about five decades at great expense. We can’t allow our aspirations for AI to create similar blind spots on use cases that are of far greater importance over the next 50 years.
Given all the information currently available, we believe the key is to ensure that we always keep a human in the loop.
* * *
Intelliworx has been providing purpose-built software to the federal government for 20 years and currently serves 40+ federal government agencies. The company is a certified service-disabled veteran-owned small business (SDVOSB) and is FedRAMP-authorized.
Contact us for a no-obligation demo.