Don’t Blame the Agentic Agent
Agentic AI could be the breakthrough everyone wants, but it’s limited by the rigid policies and systems in customer service.
April 21, 2025

Last month at Enterprise Connect, vendors showcased incredible demos where AI agents performed as well as (or better than) human agents. These demos typically involve a frustrated or confused customer who needs assistance. The agentic-AI-powered bot accurately understands the issue and accesses multiple systems that ensure a satisfied customer.
The nature of digital customer service means there’s no hold time or need for the usual “call volumes are higher than normal” recording. Generative AI makes the conversation natural. The multi-task nature of these agentic demos eliminates the need to transfer the customer and have them retell their tale. Security authentication can be automated or made smoother, since many people would rather share secrets, like an ATM PIN, with a machine than with a human.
Agentic AI could be the breakthrough everyone wants: a technology that aligns with the cost-cutting pressures of the business. Enterprises are drooling over the potential savings from reducing or eliminating customer service agents. It may be possible to lower operational expenses and simultaneously deliver improved customer satisfaction.
Comparable CSAT, But It Will Cost Less
I’ve got news for you, Sunshine. It will fail in most cases, even with the best agentic AI technologies. Most believe the limitation is the agentic AI technology itself. Even in its early stages, agentic AI technology is truly impressive. The real limitations are the rigid policies and systems in customer service.
AI agents offer automation and scalability, but they will, in most cases, reference and implement policies and systems that are bad for customer service. Today's mostly human-powered customer service is terrible. Forrester studied it and concluded that customer experience in the U.S. continues to decline, sitting at an all-time low.
The goal of most agentic AI implementations in CX is not to improve overall customer service but to deliver comparable CX at a lower cost. When deployed at scale, agentic AI is a force multiplier. Unfortunately, it will multiply the effects of bad service. Even worse, nobody will blame the real culprit: the policies and systems that limit agents (human and automated); they’ll blame the tech and the human champions who implemented it.
Personally, I am frequently frustrated after an interaction with a contact center, but it’s not the agents that upset me. I know this because they often send me a survey about my experience, and I can’t wait to let them have it. But I don’t. That’s because the surveys generally target the helpful agent who said there was nothing he or she could do to help me. The issue is the systems, processes, and policies of the modern contact center (and the surveys don’t ask about those). Half the time, the agent agrees that my issue or request is perfectly reasonable.
Here’s a simple example. My airline ticket had my first name misspelled. This was causing a surprising number of issues and was preventing flights from counting toward loyalty programs. Service agents are unable to edit the name field of a booked ticket. The folks who make these types of corrections are not in the call center (nor at the airport). The agent suggested I claim the miles after the flight and hope the gate agent doesn’t notice the misspelling. At best, an agentic AI agent may be just as useless, but it could be much more bothered by the misspelling.
Still Need Humans in the Loop
A bot can deliver a better, more efficient experience than human agents. There are indeed times I prefer to deal with the bot. But there are also times when I know my situation is too unusual for a bot; most automated solutions are designed for common issues. In these cases, my goal is for the bot to escalate me to a human agent to whom I can explain my situation.
In Google’s AI demonstration at its Next conference earlier this month, the bot reached its limit when a customer asked for a steep discount. But instead of escalating the customer to a human agent, the customer was effectively put on hold while the agentic bot escalated itself to a human supervisor. The human supervisor approved a (lower) discount that the bot took to the customer and closed the deal (hooray). It was a beautiful and relatable journey, but there are other ways this tale could have unfolded.
My concern, as a customer, is that they automated away the “escalate to human” routine! This is the same thing car dealers do when negotiating the price of a car, an experience no one enjoys. The salesperson, supposedly on the customer’s side, ferries offers and counteroffers to and from a mysterious decision-making sales manager. Like the right to face my accuser, I want to talk to the sales manager directly.
My concern lies in policies that are not flexible enough for common sense. Consider something like the 30-day return policy. "Hello, I would like to return something I bought 35 days ago. It was lost in shipping and just arrived. I no longer need it." A human agent could agree that the delayed package warrants being flexible about the refund window; the agentic AI probably won’t.
Advent of Faster and Worse Service?
Unusual situations are difficult to automate in any CX system. These situations are especially challenging for customers to navigate because modern contact center flows are designed to filter out common requests and escalate exceptions. Trivial barriers, like “our call volumes are higher than usual,” are designed either to raise the cost for the customer (“Ugh, never mind”) or to get them to switch to a less busy timeslot. Agentic AI doesn’t need those crutches, so it will deliver bad service faster and more easily. Agentic AI will generally attempt to resolve more use cases and escalate to humans less frequently.
As remarkable and impressive as agentic AI technology is, it isn’t designed to identify and improve bad customer service policies. Enterprises need to be prepared to reexamine policies and workflows, or they will see a big drop in CSAT once human judgment is excised from CX.
Forrester reported the obvious: customer service is bad – and that was in the pre-agentic AI era. Anyone hoping to use agentic AI to solve their CX challenges needs to think through what’s broken and fix it before amplifying it.
--
Dave Michels is a contributing editor and Analyst at TalkingPointz.