Criminalizing the Robot Once Again
You may have seen the story that we retweeted the other day about the call center agent in New York who was suspended for answering phones using a robot voice. If not, you can check it out on the SmartAction Twitter.
The story is certainly funny (though not for everyone: Mr. Dillon was suspended without pay for 20 days), but the details that come out of it are genuinely interesting in the context of the customer experience.
The first takeaway is that callers are smarter and more informed about technology and its limitations than ever before. The story about the caller who hung up when she heard ‘the robot’ shows that customers have preconceived notions about good and bad technology. Whether or not callers know that call automation technology is even called an “IVR,” they do know a bad one when they hear it. An IVR with such a robotic voice is a troubling sign, especially on a help desk call, which often requires a certain level of personalization and critical thinking. In fact, these are some of the most complex calls to automate; when this caller hung up upon hearing the robotic voice, she was probably smart to do so (given her perception that she was speaking to a ‘robot’). Ironically, it was not a robot on the other end of the line. It was a human agent.
Another conclusion worth noting is that Mr. Robot (Dillon) was “disgruntled,” according to the presiding judge, because he felt he was overqualified and, more importantly for this conversation, not properly trained for his help desk role. With proper training, this entire situation might have been avoided; and, frankly, a lack of agent training is unacceptable these days. One of the key insights we have learned is that by the time customers reach an agent, they are often already frustrated and demanding. With the growing adoption of highly functional self-service channels, customers generally revert to a live agent only when they are unable to complete a task or are frustrated by some aspect of their interaction with the company. With this in mind, it is critical that when those irritated callers finally reach an agent, that agent is well trained: knowledgeable, empathetic, and capable. Mr. Dillon does not seem to possess these traits. He shoulders much of the blame for that, but so does the organization, which failed to give him the training he needed to do his job happily and well.
All in all, the fact that a judge heard the case and determined that a robotic voice, which may or may not have been put on to deliberately confuse callers, was “unacceptable and unprofessional” is quite striking. It is curious that mimicking a bad robot voice is punishable by suspension, while retaining a terrible IVR, which sounds more or less like the robotic voice Mr. Dillon was using, is perfectly acceptable. The terrible IVR can actually perform fewer and less complex tasks than even the poorly trained Mr. Dillon, yet this wouldn’t be a story at all if people had complained to the NYC Department of Health and Mental Hygiene about a robotic-sounding IVR. We are by no means condoning Mr. Dillon’s behavior, nor are we disagreeing with his employer’s decision to discipline him. But the case does provide an interesting and relevant parallel to the world of automation.