Ambience; photo by Igor Kasalovic, Unsplash
Soon there will be an additional in-app
customer service channel.
So far we have a whole range of service channels, most of them requiring the user to leave the app to:
- Pick up the phone for a call
- Browse for self-support
- Open an additional chat window
- Take to the social media channels
- Move on to messenger applications
- How about getting into your car to drive to a store?
- Etc.
A customer may then move back and forth between these channels, with all the potential of losing track of the incident status and all the friction that cross-channel customer service still causes.
There is no doubt that providing in-app support is the best way to offer fast issue resolution. The app can supply telemetry information from within itself and identify the user, and therefore provides a lot of relevant context that makes it easier for a service agent to help the customer without unnecessary delays. This shift of customers toward emphasizing the “Now” is also seen in Google research.
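To make that concrete, here is a minimal sketch of what an in-app support request might carry. The `SupportRequest` shape and `buildSupportRequest` helper are hypothetical illustrations, not any particular vendor's SDK:

```typescript
// Hypothetical shapes -- an illustration of the context an in-app
// channel can attach, which a phone call or web chat cannot.
interface Telemetry {
  appVersion: string;
  osVersion: string;
  screen: string;         // the screen the user was on when asking for help
  recentErrors: string[]; // last error codes logged inside the app
}

interface SupportRequest {
  userId: string;  // the app already knows who the user is
  message: string; // what the customer actually asked
  telemetry: Telemetry;
}

// Assemble a request carrying context the agent would otherwise have to ask for.
function buildSupportRequest(userId: string, message: string, t: Telemetry): SupportRequest {
  return { userId, message, telemetry: t };
}

const request = buildSupportRequest("user-4711", "Payment keeps failing", {
  appVersion: "3.2.1",
  osVersion: "iOS 10.3",
  screen: "checkout",
  recentErrors: ["PAYMENT_DECLINED_42"],
});
```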
Not Every Device is a Smartphone
But what if the customer cannot pick up the
phone to engage in a typed conversation? The customer might be engaged in a VR
game, or driving a car, or in any number of situations without having a free
hand.
Maybe the customer simply doesn’t want to
pick up a phone?
What if the app doesn’t offer a user interface at all beyond a little light that indicates ‘I am available’? This would be the situation in, for example, an ambient environment that senses the presence of a person and acts accordingly.
An environment like this would mainly be voice- and gesture-controlled via devices like Amazon’s Alexa, Google Home, Apple’s HomePod, or Microsoft’s upcoming Home Hub. Systems like these will, at best, offer a keyboard as a secondary means to access service and support.
But there is no need to look that far out. Imagine a gaming situation. Neither an Xbox nor a PlayStation controller, nor any other major controller, offers a keyboard. In a console, VR, or AR game, the user holds the controller and doesn’t have the leeway to reach for a keyboard.
So why would these platforms offer a keyboard to enable conversational (or other) in-app support? There is no reason.
Instead, users will interact with the service system and the service agent via gesture-, gaze-, and speech-based interfaces.
The Next In-App Support Channel Is Voice
Voice recognition technologies are maturing rapidly and are starting to achieve human-level accuracy in understanding at least the English language.
Human-level understanding lies at a word error rate (WER) of about 5 per cent, which means that a human on average gets five out of a hundred words wrong by misrecognizing a word, missing one, or falsely inserting one.
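In other words, WER is the number of substitutions, deletions, and insertions divided by the number of words in the reference transcript. A small sketch of that computation as a word-level edit distance (a textbook formulation, not tied to any particular speech product):

```typescript
// Word error rate: (substitutions + deletions + insertions) / reference length,
// computed as word-level Levenshtein distance between reference and hypothesis.
function wordErrorRate(reference: string, hypothesis: string): number {
  const ref = reference.toLowerCase().split(/\s+/).filter(w => w.length > 0);
  const hyp = hypothesis.toLowerCase().split(/\s+/).filter(w => w.length > 0);

  // dp[i][j] = minimum edits to turn the first i reference words
  // into the first j hypothesis words.
  const dp: number[][] = Array.from({ length: ref.length + 1 }, (_, i) =>
    Array.from({ length: hyp.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );

  for (let i = 1; i <= ref.length; i++) {
    for (let j = 1; j <= hyp.length; j++) {
      const substitution = dp[i - 1][j - 1] + (ref[i - 1] === hyp[j - 1] ? 0 : 1);
      const deletion = dp[i - 1][j] + 1;  // a reference word was missed
      const insertion = dp[i][j - 1] + 1; // an extra word was recognized
      dp[i][j] = Math.min(substitution, deletion, insertion);
    }
  }
  return dp[ref.length][hyp.length] / ref.length;
}

// "please reset my password" vs. "please reset the password" -> 1/4 = 25% WER
console.log(wordErrorRate("please reset my password", "please reset the password"));
```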
Machines arrive at human-level language understanding; source: 2017 Internet Trends Report
While Amazon doesn’t give any numbers on Alexa’s capabilities, Microsoft announced that it had reached a WER of 5.9 per cent in October 2016; IBM then beat that with a WER of 5.5 per cent in March 2017. And Google announced in May 2017 that it had reached a WER of 4.9 per cent.
Using speech recognition on a level like this, in combination with natural language processing (NLP) and perhaps natural language generation (NLG), in-app service conversations can become both very personal and very immersive.
And very effective. Speaking is the form of
communication that comes easiest to humans. Speaking and conversations are
tightly linked.
Last but not least, it is also a very efficient way of providing customer service, as the human rate of exchanging information is highest when speaking.
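Put together, one turn of such a voice support loop might look like the sketch below. The `transcribe`, `detectIntent`, `compose`, and `synthesize` functions are stand-ins for whatever STT, NLP, NLG, and TTS services one would actually plug in; they are assumptions for illustration:

```typescript
// A sketch of the voice service loop: speech in, speech out.
// Each stage is a placeholder for a real STT / NLP / NLG / TTS service.
type Audio = ArrayBuffer;

interface Intent { name: string; confidence: number; }

async function transcribe(speech: Audio): Promise<string> {
  /* speech-to-text service call would go here */
  return "my order has not arrived";
}

async function detectIntent(text: string): Promise<Intent> {
  /* NLP / intent-detection service call would go here */
  return { name: "order_status", confidence: 0.93 };
}

async function compose(intent: Intent): Promise<string> {
  /* NLG step: turn the resolved intent into a conversational answer */
  return "Your order shipped yesterday and should arrive tomorrow.";
}

async function synthesize(text: string): Promise<Audio> {
  /* text-to-speech service call would go here */
  return new ArrayBuffer(0);
}

// One turn of the conversation: no keyboard, no screen required.
async function handleVoiceTurn(speech: Audio): Promise<Audio> {
  const text = await transcribe(speech);
  const intent = await detectIntent(text);
  const answer = await compose(intent);
  return synthesize(answer);
}
```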
Bringing the Human Back to Customer Service
Right now, customer service 2.0, the automation of customer service, seems to be moving from call deflection via self-service, and then via bots, toward customer service 3.0.
Helped by the increasing maturity of NLP and intent-detection technology, this iteration will move bots from the second row into the front row. They will become the primary customer support interface. Instead of using a search box on an FAQ, customers will ask a bot for help via a chat interface. The bot itself will then be able to answer the question using, for example, an FAQ or a database, or escalate the question to a human operator. The distinction between self-service and aided customer service will first blur, then vanish. The machine takes care of the simpler problems, the human of the more difficult ones.
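A minimal sketch of that routing decision follows. The keyword-based FAQ lookup and the confidence threshold are illustrative assumptions, not a specific product’s behavior:

```typescript
// Route a question: answer from the FAQ when the bot is confident,
// otherwise hand the conversation to a human agent.
interface BotAnswer { text: string; confidence: number; }

const faq = new Map<string, BotAnswer>([
  ["reset password", { text: "Open Settings > Account > Reset password.", confidence: 0.95 }],
]);

function lookupFaq(question: string): BotAnswer | undefined {
  // Toy keyword match; a real bot would use NLP-based intent detection.
  for (const [key, answer] of faq) {
    if (question.toLowerCase().includes(key)) return answer;
  }
  return undefined;
}

const CONFIDENCE_THRESHOLD = 0.8; // illustrative cut-off, tuned in practice

function handleQuestion(question: string): string {
  const answer = lookupFaq(question);
  if (answer && answer.confidence >= CONFIDENCE_THRESHOLD) {
    return answer.text; // the machine takes the simpler problem
  }
  return escalateToHuman(question); // the human takes the harder one
}

function escalateToHuman(question: string): string {
  // In a real system this would enqueue the conversation, with its
  // transcript and telemetry, for a human operator.
  return `Connecting you to an agent about: "${question}"`;
}
```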
With further refinement of natural language processing, text-to-speech and speech-to-text technologies, and intent detection, typing will give way to speech again, introducing customer service 4.0.
Customer service will fully turn into a conversation, and it will no longer matter whether it is synchronous or asynchronous.
It will also no longer matter whether customer service is delivered by a bot or a human; either way, it will appear human. And that might have an impact on the call center and its operation itself, which we will look at in a separate post.
The bottom line is: Customer service will
be humanized again.