
How/when to loop in a human agent during a hybrid approach


(walker) #1

A hybrid form of customer service is one of the most popular applications of chatbots and something that’s been discussed several times on this forum. I have two questions in this regard.

The first is: how do you prefer to make your human agents aware of a conversation they need to take over from the bot? Is it based on customer responses that reveal some sort of distress/dissatisfaction, or on a button they can press (“Talk to human”)? Or is it when your bot can’t match a response to one of its intents, or loops through an attempted flow a few times with no resolution?

And secondly: what are the mechanics of that human intervention? How is the human agent made aware of a “flagged” conversation, and from where does that human respond? You could respond via the command line, and there are some platforms with larger core functionality that could also be used for this purpose, such as Slack or FrontApp. But I’m wondering if any of you know of alternative solutions that are more narrowly dedicated to this area (something perhaps like Dashbot)?


(Thomas Kapp) #2

I think this is a critical area for the current, not-so-smart bots. Whenever the bot gets lost and cannot help the user, a human clearly needs to step in.

From some trials with our bots and from playing around with other bots, my gut feeling is this: users need to be made aware once their conversation is being handed over to a human, whether by pressing the mentioned “Talk to human” button or by the bot saying “I think I can’t help you with this, can I hand over to my human colleague?” Which one is best needs to be tested.

If this handover happens without the user knowing, it doesn’t feel right. The same is true if the handover takes a considerable amount of time without the user being told that it will. I had this experience with the Facebook bot of a watchmaker: the bot gave some inconclusive answer, and several hours later a human answered my question more or less out of the blue.

So in summary, my take on this is: if the bot gets stuck for some reason (for example, giving the same standard answer more than once), a handover, clearly communicated to the user, can take place. The human answer has to follow immediately or, if that is not possible, the system has to let the user know when a human will step in.
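That “same standard answer more than once” rule can be sketched as a simple counter on fallback replies. This is a hypothetical illustration, not anyone’s actual implementation; the class name and the threshold `FALLBACK_LIMIT` are my own assumptions:

```python
# Sketch: escalate to a human once the bot has given its fallback
# ("I don't understand") reply too many times in a row in one session.
FALLBACK_LIMIT = 2  # assumed threshold; tune per product


class HandoverTracker:
    def __init__(self, limit=FALLBACK_LIMIT):
        self.limit = limit
        self.fallback_count = 0

    def record_bot_reply(self, matched_intent):
        """Call after every bot turn; returns True when a handover is due."""
        if matched_intent is None:      # bot could not match an intent
            self.fallback_count += 1
        else:
            self.fallback_count = 0     # reset on any successful match
        return self.fallback_count >= self.limit


tracker = HandoverTracker()
tracker.record_bot_reply("order_status")       # matched -> no handover
tracker.record_bot_reply(None)                 # first fallback
if tracker.record_bot_reply(None):             # second fallback -> escalate
    print("I think I can't help you with this, "
          "can I hand over to my human colleague?")
```

Resetting the counter on every successful match means only *consecutive* failures trigger the handover, which matches the “stuck in a loop” intuition above.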

In enterprise environments, the handover needs to be built into whichever helpdesk software a company uses. I am not aware of any solutions on the market yet - but will be curious to learn about them here.


(Tony Lucas) #3

In regards to when: you’re on the right lines. Our customers typically use the ability to detect when the bot’s got stuck to trigger something (e.g. assigning the request to a human), or requests can be assigned automatically on reaching a certain point in a conversation (e.g. you’ve collected some information and now know who best to route the request to).

I agree in general that the end user should be made aware that the conversation is being transferred to a human; it will temper their expectations on response time, but perhaps also temper their language :slight_smile:

In regards to then escalating to helpdesk tools: some of these cope better than others, depending on what API/integration options are available. We’ve done Salesforce, Smooch, and Intercom, which all handle this in one way or another. We’ve also got a Front integration on the roadmap, so tools like that are worth looking at as well.

As an aside, one of the cooler modules we built was the ability to customise the response depending on day/time, so out of office hours the bot could respond differently. That has proven very useful.
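A day/time switch like the module described above can be approximated in a few lines. This is a generic sketch, not that actual module; the office hours and message wording are invented for illustration:

```python
from datetime import datetime, time

# Assumed office hours for illustration: Mon-Fri, 09:00-17:30 local time.
OPEN, CLOSE = time(9, 0), time(17, 30)


def handover_message(now=None):
    """Pick the handover reply based on whether an agent is likely available."""
    now = now or datetime.now()
    in_hours = now.weekday() < 5 and OPEN <= now.time() <= CLOSE
    if in_hours:
        return "Let me hand you over to a colleague - they'll reply shortly."
    return "Our team is away right now; a human will follow up next business day."


# Out-of-hours example (a Saturday evening):
print(handover_message(datetime(2017, 6, 3, 20, 15)))
```

The point is simply that the handover copy itself carries the expectation-setting Thomas asked for: in hours it promises a quick reply, out of hours it promises a follow-up instead.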


(walker) #4

Interesting. What about a more fluid hybrid scenario whereby it’s not necessarily a single hand-off from the bot to a human agent, but rather an “overseeing” human who dips in and out when needed? Would it be sufficient to make the customer aware of this set-up from the outset?


(walker) #5

Thanks for your perspective. What I found was that Salesforce and Front seemed better for slower, “ticketing” responses, whereas I was looking for something a little bit more like an all-seeing dashboard built for speed and scale. Intercom might be closest.


(Tony Lucas) #6

Yeah, I think it’s fair to say Intercom is more IM-focussed, while Salesforce as standard is more ticket-focussed (we built our own messaging module for it). Front is a bit of a curious mix, but leans more towards a ticket focus.


(Thomas Kapp) #7

I guess the hand-off depends on the number of conversations running. At the currently low volumes, the overseeing human seems totally reasonable. Once bots take over call-center-like functions, this might no longer work. But the idea should surely work for small-volume bots with high-quality conversations. We will play around with it a little, also with Intercom, thanks to Tony’s post.


(l.r.henrickson) #8

I’m a bit surprised that no one has mentioned the potential ethical issues with human intervention. Perhaps the ethical issues are a bit less drastic for reference/retail chatbots, but for chatbots that address more sensitive matters (like Joy, a mental health Facebook Messenger chatbot [https://www.facebook.com/hellojoyai]), the ethical issues of intervention become more explicit.
I agree with walker’s approach of allowing customers/users to provide explicit consent to move the conversation from bot to human, but perhaps there could be a disclaimer when the user first begins using the chatbot, so that a switch from bot to human could be more seamless? I’m not sure.
I’m really interested to see where this conversation goes and how the ethical issues could be overcome while maintaining a positive user experience. :slight_smile:


(szesze20) #9

Does your work involve Salesforce’s Einstein AI?


(Vik Kimyani) #10

I’m seeing this thread a bit late :slight_smile: but I’ve come across this a few times. On the bots where we do a handoff, we provide a chat log to Oracle Service Cloud and inform the user that the bot needs help from an agent. The integration is done with a bit of Node and isn’t that tricky.

It can also be a human overseer that steps in as needed.

If using some other service, the most useful things are for the agent to have the chat history, and for the user to know the bot is stuck or at its limit and needs help from a person.
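Handing the transcript along with the escalation is mostly a matter of serialising the turns into whatever payload the helpdesk API accepts. A generic sketch follows; the field names are invented for illustration and are not Oracle Service Cloud’s (or any vendor’s) actual schema:

```python
import json


def build_handover_payload(session_id, turns, reason="bot_stuck"):
    """Bundle the chat log so the human agent sees full context on pickup."""
    return json.dumps({
        "session_id": session_id,
        "escalation_reason": reason,  # e.g. repeated fallback, user request
        "transcript": [
            {"speaker": who, "text": text} for who, text in turns
        ],
    })


payload = build_handover_payload(
    "abc-123",
    [("user", "Where is my order?"),
     ("bot", "Sorry, I didn't understand that.")],
)
```

Whatever tool receives it, including an `escalation_reason` alongside the raw transcript saves the agent from having to re-diagnose why the bot gave up.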

It’s also interesting to provide the ability to ask for the user’s consent; it makes sense in some contexts.