
DevOps for Chatbots: How?

I’m currently building a bot that has a centralized backend and runs on both Facebook Messenger and Slack. It’s a relatively small project, but I’m really struggling to maintain separate Prod/QA/Dev stages for my bot. I’m using API.ai for my NLP service, which we train over time.

I’m curious if anyone has experience building enterprise chatbots, and what the best strategies are for testing, continuous integration, version-controlling training sets, and DevOps in general. Would love to hear the community’s thoughts.

This is a genius idea! I am going to totally do this for my company! Thanks so much for this :smiley: I was going to build a whole bunch of build scripts and other admin tools but this just makes the most sense!

@jburchett I’m confused, what are you going to do?

DevOps-related tasks. Instead of making more build scripts and admin tools to make life easier for restarting the server, pushing code, etc., I am going to make a Slack bot that I can just talk to and that will do it for me.

So instead of having to SSH into a server, all I’ll have to do is tell the bot to do it.

I see. My main question is about having different production/qa/dev stages for a bot. If I have a bot out in the public, I don’t want to make changes to my API.ai training (that might break something) that the production environment is pointing to. Do you have any thoughts on the best way to have a distinct production environment, separate from your testing environment? All of my resources are deployed to AWS Lambdas.

Oh! Gotcha. So right now my chat bots support Telegram, Kik and Facebook Messenger. The way I handle the different environments is using property files. I have a bunch of development scripts that launch the server using the properties file that contains all the info about a particular server.
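The property-file approach described above can be sketched in Python. The file naming scheme (`dev.properties`, `prod.properties`) and the `[server]` section are my assumptions for illustration, not details from this post:

```python
import configparser


def load_environment(env_name):
    """Load server settings from a per-environment properties file.

    Hypothetical naming scheme: one file per stage, e.g. dev.properties,
    qa.properties, prod.properties, each with a [server] section holding
    everything the launch script needs to know about that server.
    """
    config = configparser.ConfigParser()
    found = config.read(f"{env_name}.properties")
    if not found:
        raise FileNotFoundError(f"No properties file for environment '{env_name}'")
    return config["server"]
```

A launch script would then pick the environment from a command-line argument or an environment variable and start the server with the returned settings, so the same codebase serves every stage.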

On Facebook Messenger I create two bots, one for test and another for production. The test one is restricted to people who are added as “Testers” in the Facebook app, while the other bot, after passing the submission process, is available to everyone.

On Kik I also create two bots. One is for test (by default, all Kik bots are not visible until they pass the submission process), and then I have a live Kik bot that has passed the submission process.

On Telegram I have created two bots, one for test and one for production. Sadly there is no way to stop both bots from being visible, so for the test one I make it require a special command to be performed in order to activate the bot… a command only the devs would know.

That is my current process, and I’m trying to refine it as I go.

@jburchett That’s awesome. For Facebook Messenger, did you try doing “Create Test App” under your production app, or did you create a completely separate bot (separate Page) for your test bot?

Yes to both of those things! I use test apps for the bot and always unpublish the test Facebook Page. I add my devs as admins to the test Page so they are the only ones who can see the Page and bot.

I was having issues actually finding my test bot once I created it in the Facebook developer portal. Like, how do I start chatting with the test version of my bot? I didn’t know where to find the “Testing” page, so I just ended up creating a completely separate bot just for testing, with a different Page and everything.

Exactly, that is what I did as well. Each bot needs to have a Page linked to it, so you would essentially have a bot for each phase of development: Test, Stage, and Production.

I wish they made it easier… But that’s Facebook for yah!

This is why we have moved away from Wit.ai and really all other third-party services. We have several enterprise clients, and this was huge for them. So now all our conversations are stored as serialized objects, and the same goes for all the NLP training, so they can be revisioned and pushed to any environment very easily. One of the bots we are building currently has 60k+ responses, and it only takes seconds to deploy any updates. All the NLP is loaded into memory on deploy during the Jenkins process.
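A minimal sketch of what storing training data as version-controlled serialized objects could look like. The intent-to-utterances JSON shape here is an assumption; the post doesn’t describe the actual format:

```python
import json

# Hypothetical on-disk format: one JSON file per training set, committed to Git
# so every change is diffable and can be promoted dev -> stage -> prod.
TRAINING_FILE = "training_set.json"


def save_training_set(examples, path=TRAINING_FILE):
    """Serialize training data deterministically so Git diffs stay clean."""
    with open(path, "w") as f:
        json.dump(examples, f, indent=2, sort_keys=True)


def load_training_set(path=TRAINING_FILE):
    """Load the whole training set into memory at deploy time."""
    with open(path) as f:
        return json.load(f)
```

Sorted keys and fixed indentation matter here: they keep the serialized file stable between runs, so a commit only shows lines that actually changed.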

We also have tools that can move mongo/redis/rabbitmq down the prod->stage->dev chain. All have unique webhook URLs that can then be pushed out for QA.

Another cool tool we use is our playback tool. Since we store every webhook request that hits our servers, we can easily play back a conversation and detect any errors. We do this for any major release that goes into prod. It doesn’t help us with new functionality, though.
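The playback idea can be sketched like this: append every incoming webhook payload to a log, then replay the log through the bot’s handler and collect any failures. Names like `WebhookRecorder` are hypothetical, not from the post:

```python
import json


class WebhookRecorder:
    """Record every webhook payload to a JSON-lines log, then replay it later."""

    def __init__(self, log_path="webhook_log.jsonl"):
        self.log_path = log_path

    def record(self, payload):
        # One JSON object per line, appended as requests arrive.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(payload) + "\n")

    def replay(self, handler):
        """Re-run every stored request through the handler; collect failures.

        Returns a list of (line_number, exception) pairs so a QA pass over a
        prod release can pinpoint exactly which stored request broke.
        """
        errors = []
        with open(self.log_path) as f:
            for line_no, line in enumerate(f, 1):
                payload = json.loads(line)
                try:
                    handler(payload)
                except Exception as exc:
                    errors.append((line_no, exc))
        return errors
```

As the post notes, this only catches regressions in existing conversations; brand-new functionality has no recorded traffic to replay.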

Interesting that you moved away from Wit.ai. So all of your NLP is in house?

Yeah, sorta. Wit.ai just wasn’t working for what we needed to do. Their move to Stories was a step in the right direction, but it was clunky at best a couple of months ago, so we rolled our own with the help of open-source NLP libraries.

That said, Wit.ai was pretty nice to train, and for free-text scenarios it worked great. I still tool around with it and follow how it’s progressing.

I should also say we have bots in the wild that are still using it today because they were built with that integration in place. :slight_smile:

Same here! I am completely new to this and found it odd… why create two of the same bot in this manner?

You create two because you need each bot to point to the correct server. It wouldn’t be a good idea for your live bot and your test bot to pull from the same server.

Does that mean I can’t develop a bot on localhost? Do I have to have two live servers?

Here is what we have done at Success.ai in terms of DevOps. (Stay tuned for a blog post detailing our engineering practices.)

We have decoupled our webhooks from our message-processing engine (which in your case is API.ai, Wit.ai, LUIS, or custom) and have a Kafka cluster sitting in between. This allows us to change, re-deploy, and reconfigure our message-processing engine without losing messages or having our webhooks or servers go down. During a deployment, the message-processing engine is temporarily unavailable (for seconds) as the previous version goes down and the new version comes up. In case of issues, the rollback is handled the same way: the new version goes down, and the old version comes back up and listens to the queue. (Kafka is awesome for this; more on it in our blog post.)
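Kafka itself needs a running broker, but the decoupling pattern can be sketched with an in-process `queue.Queue` standing in for the Kafka topic. The `processing_engine` below is just a placeholder consumer (not API.ai/Wit.ai/LUIS), and all names are illustrative:

```python
import queue
import threading

# Stand-in for the Kafka topic sitting between webhooks and the engine.
messages = queue.Queue()


def webhook_handler(payload):
    """Webhook side: enqueue and return immediately.

    The endpoint stays up even while the processing engine is being
    redeployed -- messages just accumulate in the queue.
    """
    messages.put(payload)


def processing_engine(stop_event, results):
    """Consumer side: can be stopped and restarted (deploy or rollback)
    without losing messages; anything enqueued in the meantime waits."""
    while not stop_event.is_set():
        try:
            payload = messages.get(timeout=0.1)
        except queue.Empty:
            continue
        results.append(payload["text"].upper())  # placeholder for real NLP work
```

With real Kafka, consumer offsets give you the same guarantee across processes: a rolled-back engine version resumes from wherever the failed version left off.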

Training models, data, intents, etc. are all in Git and version-controlled. We have separate versions for Dev, QA, Staging, and Production.

All keys and configs are also in Git and versioned. (How to implement this depends on the programming language you are working with.)

For Facebook we don’t have a test version of our app. Each developer creates their own Facebook Page and app. The team uses ngrok, and each developer changes the webhook URL to point to ngrok on their own machine for their own app, using their own keys and data.

All incoming messages and our NLP results are persisted, as well as session content. This allows us to replay a scenario and investigate issues quickly.
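One way to persist each conversation turn for later replay, assuming a simple JSON-lines store with one record per turn (the actual schema isn’t given in the post, and the function names are hypothetical):

```python
import json


def persist_turn(store_path, message, nlp_result, session):
    """Append one conversation turn: raw input, NLP output, and a session
    snapshot, so the whole scenario can be stepped through later."""
    record = {"message": message, "nlp": nlp_result, "session": session}
    with open(store_path, "a") as f:
        f.write(json.dumps(record) + "\n")


def replay_turns(store_path):
    """Yield stored turns in order for step-by-step investigation."""
    with open(store_path) as f:
        for line in f:
            yield json.loads(line)
```

Capturing the session snapshot alongside the NLP result is the useful part: when a bug surfaces, you can see exactly what state the bot was in when it misread an input, not just the input itself.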
