Building an intelligent Node.js Chatbot with Azure Bot Service

Overview

Chatbots are quickly evolving from little toys into the first touchpoint for new and existing customer interactions. Thanks to significant advances in natural language processing, bots are becoming more and more “human-like”, and it will become increasingly difficult to distinguish them from humans. The rapidly growing feature set of Azure's Cognitive Services is accelerating this evolution even further.
In this post, we will walk through building a chatbot based on Azure’s Bot Service using Node.js. We will look into:
  • Deploying a Node.js Chatbot on Azure Bot Service
  • Setting up Continuous Deployment with GitHub
  • Setting up a staging environment for testing
  • Creating a local development environment and testing the bot locally
  • Enabling local debugging with Visual Studio Code

Getting Started

Firstly, create a “Web App Bot” from the Azure Marketplace:
Select Bot Service
Next, we set the name and resource group for our new bot. In this case, we choose the “Node.js Language understanding” template, which will allow us to perform natural language processing out of the box.
Create Bot Service with the Node.js template
Alternatively, you can deploy the bot as a “Function Bot”, which deploys a serverless instance of the bot. As of February 2018, this does not support deployment slots which makes handling a staging/dev and production environment a bit more cumbersome.
Let’s have a look at our resource group:
Our bot resources
Here’s a quick summary of what each of these services does:
  • Web App Bot – This is our bot and most of our time will be spent here
  • App Service – The service our bot runs on
  • App Service Plan – The pricing tier for our bot's App Service; can be used to scale our bot up and/or out
  • Storage Account – Storage that our bot can use, e.g., for persisting state data
  • Application Insights – Gives us details about the performance of our Bot Service
As long as our App Service Plan still has spare capacity, we can of course deploy additional Bot Services onto it and share resources.

Testing our Bot in the Web Chat

In order to give our newly provisioned bot a quick test-drive, navigate into the Bot Service and select “Test in Web Chat”:
Test in Web Chat allows us to test our bot directly in Azure
Our bot comes pre-configured with a simple language understanding model running in LUIS. For bots deployed in one of the Azure US regions, we can find the associated model by logging in at luis.ai; for bots deployed in one of the Azure EU regions, we can locate the model at eu.luis.ai.
Before we actually start writing some code, let’s configure Continuous Deployment.

Setting up Continuous Deployment (CD)

Next, we’ll set up Continuous Deployment for our bot. For that, head to the Build section of the bot and download the source:
Configuring Continuous Deployment
Extract the source and push it to a GitHub repository; you might want to run dos2unix on most files first (ugh). Once committed, continue with “Step 3: Configure continuous deployment” and click Setup:
Selecting the deployment source
Now, we can link our Bot Service to our GitHub account, select the repository and branch:
Deployment GitHub configuration
If we navigate to our App Service, and select “Deployment options”, we can see that our deployment succeeded:
Initial deployment completed successfully
From here on, we can plug in our favorite Continuous Integration tools for automated testing and more.

Adding a Staging Deployment Slot

So far, we have one production version of our bot running, pulling from the master branch. Adding a staging deployment is obviously useful, especially with Continuous Integration in mind. For that, we can navigate to the associated App Service, select “Deployment Slots” and add a new slot:
Adding a new Deployment Slot
Initially, it might be easiest to clone the configuration of the production slot. Over time, it most likely makes more sense to decouple the staging slot from the production language model and use a separate staging model in LUIS:
Configuring our new Deployment Slot
After selecting the GitHub repository and the branch, we’re good to go. Keep in mind that some of the slot settings for production should be marked as “Slot settings”, so that they do not get swapped when promoting staging to production.
Currently, it looks like new deployment slots are not registered in Azure as fully-fledged Bot Services, but rather as regular Web Apps with a Node.js runtime. Luckily, this should not make a difference for automated testing or for using the service, as the exposed bot endpoint stays the same.

A simpler starting example

To simplify our local development setup, we can rely on a stripped-down example for our bot's app.js:
var restify = require('restify');
var builder = require('botbuilder');
var botbuilder_azure = require("botbuilder-azure");

// Setup Restify Server
var server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978, function() {
    console.log('%s listening to %s', server.name, server.url);
});

// Use an unauthenticated connector when running locally against the emulator,
// otherwise authenticate with the bot's Microsoft App credentials
var useEmulator = (process.env.NODE_ENV == 'development');
var connector = useEmulator ? new builder.ChatConnector() : new builder.ChatConnector({
    appId: process.env.MicrosoftAppId,
    appPassword: process.env.MicrosoftAppPassword,
    openIdMetadata: process.env.BotOpenIdMetadata
});

// Listen for messages from users
server.post('/api/messages', connector.listen());

var bot = new builder.UniversalBot(connector);

// Keep conversation state in memory (sufficient for this stripped-down example)
var inMemoryStorage = new builder.MemoryBotStorage();
bot.set('storage', inMemoryStorage);

// Build the LUIS endpoint URL from our application settings
var luisAppId = process.env.LuisAppId;
var luisAPIKey = process.env.LuisAPIKey;
var luisAPIHostName = process.env.LuisAPIHostName || 'westus.api.cognitive.microsoft.com';
const LuisModelUrl = 'https://' + luisAPIHostName + '/luis/v1/application?id=' + luisAppId + '&subscription-key=' + luisAPIKey;

// Main dialog with LUIS
var recognizer = new builder.LuisRecognizer(LuisModelUrl);
var intents = new builder.IntentDialog({
        recognizers: [recognizer]
    })
    .matches('Greeting', (session) => {
        session.send('You reached Greeting intent, you said \'%s\'.', session.message.text);
    })
    .matches('Help', (session) => {
        session.send('You reached Help intent, you said \'%s\'.', session.message.text);
    })
    .matches('Cancel', (session) => {
        session.send('You reached Cancel intent, you said \'%s\'.', session.message.text);
    })
    .onDefault((session) => {
        session.send('Sorry, I did not understand \'%s\'.', session.message.text);
    });
bot.dialog('/', intents);
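If we later extend the LUIS model with additional intents and entities, we can handle them in the same way. Here is a minimal sketch, assuming a hypothetical “BookFlight” intent with a “Destination” entity in the LUIS model:
// Hypothetical intent and entity, not part of the default template
intents.matches('BookFlight', (session, args) => {
    // Look for a "Destination" entity in the LUIS result
    var destination = builder.EntityRecognizer.findEntity(args.entities, 'Destination');
    if (destination) {
        session.send('Looking for flights to %s.', destination.entity);
    } else {
        session.send('Which destination would you like to fly to?');
    }
});
builder.EntityRecognizer.findEntity simply returns the first entity of the given type from the entities LUIS recognized.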

Setting up a local development environment

Running our bot locally is straightforward, as it is just a Node.js application using the Bot Framework. However, since our bot relies on LUIS for natural language understanding, we need to set the LUIS endpoint hostname and keys in our environment. LUIS is not available locally, so we rely on its cloud service during development as well:
$ npm install
$ export NODE_ENV=development
$ export LuisAppId=<Your LUIS Application Id>
$ export LuisAPIKey=<Your LUIS API Key>
$ export LuisAPIHostName=<Your LUIS Endpoint Hostname>
$ node app.js
restify listening to http://[::]:3978
We can find our settings under our Bot Service in “Application Settings”:
Retrieving our LUIS details
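With these values exported, we can also query the LUIS endpoint directly to inspect what the model returns for a given utterance. The following is a minimal sketch that reuses the same URL as in our app.js and appends the utterance as the q parameter, which is roughly what builder.LuisRecognizer does under the hood:
// Quick sketch: send an utterance to the LUIS endpoint and print the raw result
// (assumes the same environment variables as app.js are set)
var https = require('https');

var luisAppId = process.env.LuisAppId;
var luisAPIKey = process.env.LuisAPIKey;
var luisAPIHostName = process.env.LuisAPIHostName || 'westus.api.cognitive.microsoft.com';
var luisModelUrl = 'https://' + luisAPIHostName + '/luis/v1/application?id=' + luisAppId + '&subscription-key=' + luisAPIKey;

var utterance = 'hello bot';
https.get(luisModelUrl + '&q=' + encodeURIComponent(utterance), function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        // The response contains the recognized intents and entities for the utterance
        console.log(JSON.stringify(JSON.parse(body), null, 2));
    });
});
This is a handy way to verify that our LUIS settings are correct before starting the bot itself.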
The easiest way to interact with our bot locally is the Bot Framework Emulator, which allows us to test the bot similarly to the “Test in Web Chat” functionality in the Azure Portal. Once our bot is running, we can connect to it via:
http://localhost:3978/api/messages
Bot Framework Emulator in action
We can take this one step further and debug our code directly in, for example, Visual Studio Code. To do this, open the bot's source code folder in Code, switch to the debug view and add a new “Launch Configuration”:
New Launch Configuration
We can use the following Launch Configuration to load our required environment variables from a file called config.env:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "program": "${workspaceFolder}/app.js",
      "envFile": "${workspaceFolder}/config.env"
    }
  ]
}
Lastly, let’s create a our config file and make sure we don’t check it into git by accident:
$ touch config.env
$ echo "NODE_ENV=development" >> config.env
$ echo "LuisAppId=<Your LUIS Application Id>" >> config.env
$ echo "LuisAPIKey=<Your LUIS API Key>" >> config.env
$ echo "LuisAPIHostName=<Your LUIS Endpoint Hostname>" >> config.env
$ echo "config.env" >> .gitignore
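The resulting config.env simply contains one KEY=value pair per line, which is the format Visual Studio Code expects for the envFile setting:
NODE_ENV=development
LuisAppId=<Your LUIS Application Id>
LuisAPIKey=<Your LUIS API Key>
LuisAPIHostName=<Your LUIS Endpoint Hostname>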
Finally, we can start debugging our bot directly from Visual Studio Code and use the Bot Framework Emulator to interact with it:
Debugging our Bot
Last but not least, we can commit our .vscode/launch.json configuration to git.

Summary

In this post, we looked into how we can quickly deploy an intelligent chatbot using Node.js and Azure Bot Service. We started with the “Node.js Language Understanding” template to get a ready-to-go setup. Next, we configured Continuous Deployment from GitHub and showed how local development, testing and debugging are easily possible with Visual Studio Code.
If you have any questions, feel free to reach out to me on Twitter @clemenssiebler.
