Experiments with HoloLens, Bot Framework, LUIS and Speech Recognition

Recently I had the opportunity to use a HoloLens device for some personal training and to build some simple demos.

One of the scenarios that I find very intriguing is the possibility of integrating Mixed Reality and Artificial Intelligence (AI) in order to create immersive experiences for the user.

I decided to perform an experiment integrating a Bot, the Language Understanding Intelligent Service (LUIS), Speech Recognition and Mixed Reality via a Holographic 2D app.

The idea was to create a sort of “digital assistant” of myself that can be contacted using Mixed Reality: the first implementation contains only basic interactions (answering questions like “What are your favourite technologies?” or “What’s your name?”), but these could easily be expanded in the future with features like time management (via the Graph APIs), tracking project status, etc.

Creating the LUIS application

To start, I created a new LUIS application in the portal with a list of intents that needed to be handled:
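As an example, the list could contain intents along these lines (the names below are placeholders of mine, reused in the later code sketches):

- Greeting (“Hi”, “Hello”)
- WhoAreYou (“What’s your name?”)
- FavouriteTechnologies (“What are your favourite technologies?”)
- None (the fallback intent for everything else)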

In the future, this could be further extended with extra capabilities.

After defining the intents and utterances, I trained and published my LUIS app to Azure and copied the key and endpoint URL for use in my Bot:
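The published endpoint follows the standard LUIS v2.0 query format, with region, application ID and subscription key as placeholders:

```
https://<region>.api.cognitive.microsoft.com/luis/v2.0/apps/<appId>?subscription-key=<subscriptionKey>&verbose=true&q=<utterance>
```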

Creating the Bot

I proceeded with the creation of the Bot using the Microsoft Bot Framework, downloading the Visual Studio template and creating a new project:

The Bot template already defined a dialog named RootDialog, so I extended the generated project with the classes required for parsing the JSON returned by the LUIS endpoint:
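A minimal sketch of these classes, matching the LUIS v2.0 JSON schema (the class names are my own), could be:

```csharp
using System.Collections.Generic;
using Newtonsoft.Json;

// Maps the JSON returned by the published LUIS endpoint (v2.0 schema).
public class LuisResult
{
    [JsonProperty("query")]
    public string Query { get; set; }

    [JsonProperty("topScoringIntent")]
    public LuisIntent TopScoringIntent { get; set; }

    [JsonProperty("intents")]
    public List<LuisIntent> Intents { get; set; }

    [JsonProperty("entities")]
    public List<LuisEntity> Entities { get; set; }
}

public class LuisIntent
{
    [JsonProperty("intent")]
    public string Intent { get; set; }

    [JsonProperty("score")]
    public double Score { get; set; }
}

public class LuisEntity
{
    [JsonProperty("entity")]
    public string Entity { get; set; }

    [JsonProperty("type")]
    public string Type { get; set; }

    [JsonProperty("score")]
    public double Score { get; set; }
}
```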

I then processed the various LUIS intents in RootDialog (another option is using the LuisDialog and LuisModel classes, as explained here):
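A simplified sketch of the dialog could look like this. The intent names follow the hypothetical list above, and the LuisEndpointUrl app setting is my own placeholder for where the endpoint URL copied from the LUIS portal is stored:

```csharp
using System;
using System.Configuration;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;
using Newtonsoft.Json;

[Serializable]
public class RootDialog : IDialog<object>
{
    public Task StartAsync(IDialogContext context)
    {
        context.Wait(MessageReceivedAsync);
        return Task.CompletedTask;
    }

    private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
    {
        var activity = await result as Activity;

        // Query the published LUIS endpoint with the user's message.
        // "LuisEndpointUrl" is an assumed app setting containing the URL copied from the portal.
        using (var client = new HttpClient())
        {
            var url = ConfigurationManager.AppSettings["LuisEndpointUrl"]
                      + Uri.EscapeDataString(activity.Text);
            var json = await client.GetStringAsync(url);
            var luisResult = JsonConvert.DeserializeObject<LuisResult>(json);

            string reply;
            switch (luisResult.TopScoringIntent?.Intent)
            {
                case "Greeting":
                    reply = "Hi! Ask me something about myself.";
                    break;
                case "WhoAreYou":
                    reply = "I'm a digital assistant.";
                    break;
                case "FavouriteTechnologies":
                    reply = "My favourite technologies are Mixed Reality and AI.";
                    break;
                default:
                    reply = "Sorry, I didn't understand that.";
                    break;
            }

            await context.PostAsync(reply);
        }

        context.Wait(MessageReceivedAsync);
    }
}
```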

Then, I tested the implementation using the Bot Framework Emulator:

I then created a new Bot definition in the Bot Framework portal.

After that, I published it to Azure, updating the Web.config with the generated Microsoft App ID and password:
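The relevant appSettings section of Web.config looks roughly like this, with the placeholders replaced by the values generated in the portal:

```xml
<appSettings>
  <!-- Values generated when registering the Bot in the portal -->
  <add key="BotId" value="YourBotId" />
  <add key="MicrosoftAppId" value="YourMicrosoftAppId" />
  <add key="MicrosoftAppPassword" value="YourMicrosoftAppPassword" />
</appSettings>
```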

Since the final goal was communication with a UWP HoloLens application, I enabled the Direct Line channel:

Creating the Holographic 2D app

Windows 10 UWP apps are executed on the HoloLens device as Holographic 2D apps that can be pinned in the environment.

I created a new project using the default Visual Studio Template:

And then added some simple text controls in XAML to receive the input and display the response from the Bot:
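A minimal sketch of the markup inside the page could be the following (the control names are placeholders I chose for this example and are referenced in the code-behind below):

```xml
<StackPanel Margin="24" VerticalAlignment="Center">
    <!-- Shows the text recognised from the user's voice -->
    <TextBlock x:Name="RecognizedText" FontSize="24" TextWrapping="Wrap" />
    <!-- Shows the answer returned by the Bot -->
    <TextBlock x:Name="BotResponseText" FontSize="24" TextWrapping="Wrap" Margin="0,12,0,0" />
    <!-- Starts a new speech recognition session -->
    <Button x:Name="AskButton" Content="Ask" Click="AskButton_Click" Margin="0,24,0,0" />
</StackPanel>
```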

I decided to use the SpeechRecognizer APIs for receiving the input via voice (another option could be the use of Cognitive Services):
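A simplified sketch using the Windows.Media.SpeechRecognition APIs, wired to the hypothetical controls above (the Microphone capability must be declared in Package.appxmanifest):

```csharp
using Windows.Media.SpeechRecognition;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class MainPage : Page
{
    public MainPage()
    {
        this.InitializeComponent();
    }

    private async void AskButton_Click(object sender, RoutedEventArgs e)
    {
        // Requires the Microphone capability in Package.appxmanifest.
        using (var recognizer = new SpeechRecognizer())
        {
            // Compile the default dictation grammar before starting recognition.
            await recognizer.CompileConstraintsAsync();

            SpeechRecognitionResult speechResult = await recognizer.RecognizeAsync();

            if (speechResult.Status == SpeechRecognitionResultStatus.Success)
            {
                RecognizedText.Text = speechResult.Text;

                // Forward the recognised text to the Bot via Direct Line (see SendToBot below).
                BotResponseText.Text = await SendToBot(speechResult.Text);
            }
        }
    }
}
```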

The SendToBot() method makes use of the Direct Line APIs which permit communication with the Bot using the channel previously defined:
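A sketch of SendToBot() based on the Microsoft.Bot.Connector.DirectLine client library, continuing the same page class (the Direct Line secret and the user id are placeholders):

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Connector.DirectLine;
using Windows.UI.Xaml.Controls;

public sealed partial class MainPage : Page
{
    // Placeholder: the secret generated when enabling the Direct Line channel.
    private const string DirectLineSecret = "YOUR_DIRECT_LINE_SECRET";
    private const string UserId = "HoloLensUser";

    private async Task<string> SendToBot(string message)
    {
        using (var client = new DirectLineClient(DirectLineSecret))
        {
            // Start a new conversation on the Direct Line channel.
            var conversation = await client.Conversations.StartConversationAsync();

            // Post the user's message to the Bot.
            await client.Conversations.PostActivityAsync(
                conversation.ConversationId,
                new Activity
                {
                    Type = ActivityTypes.Message,
                    From = new ChannelAccount(UserId),
                    Text = message
                });

            // Read back the activities and return the latest Bot reply.
            // A real app would poll with the returned watermark until the reply arrives.
            var activitySet = await client.Conversations.GetActivitiesAsync(conversation.ConversationId);
            var botReply = activitySet.Activities.LastOrDefault(a => a.From.Id != UserId);

            return botReply?.Text ?? string.Empty;
        }
    }
}
```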

And then I had the app running on HoloLens, using speech recognition for the input and interfacing with a Bot that relies on LUIS for language understanding:

The source code of the project is available on GitHub here.

Happy coding!

Microsoft Bot Framework: showing a welcome message at the start of a new conversation

Recently I’ve worked on some projects related to the Bot Framework and enjoyed the functionality that makes it possible to automate actions in response to user interactions.

It is important to provide the user with a great experience: one “nice touch” can be achieved by showing a welcome message at the beginning of a new conversation.

The first solution I tried was triggering the welcome message in the ConversationUpdate activity:
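My first attempt looked roughly like this, a simplified version of the system-message handling generated by the Bot template:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Connector;

private async Task HandleSystemMessage(Activity message)
{
    if (message.Type == ActivityTypes.ConversationUpdate)
    {
        // Naive approach: send the welcome message for every ConversationUpdate activity.
        var connector = new ConnectorClient(new Uri(message.ServiceUrl));
        var reply = message.CreateReply("Hi, I'm the Bot. Welcome to the conversation!");
        await connector.Conversations.ReplyToActivityAsync(reply);
    }
}
```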

When I ran this, the message was presented twice in the Bot Framework Emulator:

After some investigation, I discovered that the ConversationUpdate activity is triggered both when the connection to the Bot is established and when a new user joins the conversation.

As explained on GitHub, the correct way to handle this case is by showing the welcome message only when a new user is added:
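A sketch of the corrected handling, following that approach (still a simplified version of the template's system-message handler):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Connector;

private async Task HandleSystemMessage(Activity message)
{
    if (message.Type == ActivityTypes.ConversationUpdate)
    {
        // Send the welcome only when a member other than the Bot itself joins:
        // the Bot's own join also raises a ConversationUpdate, which caused the duplicate message.
        if (message.MembersAdded != null &&
            message.MembersAdded.Any(member => member.Id != message.Recipient.Id))
        {
            var connector = new ConnectorClient(new Uri(message.ServiceUrl));
            var reply = message.CreateReply("Hi, I'm the Bot. Welcome to the conversation!");
            await connector.Conversations.ReplyToActivityAsync(reply);
        }
    }
}
```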

Using this approach, the welcome message is displayed properly:

Happy coding!