Experiments with HoloLens, Bot Framework and LUIS: adding text to speech

Previously I blogged about creating a Mixed Reality 2D app integrating with a Bot using LUIS via the Direct Line channel available in the Bot Framework.

I decided to add more interactivity to the app by also enabling text-to-speech for the messages received from the Bot: this required adding a new MediaElement for the speech synthesiser to the main XAML page:
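A minimal sketch of the element added to the page (the `x:Name` is illustrative, not necessarily the one used in the original project):

```xml
<!-- MediaElement used by the speech synthesiser to play the generated audio stream.
     Collapsed because it produces audio only, no visuals. -->
<MediaElement x:Name="MediaElementTextToSpeech"
              AutoPlay="True"
              Visibility="Collapsed" />
```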

Then I initialized a new SpeechSynthesizer at the creation of the page:
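The initialisation can be sketched as follows, assuming the synthesiser is created once in the page constructor and reused:

```csharp
using Windows.UI.Xaml.Controls;
using Windows.Media.SpeechSynthesis;

public sealed partial class MainPage : Page
{
    // Created once at page creation and reused for every Bot response
    private readonly SpeechSynthesizer speechSynthesizer;

    public MainPage()
    {
        this.InitializeComponent();
        speechSynthesizer = new SpeechSynthesizer();
    }
}
```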

And added a new Speech() method using the media element:
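A sketch of the method, assuming the MediaElement in XAML is named `MediaElementTextToSpeech`:

```csharp
using System.Threading.Tasks;
using Windows.Media.SpeechSynthesis;

private async Task Speech(string text)
{
    if (string.IsNullOrWhiteSpace(text))
        return;

    // Generate an audio stream from the text received from the Bot...
    SpeechSynthesisStream stream =
        await speechSynthesizer.SynthesizeTextToStreamAsync(text);

    // ...and play it through the MediaElement declared in the XAML page
    MediaElementTextToSpeech.SetSource(stream, stream.ContentType);
    MediaElementTextToSpeech.Play();
}
```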

When a new response is received from the Bot, the new Speech() method is called:

And then the recognition for a new phrase is started again via the MediaEnded event to simulate a conversation between the user and the Bot:
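The handler can be sketched like this (the name of the recognition method is illustrative):

```csharp
using Windows.UI.Xaml;

// Subscribed in the page constructor:
// MediaElementTextToSpeech.MediaEnded += MediaElementTextToSpeech_MediaEnded;
private async void MediaElementTextToSpeech_MediaEnded(object sender, RoutedEventArgs e)
{
    // When the Bot has finished "speaking", start listening for the next phrase
    await StartRecognitionAsync();
}
```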

As usual, the source code is available for download on GitHub.

Microsoft Bot Framework: using a LuisDialog for processing intents

In the previous post, I blogged about integrating a Holographic 2D app with Bot Framework and LUIS.

I also spent some time going through these great samples for some presentations and found a very nice implementation for handling the Bot messages when LUIS intents are recognised.

Instead of using the exposed LUIS endpoint and parsing the returned JSON, the framework already provides a specific LuisDialog<> type which can be used to handle the various intents, making the code cleaner and more extensible.

I’ve then modified the HoloLensBotDemo sample and added a new RootLuisDialog:
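A sketch of the dialog, following the Bot Builder v3 pattern; the model id, key, intent name and replies are placeholders, not the original project's values:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("<LUIS app id>", "<LUIS subscription key>")]
[Serializable]
public class RootLuisDialog : LuisDialog<object>
{
    // Called when no intent matches the user's phrase
    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Sorry, I didn't understand that.");
        context.Wait(MessageReceived);
    }

    // One method per LUIS intent (intent name and reply are illustrative)
    [LuisIntent("GetName")]
    public async Task GetName(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("I'm the digital assistant of my creator.");
        context.Wait(MessageReceived);
    }
}
```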

And this is all the code now needed for the Bot.

The updated source code is available on GitHub.

Happy coding!


Experiments with HoloLens, Bot Framework, LUIS and Speech Recognition

Recently I had the opportunity to use a HoloLens device for some personal training and building some simple demos.

One of the scenarios that I find very intriguing is the possibility of integrating Mixed Reality and Artificial Intelligence (AI) in order to create immersive experiences for the user.

I decided to perform an experiment by integrating a Bot, Language Understanding Intelligent Services (LUIS), Speech Recognition and Mixed Reality via a Holographic 2D app.

The idea was to create a sort of “digital assistant” of myself that can be contacted using Mixed Reality: the first implementation contains only basic interactions (answering questions like “What are your favourite technologies?” or “What’s your name?”), but these could easily be expanded in the future with features like time management (via the Graph APIs) or tracking project status.

Creating the LUIS application

To start, I created a new LUIS application in the portal with a list of intents that needed to be handled:

In the future, this could be further extended with extra capabilities.

After defining the intents and utterances, I trained and published my LUIS app to Azure and copied the key and URL for usage in my Bot:

Creating the Bot

I proceeded with the creation of the Bot using the Microsoft Bot Framework, downloading the Visual Studio template and creating a new project:

The Bot template already defined a dialog named RootDialog so I extended the generated project with the classes required for parsing the JSON from the LUIS endpoint:
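The parsing classes can be sketched as follows, mirroring the shape of the LUIS response (simplified; property names follow the LUIS JSON schema):

```csharp
// Classes used to deserialise the JSON returned by the LUIS endpoint
public class LuisResponse
{
    public string query { get; set; }
    public LuisResponseIntent[] intents { get; set; }
    public LuisResponseEntity[] entities { get; set; }
}

public class LuisResponseIntent
{
    public string intent { get; set; }
    public double score { get; set; }
}

public class LuisResponseEntity
{
    public string entity { get; set; }
    public string type { get; set; }
    public double score { get; set; }
}
```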

And then processed the various LUIS intents in RootDialog (another option is the usage of the LuisDialog and LuisModel classes as explained here):
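A sketch of the message handler in RootDialog; the endpoint URL, intent names and replies are placeholders, not the original values:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;
using Newtonsoft.Json;

[Serializable]
public class RootDialog : IDialog<object>
{
    // Endpoint copied from the LUIS portal (placeholder), ending with "&q="
    private const string LuisEndpoint = "<LUIS endpoint URL>&q=";

    public Task StartAsync(IDialogContext context)
    {
        context.Wait(MessageReceivedAsync);
        return Task.CompletedTask;
    }

    private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
    {
        var activity = await result as Activity;

        using (var client = new HttpClient())
        {
            // Query LUIS with the text received from the user
            var json = await client.GetStringAsync(
                LuisEndpoint + Uri.EscapeDataString(activity.Text));
            var luisResponse = JsonConvert.DeserializeObject<LuisResponse>(json);

            // The intents collection is ordered by score: handle the top one
            var topIntent = luisResponse.intents?.FirstOrDefault()?.intent;
            string reply;
            switch (topIntent)
            {
                case "GetName":
                    reply = "I'm the digital assistant of my creator.";
                    break;
                default:
                    reply = "Sorry, I didn't understand that.";
                    break;
            }
            await context.PostAsync(reply);
        }

        context.Wait(MessageReceivedAsync);
    }
}
```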

Then, I tested the implementation using the Bot Framework Emulator:

And created a new Bot definition in the framework portal.

After that, I published it to Azure, updating the Web.config with the generated Microsoft App ID and password:
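The relevant Web.config section looks like this (the values are placeholders for those generated when registering the Bot in the portal):

```xml
<appSettings>
  <!-- Values generated when registering the Bot in the framework portal -->
  <add key="BotId" value="YourBotId" />
  <add key="MicrosoftAppId" value="YourMicrosoftAppId" />
  <add key="MicrosoftAppPassword" value="YourMicrosoftAppPassword" />
</appSettings>
```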

Since the final goal was communication with a UWP HoloLens application, I enabled the Direct Line channel:

Creating the Holographic 2D app

Windows 10 UWP apps are executed on the HoloLens device as Holographic 2D apps that can be pinned in the environment.

I created a new project using the default Visual Studio Template:

And then added some simple text controls in XAML to receive the input and display the response from the Bot:
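A minimal sketch of the controls (the names `TextCommand` and `TextOutputBot` are illustrative):

```xml
<StackPanel Margin="20">
    <!-- Shows the phrase recognised from the user's voice -->
    <TextBlock x:Name="TextCommand" FontSize="24" TextWrapping="Wrap" />
    <!-- Shows the reply received from the Bot -->
    <TextBlock x:Name="TextOutputBot" FontSize="24" TextWrapping="Wrap" />
</StackPanel>
```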

I decided to use the SpeechRecognizer APIs for receiving the input via voice (another option could be the usage of Cognitive Services):
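The recognition step can be sketched like this, assuming the control names from the XAML page and a `SendToBot()` helper:

```csharp
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

private async Task StartRecognitionAsync()
{
    var speechRecognizer = new SpeechRecognizer();

    // Compile the default dictation constraints before listening
    await speechRecognizer.CompileConstraintsAsync();

    // Listen for a single phrase from the user
    SpeechRecognitionResult result = await speechRecognizer.RecognizeAsync();

    if (result.Status == SpeechRecognitionResultStatus.Success)
    {
        TextCommand.Text = result.Text;
        await SendToBot(result.Text);
    }
}
```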

The SendToBot() method makes use of the Direct Line APIs which permit communication with the Bot using the channel previously defined:
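A sketch of the method using the Direct Line v3 client library; the secret and the user id are placeholders, and a real implementation would reuse the conversation and poll with a watermark:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Connector.DirectLine;

private async Task SendToBot(string message)
{
    // Secret generated when enabling the Direct Line channel (placeholder)
    var client = new DirectLineClient("<Direct Line secret>");
    var conversation = await client.Conversations.StartConversationAsync();

    // Send the recognised text to the Bot
    var activity = new Activity
    {
        From = new ChannelAccount("HoloLensUser"),
        Text = message,
        Type = ActivityTypes.Message
    };
    await client.Conversations.PostActivityAsync(conversation.ConversationId, activity);

    // Retrieve the activities in the conversation and show the Bot's reply
    var activitySet = await client.Conversations.GetActivitiesAsync(conversation.ConversationId);
    var reply = activitySet.Activities.LastOrDefault(a => a.From.Id != "HoloLensUser");
    if (reply != null)
    {
        TextOutputBot.Text = reply.Text;
    }
}
```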

And then I got the app running on HoloLens and interfacing with a Bot using LUIS for language understanding and Speech recognition:

The source code of the project is available on GitHub here.

Happy coding!

Microsoft Bot Framework: showing a welcome message at the start of a new conversation

Recently I’ve worked on some projects related to the Bot Framework and enjoyed the functionality that makes it possible to automate actions in response to user interactions.

It is important to provide the user with a great experience: one “nice touch” can be achieved by providing a welcome message at the beginning of a new conversation.

The first solution I tried was triggering the welcome message in the ConversationUpdate activity:
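A sketch of that first attempt in the MessagesController of a Bot Builder v3 project (the welcome text is illustrative):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
{
    if (activity.Type == ActivityTypes.Message)
    {
        await Conversation.SendAsync(activity, () => new Dialogs.RootDialog());
    }
    else if (activity.Type == ActivityTypes.ConversationUpdate)
    {
        // First attempt: reply on every ConversationUpdate activity
        var connector = new ConnectorClient(new Uri(activity.ServiceUrl));
        var reply = activity.CreateReply("Welcome! Ask me something.");
        await connector.Conversations.ReplyToActivityAsync(reply);
    }

    return Request.CreateResponse(HttpStatusCode.OK);
}
```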

When I ran this, the message was presented twice in the Bot Framework Emulator:

After some investigation, I discovered that the ConversationUpdate activity is triggered both when the connection to the Bot is established and when a new user joins the conversation.

As explained on GitHub, the correct way to handle this case is by showing the welcome message only when a new user is added:
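The corrected handler can be sketched as follows: since `activity.Recipient` is the Bot itself in a ConversationUpdate, the message is sent only when someone else joins:

```csharp
using System;
using System.Linq;
using Microsoft.Bot.Connector;

else if (activity.Type == ActivityTypes.ConversationUpdate)
{
    // Reply only when a member other than the Bot itself is added to the conversation
    if (activity.MembersAdded != null &&
        activity.MembersAdded.Any(member => member.Id != activity.Recipient.Id))
    {
        var connector = new ConnectorClient(new Uri(activity.ServiceUrl));
        var reply = activity.CreateReply("Welcome! Ask me something.");
        await connector.Conversations.ReplyToActivityAsync(reply);
    }
}
```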

Using this approach the welcome message is displayed properly:

Happy coding!

Microsoft Azure Multi-Factor Authentication (MFA): useful links

Today I did some research about Azure MFA and found these links that I would like to share:

Introduction to Azure Multi-Factor authentication, overview and get started

Azure MFA Pricing details

Enable mobile app authentication with Azure Multi-Factor Authentication Server

Secure Terminal Services (RDP) using Azure Multi-Factor Authentication (MFA) – Part 1 and Part 2

Step-By-Step: Securing a Xamarin app with Azure AD’s Multi-factor Authentication

Simon in the cloud: deploying your existing Silverlight application to the Windows Azure platform

Windows Azure July 2009 CTP is now available: you can download the Windows Azure Tools for Microsoft Visual Studio from here.

It’s now possible to associate an ASP.NET Web Application project in a Cloud Service solution as a Web Role: this makes it pretty simple to deploy a Silverlight Web application to Windows Azure without making any changes.

To try this new feature, I’ve deployed the Silverlight version of Simon (I already blogged about this cool project by David J Kelley), you can try it here: http://azuretestapp.cloudapp.net/.

How to achieve this task?

First of all, install the Windows Azure Tools for Microsoft Visual Studio July 2009 CTP and then request a token in order to create your Azure application (detailed information is available in this great tutorial).

Then, open your Silverlight solution and add a new "Cloud Service" project (I've named it "SimonAzureService").

[Screenshot: SilverlightAzureFig01]

Inside this new project, right-click the "Roles" section, select "Add" -> "Web Role Project in solution", and choose your Silverlight Web application project ("SimonSilverlight.Web").

[Screenshot: SilverlightAzureFig02]

Done! You can now right-click the "Cloud Service" project and publish it using the Windows Azure portal.

[Screenshot: SilverlightAzureFig03]

A quick tip: if your Silverlight plug-in is hosted on a simple HTML page (Index.html, for example), you have to modify the web.config file and add the following setting in the <system.webServer> section:

<defaultDocument>
  <files>
    <add value="Index.html" />
  </files>
</defaultDocument>

All the source code is available for download from the project site on CodePlex.