Analysing .NET Core applications using NDepend

I previously blogged about NDepend and its capabilities to perform static code analysis and obtain in-depth information about code metrics via its CQLinq features.

In the past I have used NDepend to analyse the codebases of .NET apps with great results, always obtaining advanced suggestions on how to improve the code and often highlighting issues that could have harmed maintainability.

Recently, I was contacted by Patrick Smacchia, creator of NDepend, to provide some feedback on the latest release, so I thought it was a good opportunity to dig deeper into the features available in the v2018.1 public release https://www.ndepend.com/whatsnew.

I was very pleased to verify the support available for .NET Core, including .NET Core 2.1 Preview.

To test it, I downloaded a trial of NDepend from here and installed it on my system running Windows 10 April 2018 Update with Visual Studio 2017 version 15.7.0: everything worked well and the NDepend menu appeared when launching the IDE.

Exploring new .NET Core 2.1 APIs

Since I had recently tried .NET Core 2.1.0 Preview 2, I explored the differences between this version and version 2.0.7, available on my machine under the folder C:\Program Files\dotnet\shared\Microsoft.NETCore.App: using the NDepend->Diff menu, I selected the assemblies related to the two versions.

And I was able to obtain a list of the new public methods available in the latest release:
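
A CQLinq query along these lines can produce that list (a sketch, assuming the 2.0.7 assemblies are set as the diff baseline; the query generated by NDepend may differ slightly):

// New public methods compared with the baseline (here the .NET Core 2.0.7 assemblies)
from m in Application.Methods
where m.WasAdded() && m.IsPubliclyVisible
select new { m, m.ParentAssembly }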

I then obtained a detailed view of other metrics using the NDepend dashboard, including a complex dependency map:

It’s possible to extract a wealth of information about the differences between the two .NET Core versions, including assemblies, public methods, types and fields added or removed, plus more.

A useful article available on the NDepend blog explains the different options available, and additional in-depth articles are referenced below:

As a next step, I created a new blank ASP.NET Core Web Application and ran a code analysis, obtaining a list of queries and rules starting from the NDepend dashboard:

It is possible to discover a variety of rules covering object-oriented design, breaking changes, naming conventions, source file organisation, architecture improvements and more to enhance the codebase of the .NET Core app: this is a real time saver when exploring new codebases or performing code reviews.

A comprehensive list of the new features available in NDepend v2018.1 is available here, including support for .NET Core and a DDD ubiquitous language check, as explained in depth on the official blog.

Happy coding!

Analysing visual content using HoloLens, Computer Vision APIs, Unity and the Windows Mixed Reality Toolkit

These days, I’m exploring the combination of HoloLens/Windows Mixed Reality and the capabilities offered by Cognitive Services to analyse and extract information from images captured via the device camera and processed using the Computer Vision APIs and the intelligent cloud.
In this article, we’ll explore the steps I followed for creating a Unity application running on HoloLens and communicating with the Microsoft AI platform.

Registering for Computer Vision APIs

The first step was to navigate to the Azure portal https://portal.azure.com and create a new Computer Vision API resource:

I noted down the Keys and Endpoint and started investigating how to approach the code for capturing images on HoloLens and sending them to the intelligent cloud for processing.

Before creating the Unity experience, I decided to start with a simple UWP app for analysing images.

Writing the UWP test app and the shared library

There are already some samples available for the Cognitive Services APIs, so I decided to reuse some existing code described in this article, supplemented by some camera capture UI in UWP.

I created a new Universal Windows app and library (CognitiveServicesVisionLibrary) to provide, respectively, a test UI and some reusable code that could be referenced later by the HoloLens experience.

The Computer Vision APIs can be accessed via the package Microsoft.ProjectOxford.Vision available on NuGet so I added a reference to both projects:

The test UI contains an image and two buttons: one for selecting a file using a FileOpenPicker and another for capturing a new image using the CameraCaptureUI. I decided to wrap these two actions in an InteractionsHelper class:
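
A minimal sketch of what such a helper could look like (the class and method names here are illustrative, not necessarily the original implementation):

using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Storage;
using Windows.Storage.Pickers;

public static class InteractionsHelper
{
    public static async Task<StorageFile> PickImageAsync()
    {
        // Let the user choose an existing picture from the Pictures library
        var picker = new FileOpenPicker
        {
            ViewMode = PickerViewMode.Thumbnail,
            SuggestedStartLocation = PickerLocationId.PicturesLibrary
        };
        picker.FileTypeFilter.Add(".jpg");
        picker.FileTypeFilter.Add(".png");

        return await picker.PickSingleFileAsync();
    }

    public static async Task<StorageFile> CapturePhotoAsync()
    {
        // Capture a new photo using the built-in camera UI
        var captureUI = new CameraCaptureUI();
        captureUI.PhotoSettings.Format = CameraCaptureUIPhotoFormat.Jpeg;

        return await captureUI.CaptureFileAsync(CameraCaptureUIMode.Photo);
    }
}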

I then worked on the shared library creating a helper class for processing the image using the Vision APIs available in Microsoft.ProjectOxford.Vision and parsing the result.

Tip: after creating the VisionServiceClient, I received an unauthorised error when specifying only the key; the error disappeared once I also specified the endpoint URL available in the Azure portal.
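
A sketch of the resulting helper in the shared library, passing both values to the client (the helper name is illustrative, and the region/API version segment of the endpoint depends on the resource created in the portal):

using System.IO;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;
using Windows.Storage;

public static class VisionApiHelper
{
    // Key and endpoint copied from the Azure portal (adapt the region to your own resource)
    private const string SubscriptionKey = "<your-key>";
    private const string ApiRoot = "https://westeurope.api.cognitive.microsoft.com/vision/v1.0";

    public static async Task<AnalysisResult> AnalyzeImageAsync(StorageFile file)
    {
        var client = new VisionServiceClient(SubscriptionKey, ApiRoot);

        using (var stream = await file.OpenStreamForReadAsync())
        {
            // Request the natural-language description and the tags for the image
            var features = new[] { VisualFeature.Description, VisualFeature.Tags };
            return await client.AnalyzeImageAsync(stream, features);
        }
    }
}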

I then launched the test UI and the image was successfully analysed, with the Computer Vision APIs identifying a building and several other tags like outdoor, city and park: great!

I also added a Speech Synthesizer playing the general description returned by the Cognitive Services call:
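
A minimal sketch of that step, assuming a MediaElement named mediaElement is available in the page:

using Windows.Media.SpeechSynthesis;
using Windows.UI.Xaml.Controls;

private async void SpeakDescription(string description, MediaElement mediaElement)
{
    using (var synthesizer = new SpeechSynthesizer())
    {
        // Generate an audio stream from the caption and play it through the MediaElement
        SpeechSynthesisStream stream = await synthesizer.SynthesizeTextToStreamAsync(description);
        mediaElement.SetSource(stream, stream.ContentType);
        mediaElement.Play();
    }
}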

I then moved to HoloLens and started creating the interface using Unity, the Mixed Reality Toolkit and UWP.

Creating the Unity HoloLens experience

First of all, I created a new Unity project using Unity 2017.2.1p4 and then added a new folder named Scenes and saved the active scene as CognitiveServicesVision Scene.

I downloaded the corresponding version of the Mixed Reality Toolkit from the releases section of the GitHub project https://github.com/Microsoft/MixedRealityToolkit-Unity/releases and imported the toolkit package HoloToolkit-Unity-2017.2.1.1.unitypackage using the menu Assets->Import Package->Custom package.

Then, I applied the Mixed Reality Project settings using the corresponding item in the toolkit menu:

And selected the Scene Settings adding the Camera, Input Manager and Default Cursor prefabs:

And finally set the UWP capabilities as I needed access to the camera for retrieving the image, the microphone for speech recognition and internet client for communicating with Cognitive Services:

I was then ready to add the logic to retrieve the image from the camera, save it to the HoloLens device and then call the Computer Vision APIs.

Creating the Unity Script

The CameraCaptureUI UWP API is not available in HoloLens, so I had to research a way to capture an image from Unity, save it to the device and then convert it to a StorageFile ready to be used by the CognitiveServicesVisionLibrary implemented as part of the previous project.

First of all, I enabled the Experimental (.NET 4.6 Equivalent) Scripting Runtime version in the Unity player to use features like async/await. Then, I enabled the PicturesLibrary capability in the Publishing Settings to save the captured image to the device.

Then, I created a Scripts folder and added a new PhotoManager.cs script taking as a starting point the implementation available in this GitHub project.

The script can be attached to a TextMesh component visualising the status:

Initialising the PhotoCapture API available in Unity (https://docs.unity3d.com/Manual/windowsholographic-photocapture.html):
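
A simplified sketch of that initialisation (the actual PhotoManager.cs from the referenced project is more complete; here the highest supported camera resolution is used):

using System.Linq;
using UnityEngine;
using UnityEngine.XR.WSA.WebCam;

public class PhotoManager : MonoBehaviour
{
    private PhotoCapture photoCapture;
    private CameraParameters cameraParameters;

    private void Start()
    {
        // Pick the highest resolution supported by the HoloLens locatable camera
        var resolution = PhotoCapture.SupportedResolutions
            .OrderByDescending(r => r.width * r.height)
            .First();

        cameraParameters = new CameraParameters
        {
            hologramOpacity = 0.0f,
            cameraResolutionWidth = resolution.width,
            cameraResolutionHeight = resolution.height,
            pixelFormat = CapturePixelFormat.BGRA32
        };

        PhotoCapture.CreateAsync(false, captureObject =>
        {
            photoCapture = captureObject;
            photoCapture.StartPhotoModeAsync(cameraParameters, result =>
            {
                // The camera is now ready: TakePhoto() can be triggered by the Describe keyword
            });
        });
    }
}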

Saving the photo to the pictures library folder and then passing it to the library created in the previous section:
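
Continuing the same sketch, TakePhotoAsync() writes the image to disk; the file is then copied to the Pictures library and handed to the shared UWP library (VisionApiHelper is the illustrative helper from the earlier sketch, and the file naming is arbitrary):

public void TakePhoto()
{
    var fileName = string.Format("capture_{0:yyyyMMddHHmmss}.jpg", System.DateTime.Now);
    var filePath = System.IO.Path.Combine(Application.persistentDataPath, fileName);

    photoCapture.TakePhotoAsync(filePath, PhotoCaptureFileOutputFormat.JPG, result =>
    {
        if (result.success)
        {
#if !UNITY_EDITOR
            // Windows.Storage is only available when running on the device, not in the editor
            AnalyzeImageAsync(filePath);
#endif
        }
    });
}

#if !UNITY_EDITOR
private async void AnalyzeImageAsync(string filePath)
{
    // Copy the capture to the Pictures library and pass it to the shared UWP library
    var file = await Windows.Storage.StorageFile.GetFileFromPathAsync(filePath);
    var copy = await file.CopyAsync(Windows.Storage.KnownFolders.PicturesLibrary,
                                    file.Name,
                                    Windows.Storage.NameCollisionOption.GenerateUniqueName);

    var analysis = await CognitiveServicesVisionLibrary.VisionApiHelper.AnalyzeImageAsync(copy);
    // Update the status TextMesh and start the Text to Speech output with the result here
}
#endif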

The code references the CognitiveServicesVisionLibrary UWP library created previously: to use it from Unity, I created a new Plugins folder in my project and ensured that the Build output of the Visual Studio library project was copied to this folder:

And then set the import settings in Unity for the custom library:

And for the NuGet library too:

Nearly there! Let’s see now how I enabled Speech recognition and Tagalong/Billboard using the Mixed Reality Toolkit.

Enabling Speech

I decided to implement a very minimal UI for this project, using the speech capabilities available in HoloLens for all the interactions.

In this way, a user can simply say the word Describe to trigger the image acquisition and processing using the Computer Vision API, and then naturally listen to the results.

In the Unity project, I selected the InputManager object:

And added a new Speech Input Handler Component to it:

Then, I mapped the keyword Describe to the TakePhoto() method available in the PhotoManager.cs script, already attached to the TextMesh that I previously named Status Text Object.

The last step required enabling Text to Speech for the output: I simply added a Text to Speech component to my TextMesh:

And enabled the speech in the script using StartSpeaking():
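
A minimal sketch of that call, assuming the toolkit's TextToSpeech component has been added to the same game object as the status TextMesh (the wrapper class below is illustrative):

using HoloToolkit.Unity;
using UnityEngine;

public class SpeechOutput : MonoBehaviour
{
    private TextToSpeech textToSpeech;

    private void Awake()
    {
        // Text to Speech component added to the TextMesh in the editor
        textToSpeech = GetComponent<TextToSpeech>();
    }

    public void Speak(string message)
    {
        if (textToSpeech != null && !string.IsNullOrEmpty(message))
        {
            textToSpeech.StartSpeaking(message);
        }
    }
}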

I also added two other components available in the Mixed Reality Toolkit, Tagalong and Billboard, to have the status text follow me instead of being anchored to a specific location:

I was then able to generate the final package using Unity specifying the starting scene:

And then I deployed the solution to the HoloLens device and started extracting and categorising visual data using HoloLens, Camera, Speech and the Cognitive Services Computer Vision APIs.

Conclusions

The combination of Mixed Reality and Cognitive Services opens a new world of experiences combining the capabilities of HoloLens and all the power of the intelligent cloud. In this article, I’ve analysed the Computer Vision APIs, but a similar approach could be applied to augment Windows Mixed Reality apps and enrich them with the AI platform https://www.microsoft.com/en-gb/ai.

The source code for this article is available on GitHub: https://github.com/davidezordan/CognitiveServicesSamples

 

Exploring Windows Mixed Reality, switching between 2D / 3D and embedding Web Views

These days, I’m exploring the options for switching between Unity 3D and XAML 2D views, integrating access to UWP APIs for content hosted in Web Views.

This scenario could be particularly useful if an app needs to reuse existing code, perhaps available in a website, with the requirement to access the Windows Runtime when executed on Windows Mixed Reality devices and activated from a Unity 3D scene.

Creating the Unity project

To start, I created a new Unity scene and imported the HoloToolkit package downloaded from here.

I applied the Scene and Project settings from the HoloToolkit|Configure menu and added the following prefabs available from the imported package:

  • HoloLens Camera
  • InputManager
  • Cursor

Then, I added a simple cube which, when gazed at and air-tapped, triggers the view switching.

I defined a new script TapBehaviour to capture the event and call the 2D view:
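
A sketch of such a script, assuming the imported AppViewManager exposes the registered views through a collection and the Switch() method described below (the exact member names may differ):

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class TapBehaviour : MonoBehaviour, IInputClickHandler
{
    public void OnInputClicked(InputClickedEventData eventData)
    {
#if !UNITY_EDITOR
        // Switch from the Unity 3D scene to the 2D XAML view registered as "ContentPage"
        AppViewManager.Views["ContentPage"].Switch();
#endif
    }
}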

The interesting part here is the use of the AppViewManager for switching views: after some research, I decided to use this class from GitHub as a starting point for handling the logic via CoreApplicationView.

I created a new script in Unity (AppViewManager.cs) and added the source code from the GitHub repository.

In this way, it is possible to retrieve the list of registered views, or a specific one, using for instance:

And then simply transition to the new page by calling the Switch() or SwitchAsync() methods.

Creating and registering the 2D views

The 2D view named ContentPage needed to be defined and registered in the UWP project generated by Unity via the File|Build Settings|Build command, after setting the UWP Build Type to XAML to be able to define additional views in the project.

Then, I opened the project in Visual Studio and added a new page called ContentPage.xaml containing a WebView and a button to switch back to Unity 3D:

And the code makes use again of the AppViewManager previously imported in Unity:

We are now making use of two pages, MainPage and ContentPage, respectively hosting the 3D and 2D content. Before using them, I needed to register both in the App.cs class right after initialising the Unity player:

Accessing UWP APIs from the hosted web page

The Web View is actually showing content from this web page hosted on my personal website, which tries to access the UWP APIs if hosted with elevated permissions:

To enable Windows Runtime access, I modified the Package.appxmanifest and added the following under the Application section:

After this step, I was able to launch the app using a HoloLens device or emulator:

And then switch to the 2D view hosting the web page accessing UWP APIs on HoloLens by tapping the cube:

And return to the Unity 3D view using the “Click me” button.

As usual, the source code is available for download on GitHub.

Experiments with HoloLens, Bot Framework and LUIS: adding text to speech

Previously I blogged about creating a Mixed Reality 2D app integrating with a Bot using LUIS via the Direct Line channel available in the Bot Framework.

I decided to add more interactivity to the app by also enabling text to speech for the messages received from the Bot: this required the addition of a new MediaElement for the speech synthesiser to the main XAML page:

Then I initialized a new SpeechSynthesizer at the creation of the page:

And added a new Speech() method using the media element:

When a new response is received from the Bot, the new Speech() method is called:

And then the recognition for a new phrase is started again via the MediaEnded event to simulate a conversation between the user and the Bot:
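
A minimal sketch of that wiring (StartListeningAsync is an illustrative name for the method wrapping the existing SpeechRecognizer logic):

private async void MediaElement_MediaEnded(object sender, Windows.UI.Xaml.RoutedEventArgs e)
{
    // The Bot's reply has finished playing: resume listening for the next phrase
    // (StartListeningAsync is a placeholder for the app's speech recognition method)
    await StartListeningAsync();
}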

As usual, the source code is available for download on GitHub.

Microsoft Bot Framework: using a LuisDialog for processing intents

In the previous post, I blogged about integrating a Holographic 2D app with Bot Framework and LUIS.

I also spent some time going through these great samples for some presentations and found a very nice implementation for handling the Bot messages when LUIS intents are recognised.

Instead of using the exposed LUIS endpoint and parsing the returned JSON, it's possible to use the specific LuisDialog<> type provided by the framework to handle the various intents, making the code cleaner and more extensible.

I’ve then modified the HoloLensBotDemo sample and added a new RootLuisDialog:
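
A sketch of the shape such a dialog takes (the model id, key, intent names and replies below are placeholders rather than the ones used in the actual sample):

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("<luis-app-id>", "<luis-subscription-key>")]
[Serializable]
public class RootLuisDialog : LuisDialog<object>
{
    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Sorry, I didn't understand that.");
        context.Wait(MessageReceived);
    }

    [LuisIntent("FavouriteTechnologies")]
    public async Task FavouriteTechnologies(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("I love working with Mixed Reality, AI and Azure!");
        context.Wait(MessageReceived);
    }
}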

And this is all the code now needed for the Bot.

The updated source code is available on GitHub.

Happy coding!

 

Experiments with HoloLens, Bot Framework, LUIS and Speech Recognition

Recently I had the opportunity to use a HoloLens device for some personal training and building some simple demos.

One of the scenarios that I find very intriguing is the possibility of integrating Mixed Reality and Artificial Intelligence (AI) in order to create immersive experiences for the user.

I decided to perform an experiment by integrating a Bot, Language Understanding Intelligent Services (LUIS), Speech Recognition and Mixed Reality via a Holographic 2D app.

The idea was to create a sort of “digital assistant” of myself that can be contacted using Mixed Reality: the first implementation contains only basic interactions (answering questions like “What are your favourite technologies” or “What’s your name”) but these could easily be expanded in the future with features like time management (via the Graph APIs) or tracking project status, etc.

Creating the LUIS application

To start, I created a new LUIS application in the portal with a list of intents that needed to be handled:

In the future, this could be further extended with extra capabilities.

After defining the intents and utterances, I trained and published my LUIS app to Azure and copied the key and URL for usage in my Bot:

Creating the Bot

I proceeded with the creation of the Bot using the Microsoft Bot Framework, downloading the Visual Studio template and creating a new project:

The Bot template already defined a dialog named RootDialog so I extended the generated project with the classes required for parsing the JSON from the LUIS endpoint:

And then processed the various LUIS intents in RootDialog (another option is the usage of the LuisDialog and LuisModel classes as explained here):

Then, I tested the implementation using the Bot Framework Emulator:

And created a new Bot definition in the framework portal.

After that, I published it to Azure, updating the Web.config with the generated Microsoft App ID and password:

Since the final goal was communication with a UWP HoloLens application, I enabled the Direct Line channel:

Creating the Holographic 2D app

Windows 10 UWP apps are executed on the HoloLens device as Holographic 2D apps that can be pinned in the environment.

I created a new project using the default Visual Studio Template:

And then added some simple text controls in XAML to receive the input and display the response from the Bot:

I decided to use the SpeechRecognizer APIs for receiving the input via voice (another option could be the usage of Cognitive Services):
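
A minimal sketch of that recognition step (the surrounding UI updates and error handling are omitted):

using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

private async Task<string> RecognizeSpeechAsync()
{
    using (var recognizer = new SpeechRecognizer())
    {
        // Compile the default dictation constraints before starting a recognition session
        await recognizer.CompileConstraintsAsync();

        SpeechRecognitionResult result = await recognizer.RecognizeAsync();
        return result.Text;
    }
}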

The SendToBot() method makes use of the Direct Line APIs which permit communication with the Bot using the channel previously defined:
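
A sketch of what SendToBot() can look like with the Direct Line client (the secret and the user id are placeholders, and the real implementation also polls for new activities using a watermark):

using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Connector.DirectLine;

private async Task<string> SendToBotAsync(string message)
{
    var client = new DirectLineClient("<direct-line-secret>");
    var conversation = await client.Conversations.StartConversationAsync();

    // Post the recognised phrase to the Bot
    var activity = new Activity
    {
        From = new ChannelAccount("user"),
        Text = message,
        Type = "message"
    };
    await client.Conversations.PostActivityAsync(conversation.ConversationId, activity);

    // Read the activities posted back by the Bot and return the latest reply
    var activitySet = await client.Conversations.GetActivitiesAsync(conversation.ConversationId);
    var reply = activitySet.Activities.LastOrDefault(a => a.From.Id != "user");
    return reply?.Text;
}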

And then I got the app running on HoloLens and interfacing with a Bot using LUIS for language understanding and Speech recognition:

The source code of the project is available on GitHub here.

Happy coding!

Microsoft Bot Framework: showing a welcome message at the start of a new conversation

Recently I’ve worked on some projects related to the Bot Framework and enjoyed the functionalities which make it possible to automate actions in response to user interactions.

It is important to provide the user with a great experience: one “nice touch” can be achieved by providing a welcome message at the beginning of a new conversation.

The first solution I tried was triggering the welcome message in the ConversationUpdate activity:

When I ran this, the message was presented twice in the Bot Framework Emulator:

After some investigation, I discovered that the ConversationUpdate activity is triggered both when the connection to the Bot is established and when a new user joins the conversation.

As explained on GitHub, the correct way to handle this case is by showing the welcome message only when a new user is added:
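
A sketch of that check inside the messages controller (the welcome text is a placeholder):

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Connector;

private async Task HandleConversationUpdateAsync(Activity activity)
{
    // Post the welcome only when a member other than the bot itself joins the conversation
    if (activity.MembersAdded != null &&
        activity.MembersAdded.Any(member => member.Id != activity.Recipient.Id))
    {
        var connector = new ConnectorClient(new Uri(activity.ServiceUrl));
        var reply = activity.CreateReply("Welcome! Ask me anything about my projects.");
        await connector.Conversations.ReplyToActivityAsync(reply);
    }
}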

Using this approach the welcome message is displayed properly:

Happy coding!

Using Prism modularization in Xamarin.Forms

Recently, Prism for Xamarin.Forms 6.2.0 has been released with many notable improvements including a new bootstrapping process, AutoWireViewModel behaviour changes, deep-linking support, modularity and Prism template pack enhancements (full release notes available here).

Today, I fired up Visual Studio to have a play with this new version and decided to try the Xamarin.Forms support for Prism Modules: this is a very useful feature which makes it possible to clearly separate the various parts of the application and improve the quality, extensibility and readability of the code.

After downloading the new template pack, I created a new solution selecting New Project => Templates => Visual C# => Prism => Forms => Prism Unity App:


The new wizard is very useful and lets you select the platforms to target in the project: I selected Android, iOS and UWP, and the project was generated targeting the three platforms with a shared PCL. NuGet packages were already updated to the latest version, so no further action was needed.

While exploring the new project structure and the new modularization features, I decided to create a new Xamarin.Forms portable class library containing a module with a single View/ViewModel (SamplePage / SamplePageViewModel) visualised when a user interacts with a button on the home page.

The new module required the definition of the following class implementing the Prism IModule interface:

To keep the logic separated from the rest of the app, I decided to register the navigation type for SamplePage inside the Initialize() method of the module, which is triggered when the module loads.
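
A minimal sketch of such a module, assuming the Unity container is injected by Prism (the actual sample may register additional types):

using Microsoft.Practices.Unity;
using Prism.Modularity;
using Prism.Unity;

public class SampleModule : IModule
{
    private readonly IUnityContainer _container;

    public SampleModule(IUnityContainer container)
    {
        _container = container;
    }

    public void Initialize()
    {
        // Register the navigation type only when the module is loaded
        _container.RegisterTypeForNavigation<SamplePage>();
    }
}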

I also applied Xamarin.Forms XAML compilation to the module to improve performance, which is always great to have 😉

It’s worth noting that in this new Prism release the default value for the attached property ViewModelLocator.AutowireViewModel is set to true, so we can omit it and the framework will automatically associate SamplePageViewModel as the BindingContext for the view:

I then explored the new breaking changes for the bootstrapping process: the application class now needs to inherit from the PrismApplication class, and two new virtual methods, OnInitialized() and RegisterTypes(), permit respectively specifying the implementation for navigating to the home page and registering the known types / ViewModels for navigation:

The third overridden method, ConfigureModuleCatalog(), informs the app to initialise the catalog with the module we created previously and sets the initialization mode to OnDemand, which means the module is not activated when the application starts but must be loaded explicitly. This feature is particularly useful when some functionalities of the app must be initialised only after other requirements are met (for instance, successful authentication) or when applications are extended via modules.
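
A sketch of the resulting bootstrapping code (page and module names follow the post; the registrations in the official sample may differ slightly):

using Prism.Modularity;
using Prism.Unity;
using Xamarin.Forms;

public partial class App : PrismApplication
{
    public App(IPlatformInitializer initializer = null) : base(initializer) { }

    protected override void OnInitialized()
    {
        InitializeComponent();
        NavigationService.NavigateAsync("NavigationPage/HomePage");
    }

    protected override void RegisterTypes()
    {
        Container.RegisterTypeForNavigation<NavigationPage>();
        Container.RegisterTypeForNavigation<HomePage>();
    }

    protected override void ConfigureModuleCatalog()
    {
        // Register the module for on-demand initialization: it is loaded explicitly later
        ModuleCatalog.AddModule(new ModuleInfo
        {
            ModuleName = "SampleModule",
            ModuleType = typeof(SampleModule),
            InitializationMode = InitializationMode.OnDemand
        });
    }
}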

The sample view was in place, so I proceeded with the implementation of the HomePage: I wrapped the existing one in a NavigationPage to allow the correct back stack and then created two commands for initializing the module and navigating to the SamplePage defined previously:

and the corresponding ViewModel:

The module is initialized by injecting the Prism ModuleManager and then calling the LoadModule() method:

The navigation to the page defined in the module is performed by:

The property IsSampleModuleRegistered made it possible to control the CanExecute() for the button commands using the nice fluent ObservesProperty(() => …) syntax available in Prism:
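
A sketch of the two commands in the HomePage ViewModel showing that fluent syntax (property and command names mirror the ones mentioned above, and the rest of the class is omitted):

using Prism.Commands;
using Prism.Modularity;
using Prism.Mvvm;
using Prism.Navigation;

public class HomePageViewModel : BindableBase
{
    private readonly IModuleManager _moduleManager;
    private readonly INavigationService _navigationService;

    private bool _isSampleModuleRegistered;
    public bool IsSampleModuleRegistered
    {
        get { return _isSampleModuleRegistered; }
        set { SetProperty(ref _isSampleModuleRegistered, value); }
    }

    public DelegateCommand LoadModuleCommand { get; }
    public DelegateCommand NavigateCommand { get; }

    public HomePageViewModel(IModuleManager moduleManager, INavigationService navigationService)
    {
        _moduleManager = moduleManager;
        _navigationService = navigationService;

        // Loads the module on demand; enabled only while the module is not yet registered
        LoadModuleCommand = new DelegateCommand(() =>
        {
            _moduleManager.LoadModule("SampleModule");
            IsSampleModuleRegistered = true;
        }, () => !IsSampleModuleRegistered).ObservesProperty(() => IsSampleModuleRegistered);

        // Navigates to the page registered by the module; enabled only after it is loaded
        NavigateCommand = new DelegateCommand(
            async () => await _navigationService.NavigateAsync("SamplePage"),
            () => IsSampleModuleRegistered).ObservesProperty(() => IsSampleModuleRegistered);
    }
}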

Here we go: I found the Prism implementation in this new version very stable and working well with Xamarin.Forms. The modularization capabilities are useful to write clean, maintainable and extensible apps.

The source code is available for download as part of the official Prism samples on GitHub.

Looking forward to exploring all the other capabilities available in Prism and Xamarin.Forms. Happy coding!

Having fun with Xamarin.Forms and Multi-Touch Behaviors

Recently Xamarin has released preview support for the Universal Windows Platform in their Xamarin.Forms framework, so I have been playing around with version 2.0 to test its features and verify how easy it is to target multiple platforms (iOS, Android, Windows 10 UWP, Windows Phone, Windows 8.1) with a single codebase.

One of the experiments I have done is related to custom multi-touch gestures: using a XAML Behavior is a common way to write well-structured code, so I started by creating a new Cross-Platform Xamarin.Forms Portable project and upgrading the NuGet packages to the latest stable version of the framework (currently v2.0.1.6505).

I then read the official documentation and analysed the samples available on GitHub: a very good example is the PinchGesture one, so, starting from it, I created a new MultiTouchBehavior implementing this gesture and attached the same Behavior to an Image object added to a sample ContentPage, as described below in this lovely cross-platform XAML 🙂

The BindingContext=”{Binding}” is used to trigger the BindingContextChanged event and correctly initialise the GestureRecognizers for the parent object, since AssociatedObject.Parent is still null when Behavior.OnAttachedTo() is called (I suppose the XAML tree is not yet completely created when the behavior is attached in Xamarin.Forms):
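
A simplified sketch of the idea, reduced to pinch-to-scale only (the full MultiTouchBehavior in the sample is more complete):

using System;
using Xamarin.Forms;

public class MultiTouchBehavior : Behavior<View>
{
    private View _associatedObject;
    private double _currentScale = 1;
    private bool _gestureAttached;

    protected override void OnAttachedTo(View bindable)
    {
        base.OnAttachedTo(bindable);
        _associatedObject = bindable;

        // Parent is still null at this point: wait for the binding context to be set
        bindable.BindingContextChanged += OnBindingContextChanged;
    }

    private void OnBindingContextChanged(object sender, EventArgs e)
    {
        var parent = _associatedObject.Parent as View;
        if (parent == null || _gestureAttached)
        {
            return;
        }

        // Attach the pinch recognizer to the parent so it can receive the touch events
        var pinch = new PinchGestureRecognizer();
        pinch.PinchUpdated += OnPinchUpdated;
        parent.GestureRecognizers.Add(pinch);
        _gestureAttached = true;
    }

    private void OnPinchUpdated(object sender, PinchGestureUpdatedEventArgs e)
    {
        if (e.Status == GestureStatus.Running)
        {
            // e.Scale is the relative change since the last update
            _currentScale *= e.Scale;
            _associatedObject.Scale = _currentScale;
        }
    }

    protected override void OnDetachingFrom(View bindable)
    {
        bindable.BindingContextChanged -= OnBindingContextChanged;
        base.OnDetachingFrom(bindable);
    }
}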

Here is the project deployed to the Android, iOS and Windows 10 emulators:


I’ve been particularly impressed by Xamarin.Forms: the possibility to target so many platforms using the same code is a killer feature and the development environment is also very comfortable to use.

The sample code used in this post is available on GitHub, I’m planning to add more functionalities in the future with particular regard to other common multi-touch gestures using Xamarin.Forms and XAML.

NDepend v6 available

Version 6 of NDepend is now available for download from the official site.

New features include:

  • enhanced Visual Studio integration;
  • support for Visual Studio 2015;
  • rule files shareable amongst projects;
  • descriptions and HowToFix guidance for the default rules;
  • fewer false positives from the default rules;
  • coloured code metric view;
  • intuitive display of code coverage percentages;
  • compiler-generated code removal;
  • async support;
  • analysis enhancements;
  • support for Visual Studio Blue, Dark, Light themes;
  • support for high DPI resolution;
  • integration with TFS, SonarQube and TeamCity.

A detailed description of the new capabilities is available here.