A Dashboard for Microsoft Orleans

Introduction

While Microsoft Orleans has developed into a robust and scalable product over the years, the engagement between the Orleans team and the community around it is a spectacular example of the collaboration that open source projects should foster. On the one hand, the Orleans team members make themselves very available to help out with questions and issues that developers have when using Orleans. On the other hand, developers using Orleans have together built OrleansContrib, a collection of repositories adding unofficial functionality on top of what Orleans provides.

These repositories cover a variety of areas: storage providers, logging and telemetry, documentation on design patterns, virtual meetups… and, of course, the Orleans Dashboard. This dashboard is a great way to monitor the activity of your silos and the grains within them.

Dashboard Overview

At a glance, the dashboard gives you a brief summary of your active silos, and the grain activations within them.

In the Grains section, you get an overview of the total grain activations in your silos, and a breakdown by grain type. For each type of grain, you can see statistics on the number of activations, the rate of exceptions, throughput, and latency. In this case I haven’t actually created any grains, so all you can see are the system grains and the ones created by the Dashboard itself; however, the data you see here will be a lot more interesting once you use the dashboard to monitor the activity and behaviour of your actual grains.

You can home in on an individual grain type. Aside from the statistics mentioned earlier, you also get detailed statistics on throughput, latency and failed requests per method.

At the bottom of the same page detailing a single type of grain, you can see a list of activations of that grain by silo.

Moving on, in the Silos section, you can see a summary of your active silos.

When you click on a silo, it gives you a more detailed view. At the top, you can see a graphical view showing resource utilisation: CPU, memory, and grains.

At the bottom, there are sections showing Silo Counters (number of clients, and messages sent and received), Silo Properties (information about the silo and its configuration), and a list of grain activations by type in the silo.

Adding the Dashboard to an Orleans silo

The Orleans Dashboard GitHub page explains how to set up and configure the dashboard. The first thing you need to do is install a NuGet package:

Install-Package OrleansDashboard

Then you will need to add an entry in the silo configuration to enable the dashboard. This can be done either using the XML configuration, or programmatically in code.

If you’re using a Dev/Test Host project to play with Orleans, it’s probably easier to do this in code. Find the file OrleansHostWrapper.cs, and after adding using OrleansDashboard; at the top, add the config.Globals.RegisterDashboard() line shown below to register the dashboard:

            var config = ClusterConfiguration.LocalhostPrimarySilo();
            config.AddMemoryStorageProvider();
            config.Globals.RegisterDashboard(); // registers the dashboard as a bootstrap provider
            siloHost = new SiloHost(siloName, config);

If, on the other hand, you have a properly partitioned set of projects (as in “Getting Started with Microsoft Orleans”) and are using an OrleansConfiguration.xml file for your silo’s configuration, then just add an entry for the dashboard under the <Globals> node:

<?xml version="1.0" encoding="utf-8"?>
<OrleansConfiguration xmlns="urn:orleans">
  <Globals>
    <SeedNode Address="localhost" Port="11111" />
    <BootstrapProviders>
      <Provider Type="OrleansDashboard.Dashboard" Name="Dashboard" />
    </BootstrapProviders>
  </Globals>
  <Defaults>
    <Networking Address="localhost" Port="11111" />
    <ProxyingGateway Address="localhost" Port="30000" />
  </Defaults>
</OrleansConfiguration>

The dashboard runs by default at localhost:8080. If you want, you can change the port, or add basic username/password security. See the Orleans Dashboard page for an example showing how to configure these.
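For instance, the port and credentials can be set as extra properties on the bootstrap provider entry. The sketch below assumes Port, Username and Password are the attribute names, so treat them as a guess and check the README for the authoritative ones:

<!-- attribute names assumed; verify against the OrleansDashboard README -->
<Provider Type="OrleansDashboard.Dashboard" Name="Dashboard" Port="8081" Username="admin" Password="admin" />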

Summary

The Orleans dashboard is a great way to get detailed information about the silos in an Orleans cluster, and the grains within those silos. With detailed statistics like throughput, latency and failed requests, it is an invaluable tool not only for monitoring the smooth operation of the cluster, but also for troubleshooting errors and performance bottlenecks.

Setting up the dashboard involves nothing more than installing a NuGet package and adding a little configuration to enable it. Additional configuration to change the port or add username/password protection is also possible.

The Orleans Dashboard homepage claims that:

“This project is alpha quality, and is published to collect community feedback.”

I’d argue that it’s pretty damn impressive for something that claims to be alpha quality.

See also: Orleans Virtual Meetup #11: A monitoring and visualisation show with Richard Astbury, Dan Vanderboom and Roger Creyke (13th October 2016).

Setting Up Elasticsearch and Kibana on Windows

Elasticsearch is fantastic for indexing your data so that it can be queried by its lightning-fast search engine. With Kibana, you also get the ability to analyse and visualise that data. Both of these products are provided for free by Elastic.

Installing Java Runtime Environment

Elastic products are developed in Java, so you’ll need the Java Runtime Environment (JRE) to run them. Get the latest JRE from Oracle’s (rather ugly) downloads page. Either use the .exe installer, or download the .zip file and then extract the folder inside.

Either way, take note of the JRE folder location and add it as an environment variable. To do this, hit the start menu and type “environment variables”:

In the window that comes up, go on Environment Variables…:

You will now see the user and system environment variables. Hit New… under the System variables:

Name it JAVA_HOME, and in the value put in the path to the JRE folder (not its bin folder):
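If you prefer doing this from the command line, setx can set the same system-wide variable from an elevated command prompt (the JRE path below is just an example; use your own):

rem path is an example; point this at your actual JRE folder
setx JAVA_HOME "C:\Program Files\Java\jre1.8.0_121" /M

Note that setx only affects command prompts opened after you run it, so open a fresh one before testing.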

You can now OK out of the various dialog windows.

Setting up Elasticsearch

Go to the Elasticsearch product page, and hit Download:

On the next page, download the ZIP file:

Extract the folder in the .zip file somewhere.

You can now run elasticsearch.bat, which you’ll find in the bin folder. If you get “The syntax of the command is incorrect”, you probably didn’t set the JAVA_HOME environment variable as explained in the previous section.

elasticsearch.bat

Running this command, you should see a bunch of initialisation output:

…and if you browse to localhost:9200, you should see some JSON returned:
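The response is abridged here and the values are placeholders, but it should look roughly like this:

{
  "name" : "your-node-name",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "5.2.2"
  },
  "tagline" : "You Know, for Search"
}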

Now that we know it’s working, we can install it as a Windows service. So press Ctrl+C to kill the instance of Elasticsearch you just ran, and instead run:

elasticsearch-service.bat install

This should install it as a service:

This installs it with the Manual startup type, and does not start it. You probably want to change that to Automatic (Delayed Start) from the Services window in Windows, and also start it. Once you have done that, give it a few seconds to start, and then verify again that you get a response from localhost:9200.
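If you’d rather script this than click through the Services window, sc can do it from an elevated prompt. The service name below is an assumption (it should match whatever elasticsearch-service.bat install reported), so verify it with sc query first:

rem service name assumed; check the output of the install command
sc config "elasticsearch-service-x64" start= delayed-auto
sc start "elasticsearch-service-x64"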

Setting up Kibana

Next, we will set up Kibana. Grab the Windows .zip file from the Kibana downloads page:

Extract it wherever your heart desires.

Make sure Elasticsearch is running. Then, in Kibana’s bin folder, run kibana.bat:

kibana.bat

Some text will be written to the console as Kibana is initialised, and then you should be able to go to localhost:5601 and actually get a webpage:

Now we know that it works. Let’s set it up as a service. Kill the instance we just ran using Ctrl+C first.

Oh crap, Kibana does not come with a service installer! What are we gonna do?

Enter NSSM, the Non-Sucking Service Manager, which we can use to install just about any application as a Windows service, using either the command line or an interactive GUI. After downloading NSSM, we can install Kibana as a Windows service with a command like the following from NSSM’s win64 folder:

nssm install "Kibana 5.2.2" C:\[...]\kibana-5.2.2-windows-x86\bin\kibana.bat

With an elevated command prompt, we can also configure the Windows service, such as setting the startup type and the description:

nssm set "Kibana 5.2.2" Start "SERVICE_DELAYED_AUTO_START"
nssm set "Kibana 5.2.2" Description "Kibana lets you visualize your Elasticsearch data"

Finally, we start the service:

nssm start "Kibana 5.2.2"

If all goes well:

…then we can go back to localhost:5601 and verify that it’s really running.

With that, it’s all set up. All that’s missing is an index with some data for Kibana to visualise, but that’s beyond the scope of this article.

Morse Code Converter Using Dictionaries

This elementary article was originally posted as “C# Basics: Morse Code Converter Using Dictionaries” at Programmer’s Ranch, and was based on Visual Studio 2010. This updated version uses Visual Studio 2017; all screenshots as well as the intro and conclusion have been changed. The source code is now available at the Gigi Labs BitBucket repository.

Today, we’re going to write a little program that converts regular English characters and words into Morse Code, so each character will be represented by a series of dots and/or dashes. This article is mainly targeted at beginners and the goal is to show how dictionaries work.

We’ll start off by creating a console application. After going File -> New Project… in Visual Studio, select the Console App (.NET Framework) project type, and select a name and location for it. In Visual Studio 2017, you’ll find other options for console applications, such as Console App (.NET Core). While this simple tutorial should still work, we’re going to stick to the more traditional and familiar project type to avoid confusion.

In C#, we can use a dictionary to map keys (e.g. 'L') to values (e.g. ".-.."). In other programming languages, dictionaries are sometimes called hash tables, maps, or associative arrays. The following is an example of a dictionary mapping the first two letters of the alphabet to their Morse equivalents:

            Dictionary<char, string> morse = new Dictionary<char, string>();
            morse.Add('A', ".-");
            morse.Add('B', "-...");

            Console.WriteLine(morse['A']);
            Console.WriteLine(morse['B']);

            Console.WriteLine("Press any key...");
            Console.ReadKey(false);

First, we are declaring a dictionary. A dictionary is a generic type, so we need to specify, in the <> part, which data types we are storing. In this case, we have a char key and a string value. We can then add items by supplying the key and value to the Add() method. Finally, we retrieve values just like we would access an array: using the [] syntax. Unlike arrays, however, dictionaries aren’t restricted to using integers as keys; you can use any data type you like. Note: you’ll know from the earlier article, “The ASCII Table (C#)”, that a character can be directly converted to an integer. Dictionaries work just as well if you use other data types, such as strings.

Here is the output:
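.-
-...
Press any key...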

If you try to access a key that doesn’t exist, such as morse['C'], you’ll get a KeyNotFoundException. You can check whether a key exists using ContainsKey():

            if (morse.ContainsKey('C'))
                Console.WriteLine(morse['C']);
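As a side note, TryGetValue() combines the existence check and the lookup into a single call:

            // writes the value only if the key exists; no KeyNotFoundException
            if (morse.TryGetValue('C', out string code))
                Console.WriteLine(code);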

OK. Before we build our Morse converter, you should know that there are several ways of populating a dictionary. One is the Add() method we have seen above. Another is to assign values directly:

            morse['A'] = ".-";
            morse['B'] = "-...";

You can also use collection initialiser syntax to set several values at once:

            Dictionary<char, string> morse = new Dictionary<char, string>()
            {
                {'A' , ".-"},
                {'B' , "-..."}
            };

Since we only need to set the Morse mapping once, I’m going to use this method. Don’t forget the semicolon at the end! Replace your current code with the following:

            Dictionary<char, string> morse = new Dictionary<char, string>()
            {
                {'A' , ".-"},
                {'B' , "-..."},
                {'C' , "-.-."},
                {'D' , "-.."},
                {'E' , "."},
                {'F' , "..-."},
                {'G' , "--."},
                {'H' , "...."},
                {'I' , ".."},
                {'J' , ".---"},
                {'K' , "-.-"},
                {'L' , ".-.."},
                {'M' , "--"},
                {'N' , "-."},
                {'O' , "---"},
                {'P' , ".--."},
                {'Q' , "--.-"},
                {'R' , ".-."},
                {'S' , "..."},
                {'T' , "-"},
                {'U' , "..-"},
                {'V' , "...-"},
                {'W' , ".--"},
                {'X' , "-..-"},
                {'Y' , "-.--"},
                {'Z' , "--.."},
                {'0' , "-----"},
                {'1' , ".----"},
                {'2' , "..---"},
                {'3' , "...--"},
                {'4' , "....-"},
                {'5' , "....."},
                {'6' , "-...."},
                {'7' , "--..."},
                {'8' , "---.."},
                {'9' , "----."},
            };

           

            Console.WriteLine("Press any key...");
            Console.ReadKey(false);

In the empty space between the dictionary and the Console.WriteLine(), we can now accept user input and convert it to Morse:

            Console.WriteLine("Write something:");
            String input = Console.ReadLine();
            input = input.ToUpper();

            for (int i = 0; i < input.Length; i++)
            {
                // separate consecutive Morse characters with a forward slash
                if (i > 0)
                    Console.Write('/');

                // write the Morse equivalent, skipping characters we have no mapping for
                char c = input[i];
                if (morse.ContainsKey(c))
                    Console.Write(morse[c]);
            }

            Console.WriteLine();

Here, the user writes something and it is stored in the input variable. We then convert this to uppercase because the keys in our dictionary are uppercase. Then we loop over each character in the input string, and write its Morse equivalent if it exists. We separate different characters in the Morse output by a forward slash (/). Here’s the output for a sample input of SOS:
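Write something:
SOS
.../---/...
Press any key...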

Awesome! 🙂 In this article we used Visual Studio to create a program that converts alphanumeric text into the Morse-encoded equivalent, while learning to use dictionaries in the process.

Which .NET Standard Version To Target

When I migrated Dandago.Finance to .NET Core yesterday, there was something I overlooked. I realised this when I tried to install the resulting package, targeting .NET Standard 1.6, in a new project. It worked fine in a .NET Core console application, but not in one targeting the full .NET Framework:

In fact, even referencing Dandago.Finance directly results in weird stuff going on:

The problem is immediately evident if we take a look at the compatibility grid for .NET Standard, a relevant excerpt of which at the time of writing this article is the following:
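.NET Standard     1.0    1.1    1.2      1.3    1.4      1.5      1.6
.NET Core         1.0    1.0    1.0      1.0    1.0      1.0      1.0
.NET Framework    4.5    4.5    4.5.1    4.6    4.6.1    4.6.2    vNext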

Targeting a given version of .NET Standard means your library supports the corresponding versions of .NET Core and .NET Framework, and everything upwards of them. For instance, if we target .NET Standard 1.4, then we support .NET Framework 4.6.1 and up, and .NET Core 1.0 and up.

But since Dandago.Finance was built to target .NET Standard 1.6, it could not be used from .NET Framework 4.6.2 or earlier (the first .NET Framework version that supports .NET Standard 1.6 is listed as “vNext”, whatever that means in this context).

So in practice, in order to maximise a library’s compatibility, you will want to target the lowest possible version of .NET Standard. You can do this by changing the target framework from the project settings:
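Under the hood, this just sets the TargetFramework property in the new SDK-style .csproj file, so you can also edit it by hand. A minimal sketch:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- the lowest .NET Standard version that gives you the APIs you need -->
    <TargetFramework>netstandard1.2</TargetFramework>
  </PropertyGroup>
</Project>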

In the case of Dandago.Finance, .NET Standard 1.1 provided insufficient API coverage to make it work:

Targeting .NET Standard 1.2 made Dandago.Finance compile just fine, and I verified that the resulting package installs fine in console applications targeting .NET Framework 4.5.1 and up (as per the compatibility chart), and .NET Core 1.0 and up.

However, this means we have had to sacrifice support for .NET Framework 4.5. This is no big deal since .NET Framework versions 4, 4.5 and 4.5.1 have been dead for over a year now. So technically we could have targeted .NET Standard 1.3 (.NET Framework 4.6 and upwards), but it’s good to give extra backwards compatibility for legacy code where we can.

Migrating Dandago.Finance to .NET Core

Microsoft has recently been heavily investing in .NET Core, which you can think of as the next generation of the .NET Framework. There are various benefits to .NET Core, the biggest one being that it is cross-platform; thus compliant code can run on Windows, Linux and Mac (and probably others in future).

In this article, we’re going to take one of my smaller projects – Dandago.Finance – and port it to .NET Core. Dandago.Finance is ideal for demonstrating a first migration because it is very small, consisting of a main project (3 classes) and a unit test project (2 classes) – both class libraries.

Before we start, make sure you are using the latest tools (such as the recently released Visual Studio 2017). .NET Core tools have undergone a lot of radical changes (e.g. project.json is dead) so you don’t want to be learning based on something that’s already obsolete. If you’re using VS2017, make sure you have the .NET Core cross-platform development workload installed.

Migrating the main library

We’re going to start a fresh class library targeting .NET Core and move our code there. Actually, that statement is not entirely correct: if you open Visual Studio 2017, you’ll see that there are at least three different kinds of class library you can create (possibly more, depending on any additional tooling you have installed):

  • Class Library (.NET Framework)
  • Class Library (.NET Core)
  • Class Library (.NET Standard)

This is very confusing, and I asked a question about it on Stack Overflow yesterday that attracted some pretty detailed answers. In short, if you want your class libraries to be as portable as possible, you need to target .NET Standard. .NET Standard is a specification detailing the APIs that need to be available in compatible frameworks. .NET Core, and certain versions of the full .NET Framework, implement .NET Standard. However, each of them also incorporates a lot of other runtime-related functionality, so targeting .NET Core specifically means your code can’t be used under the full .NET Framework.

So let’s create a project of type Class Library (.NET Standard). As always, this will create a solution with the same name as the project.

Next, we’ll delete the automatically created Class1 class, and copy the class files from the old Dandago.Finance library into the new project folder. You’ll see that Visual Studio automatically detects the new files and includes them in the project, without you needing to add them explicitly:

 

Migrating the test project

Let’s add a new class library for the unit tests, but this time it needs to be a Class Library (.NET Core). If you get this wrong and choose Class Library (.NET Standard) instead, Visual Studio won’t find your tests, and the dotnet test command will refuse to run them (as per this Stack Overflow question). The reason why .NET Standard won’t work for unit tests is detailed in the corresponding answer: in short, we need to specify a target framework that will be responsible for actually running the tests; .NET Standard on its own is not enough.

Next, we need to add a reference to the Dandago.Finance project.
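If you prefer the command line to Visual Studio’s Add Reference dialog, the dotnet CLI can do the same thing (the file names and relative paths below are assumptions based on the project names, so adjust them to your layout):

rem paths assumed; adjust to your actual folder structure
dotnet add Dandago.Finance.Tests\Dandago.Finance.Tests.csproj reference Dandago.Finance\Dandago.Finance.csproj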

Now we can repeat the procedure we followed for the main library: delete Class1.cs and copy over the test classes.

However, this isn’t going to be as smooth as with the main library. The original test project uses NUnit, and at the time of writing, that isn’t fully supported by .NET Core. Fortunately, however, it’s easy to change to xUnit, which does already boast .NET Core support.

First, we need to install the following packages:

Install-Package Microsoft.NET.Test.Sdk
Install-Package xunit
Install-Package xunit.runner.visualstudio

Then, we need to make the following substitutions:

  1. using NUnit.Framework; becomes using Xunit;
  2. [TestFixture] goes away
  3. [Test] becomes [Fact]
  4. Assert.IsTrue(...) becomes Assert.True(...)
  5. Assert.IsFalse(...) becomes Assert.False(...)
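Applied to a trivial test (a made-up example rather than actual Dandago.Finance code), the conversion looks like this:

// Before, with NUnit:
using NUnit.Framework;

[TestFixture]
public class CurrencyTests
{
    [Test]
    public void Matching_codes_should_be_equal()
    {
        Assert.IsTrue("EUR" == "EUR");
    }
}

// After, with xUnit:
using Xunit;

public class CurrencyTests
{
    [Fact]
    public void Matching_codes_should_be_equal()
    {
        Assert.True("EUR" == "EUR");
    }
}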

The solution should now build, and the unit tests should run successfully:

Summary

Migrating Dandago.Finance to .NET Core has taught us a few things:

  1. Visual Studio can automatically detect new files for .NET Core / .NET Standard projects.
  2. Portable class libraries should target .NET Standard.
  3. Unit test projects should target .NET Core.
  4. Use xUnit for .NET Core unit tests.