Reading RabbitMQ Settings Using .NET Core Configuration

The .NET Core Configuration system is extremely powerful and flexible. One of the features that I use the most is its capability to bind structured settings from a source (e.g. JSON file) to a C# object.

A very good example of this is obtaining RabbitMQ settings so that you can populate the ConnectionFactory. In the past, I’ve created a DTO (class) for this, and a parser that could populate this class based on a connection string format that I invented on the spot out of necessity. The good news is that you don’t have to do this any more. .NET Core configuration allows you to bind your config to an object, even if it’s coming from a third party library. Let’s see how.

Typical RabbitMQ Configuration

First, in order to use RabbitMQ, we need to install the RabbitMQ Client NuGet package.

Install-Package RabbitMQ.Client

Next, we’ll typically create a connection by means of the ConnectionFactory. We’ll need to populate the necessary fields, whether directly or by reading them from config. Technically, most of the settings below are not necessary because defaults are assumed if not provided, but we’ll include them anyway as we’re not assuming everyone is connecting to RabbitMQ on localhost.

            var connectionFactory = new ConnectionFactory()
            {
                HostName = "localhost",
                UserName = "guest",
                Password = "guest",
                VirtualHost = "/",
                AutomaticRecoveryEnabled = true,
                RequestedHeartbeat = 30
            };

If we use .NET Core configuration, we don’t even need to do this any more.

Connection Settings From JSON File

Let’s start by adding a new text file to the project called appsettings.json. From its properties, change it to copy to the output directory on build (Copy always and Copy if newer are both fine). In the file, we’ll add the JSON equivalent of what we have in ConnectionFactory above:

{
  "RabbitMqConnection": {
    "HostName": "localhost",
    "Username": "guest",
    "Password": "guest",
    "VirtualHost": "/",
    "AutomaticRecoveryEnabled": true,
    "RequestedHeartbeat": 30
  }
}

Now, we need a way to read this JSON file and bind it to a ConnectionFactory object. To do that, we need the following NuGet packages:

Install-Package Microsoft.Extensions.Configuration
Install-Package Microsoft.Extensions.Configuration.Json
Install-Package Microsoft.Extensions.Configuration.Binder

The .NET Core configuration system is split into multiple packages, so you can bring in only what you actually need. The first package is the heart of the framework, and you don’t need to install it directly because the second and third packages will both bring it in as a dependency when you install them. As for the Json and Binder packages, we’ll see what they do in a minute.

.NET Core configuration is loaded by means of a ConfigurationBuilder object. In our case, we’ll have:

            var config = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .Build();

The AddJsonFile() extension method is provided by the Json package (the second one we installed earlier). The result of this is an object which implements IConfigurationRoot, and we can use this to read our settings.
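Incidentally, the config object also lets us read individual values directly, using colon-separated keys. We won’t need this for the binding approach below, but it’s handy to know:

var hostName = config["RabbitMqConnection:HostName"]; // "localhost"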

Next, we’ll prepare an empty ConnectionFactory object that the binder will populate from the configuration in the next step.

var connectionFactory = new ConnectionFactory();

Finally, we can bind the entire “RabbitMqConnection” section of the appsettings.json file to our ConnectionFactory object, using the Bind() method (provided via the Binder package we installed earlier):

config.GetSection("RabbitMqConnection").Bind(connectionFactory);

If the key-value pairs in the JSON section match properties on the connectionFactory object, they will be set. You’ll know it worked because RequestedHeartbeat has a default value of 60, but it will be overridden by the value of 30 from appsettings.json.
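If you want to verify this yourself, a quick sanity check (purely illustrative, not required code) will do:

            Console.WriteLine(connectionFactory.RequestedHeartbeat); // 30, rather than the default 60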

Testing Connectivity

Now that you are populating the ConnectionFactory, you can connect in the same way as you used to before. This should suffice, as you’ll get an exception if your connection settings are incorrect:

            using (var conn = connectionFactory.CreateConnection())
            {
                Console.WriteLine("Connected!");

                // ...                

                Console.ReadLine();
            }

But if you want to make damn sure that you can actually interact with RabbitMQ, you can write a minimal consumer, and then send it messages via the Management Plugin’s Web UI:

            using (var conn = connectionFactory.CreateConnection())
            using (var channel = conn.CreateModel())
            {
                Console.WriteLine("Connected!");

                const string queueName = "madrid";
                channel.QueueDeclare(queueName, true, false, false, null);

                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += (s, a) => Console.WriteLine("Message received!");
                channel.BasicConsume(queueName, true, consumer);

                Console.ReadLine();
            }

Wondering about the name of the queue? It’s because I found lots of them in Madrid. No kidding.

Summary

The point here was to show you how you can read settings from a section of a JSON file and have it directly deserialized into an object, using the binder feature of .NET Core configuration. The example here is specific to RabbitMQ, but you can use the same approach with any class you like, as long as the properties have public setters.
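For instance, here’s a minimal sketch with a hypothetical settings class of our own; the binding call is exactly the same:

    public class SmtpSettings // hypothetical example class
    {
        public string Host { get; set; }
        public int Port { get; set; }
    }

    // ...

    var smtpSettings = new SmtpSettings();
    config.GetSection("SmtpSettings").Bind(smtpSettings);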

Also, remember that .NET Core configuration actually targets .NET Standard. That means you can use it not only with .NET Core apps, but also in the full .NET Framework, and any other compatible runtimes.

Indexing and Searching Geopolygons using ElasticSearch

ElasticSearch is great for indexing and searching text, but it also has a lot of functionality related to searching points and regions on the world map. In this article, we’ll learn how to index polygons corresponding to territories in the world, and find whether a point is in any indexed polygon.

Building Polygons with Geocoordinates

Back in school, we (hopefully) learned that a point in 2D space can be represented as an (x, y) pair of coordinates. A point in the world can similarly be identified by a (latitude, longitude) pair of geocoordinates. We can obtain geocoordinates for a location by clicking on the map in Google Maps or similar tools.

The analogy is not perfect though; geocoordinates are not linear, which is a result of the curvature of the Earth. This is not really important for us; the point is that we can represent any given point on the Earth’s surface by means of latitude and longitude.

Once we can identify points, it’s natural to extend the concept to 2D geometry. By taking several points, we can create polygons that mark the boundaries of a given territory, such as a country or state. Jeremy Hawes’ Google Maps Polygon Coordinates Tool is great for building such polygons.

Using this tool, we can very easily construct a rough polygon representing the state of Wyoming in the US. Wyoming is great to use as a simple example because it’s roughly rectangular, so we only need four points for a workable approximation.

Below the map in this polygon tool, you’ll get the coordinates of the points along with some extra JavaScript (which you could later paste directly into the code editor). In this case, we’ve got the following coordinates in (latitude, longitude) format:

45.01967,-104.04405
44.99904,-111.03084
41.011,-111.04131
41.00193,-104.03375

Once we have the points that make up the polygon, we can feed them into Elasticsearch.

Indexing Geopolygons in Elasticsearch

Before we can index anything, we need to create a mapping that defines the structure of an index, including any fields and their data types. The Mapping Geo Shapes page in the Elasticsearch documentation provides a starting point. However, the documentation is crap, and if you follow the example in the docs closely, you’ll get an error.

After a quick search, this Stack Overflow answer reveals the cause of the problem: Elasticsearch no longer likes the string data type, and expects you to use text instead. This wouldn’t have been a problem if they’d bothered to update their documentation once in a while. Anyhow, our mapping request for this example will be as follows:

PUT /regions
{
  "mappings": {
    "region": {
      "properties": {
        "name": {
          "type": "text"
        },
        "location": {
          "type": "geo_shape"
        }
      }
    }
  }
}

This essentially means that each region item in the regions index will have a name and a location, the latter being the polygon itself. While we will be focusing exclusively on polygons in this article, it is worth noting that the geo_shape data type supports a lot of other geometric constructs – refer to the Geo-Shape documentation for more information.

Once our mapping is in place, we can proceed to index our polygons. The Indexing Geo Shapes documentation page shows how to do this. There’s a catch though: Elasticsearch expects to receive coordinates in (longitude, latitude) format, which is the reverse of what we’ve been using so far. We can use a simple regular expression (e.g. in Notepad++) to swap our coordinates:

(\-?\d+\.?\d*),(\-?\d+\.?\d*)
\2,\1

The first line shows the regular expression used to match coordinates, and the second line shows what it should be replaced by, i.e. the swapped coordinates.
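If you prefer to do the swap in code rather than in a text editor, the same regular expression works with .NET’s Regex class. A small sketch, using one of the coordinate pairs above:

// using System.Text.RegularExpressions;
var swapped = Regex.Replace("45.01967,-104.04405",
    @"(\-?\d+\.?\d*),(\-?\d+\.?\d*)", "$2,$1"); // "-104.04405,45.01967"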

Let’s use the following query to try to index our Wyoming polygon:

PUT /regions/region/wyoming
{
    "name" : "Wyoming",
    "location" : {
        "type" : "polygon", 
        "coordinates" : [[ 
        [ -104.04405,45.01967 ],
        [ -111.03084,44.99904 ],
        [ -111.04131,41.011   ],
        [ -104.03375,41.00193 ]
        ]]
    }
}

This actually fails with an error.

This is because Elasticsearch expects the polygon to be closed, i.e. it must return to the starting point. Another thing to watch out for is any polygons that have self-intersections, which Elasticsearch doesn’t allow either.

We can fix our error by simply repeating the first coordinate at the end:

PUT /regions/region/wyoming
{
    "name" : "Wyoming",
    "location" : {
        "type" : "polygon", 
        "coordinates" : [[ 
        [ -104.04405,45.01967 ],
        [ -111.03084,44.99904 ],
        [ -111.04131,41.011   ],
        [ -104.03375,41.00193 ],
        [ -104.04405,45.01967 ]
        ]]
    }
}

It should work now.

Great! Our Wyoming polygon is now in Elasticsearch.

Querying Geopolygons in Elasticsearch

We can again turn to the Elasticsearch documentation for examples of how to query our geopolygon. We can do this by taking a circle with a given radius and seeing whether it intersects the polygon, as shown in Querying Geo Shapes. Don’t confuse this with the Geo Polygon Query documentation, which is actually the opposite of our situation (i.e. having a point in Elasticsearch, and providing the polygon to test against at query time).

To test this, we’ll pick a point somewhere in Wyoming. I used Google Maps to pick a point within Yellowstone National Park, which for all we know might just be where Yogi Bear lives.

Having obtained the coordinates, we can hit Elasticsearch with a query:

GET /regions/region/_search
{
  "query": {
    "geo_shape": {
      "location": { 
        "shape": { 
          "type":   "circle", 
          "radius": "25m",
          "coordinates": [ 
            -109.874838, 44.439550
          ]
        }
      }
    }
  }
}

And you’ll see that Wyoming is actually returned in the results.

You’ll also notice that Elasticsearch gave us back all the coordinate data which we don’t really care about in this case. This can be pretty inefficient if you’re using very large and detailed polygons. We can filter that out by specifying the _source property:

GET /regions/region/_search
{
  "_source": "name", 
  "query": {
    "geo_shape": {
      "location": { 
        "shape": { 
          "type":   "circle", 
          "radius": "25m",
          "coordinates": [ 
            -109.874838, 44.439550
          ]
        }
      }
    }
  }
}

The results are now nice and clean.

Next, we’ll take a point in Texas and see that we don’t get results for that.

Geopolygons with Holes

Some territories aren’t simple polygons; they contain other territories inside them, and so the polygon has a hole. Examples include:

  • Rome (Vatican City is a hole within it)
  • New South Wales (Australian Capital Territory is a hole within it)
  • South Africa (Lesotho is a hole within it)

The Indexing Geo Shapes documentation page (which we’ve referred to earlier) explains how to account for holes in polygons you index. Let’s see how this works using a practical example.

If you look at New South Wales, Australia in Google Maps, you’ll notice the Australian Capital Territory state inside it. Using Jeremy Hawes’ aforementioned polygon tool, we can draw a very rough polygon for New South Wales.

This gives us the following coordinates (lat, lon) for New South Wales:

-28.92704,141.04445
-33.97411,141.00841
-37.51381,149.94544
-34.98252,150.7789
-32.70393,152.18365
-28.24141,153.49901
-28.98426,148.87874 

We will also need a polygon for Australian Capital Territory. Again, this will be a really rough approximation just for the sake of example.

Our coordinates for Australian Capital Territory are:

-35.91185,149.05898
-35.36119,149.14473
-35.31932,149.40076
-35.11429,149.09984
-35.3126,148.80286
-35.71989,148.81557 

Next, we’ll index Australian Capital Territory. This is nothing new, but remember that we must take care to swap the coordinates so that they become (lon, lat), and close the polygon by repeating the first coordinate pair at the end.

PUT /regions/region/act
{
    "name" : "Australian Capital Territory",
    "location" : {
        "type" : "polygon", 
        "coordinates" : [[ 
            [ 149.05898,-35.91185 ],
            [ 149.14473,-35.36119 ],
            [ 149.40076,-35.31932 ],
            [ 149.09984,-35.11429 ],
            [ 148.80286,-35.3126  ],
            [ 148.81557,-35.71989 ],
            [ 149.05898,-35.91185 ]
        ]]
    }
}

For New South Wales, we do something special: we give it two polygons.

PUT /regions/region/nsw
{
    "name" : "New South Wales",
    "location" : {
        "type" : "polygon", 
        "coordinates" : [
            [
                [ 141.04445,-28.92704 ],
                [ 141.00841,-33.97411 ],
                [ 149.94544,-37.51381 ],
                [ 150.7789, -34.98252 ],
                [ 152.18365,-32.70393 ],
                [ 153.49901,-28.24141 ],
                [ 148.87874,-28.98426 ],
                [ 141.04445,-28.92704 ]              
            ],
            [ 
                [ 149.05898,-35.91185 ],
                [ 149.14473,-35.36119 ],
                [ 149.40076,-35.31932 ],
                [ 149.09984,-35.11429 ],
                [ 148.80286,-35.3126  ],
                [ 148.81557,-35.71989 ],
                [ 149.05898,-35.91185 ]
            ]
        ]
    }
}

The first polygon is the New South Wales polygon. The second is the one for Australian Capital Territory. The way Elasticsearch interprets this is that the first polygon is the main one; all subsequent ones are holes in the main polygon.

Once this has also been indexed, we can test it. Remember to swap your coordinates – Google Maps uses (lat, lon) whereas Elasticsearch uses (lon, lat). Let’s take a point in New South Wales – somewhere in Sydney, for instance.

Our point was correctly identified as being in New South Wales. Now, let’s take a point in Canberra so that we can test out Australian Capital Territory.

Elasticsearch correctly returned Australian Capital Territory in the results. What is even more significant is that it did not return New South Wales, which it would otherwise have done had we not specified the hole when we indexed it.

Summary

After a brief introduction to geocoordinates and geopolygons, we saw how we can index geopolygons in Elasticsearch and then run queries to find out in which polygon(s) a point belongs. In a slightly more advanced scenario, we saw how to deal with polygons that have holes.

Asynchronous RabbitMQ Consumers in .NET

It’s quite common to do some sort of I/O operation (e.g. REST call) whenever a message is consumed by a RabbitMQ client. This should be done asynchronously, but it’s not as simple as changing the event handling code to async void.

In “The Dangers of async void Event Handlers“, I explained how making an event handler async void will mess up the message order, because the dispatcher loop will not wait for a message to be fully processed before calling the handler on the next one.

While that article provided a workaround that is great to use with older versions of the RabbitMQ Client library, it turns out that there is an AsyncEventingBasicConsumer as from RabbitMQ.Client 5.0.0-pre3 which works great for asynchronous message consumption.

AsyncEventingBasicConsumer Example

First, we need to make sure that the RabbitMQ client library is installed.

Install-Package RabbitMQ.Client

Then, we can set up a publisher and consumer to show how to use the AsyncEventingBasicConsumer. Since this is just a demonstration, we can have both in the same process:

        static void Main(string[] args)
        {
            var factory = new ConnectionFactory() { DispatchConsumersAsync = true };
            const string queueName = "myqueue";

            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                channel.QueueDeclare(queueName, true, false, false, null);

                // consumer

                var consumer = new AsyncEventingBasicConsumer(channel);
                consumer.Received += Consumer_Received;
                channel.BasicConsume(queueName, true, consumer);

                // publisher

                var props = channel.CreateBasicProperties();
                int i = 0;

                while (true)
                {
                    var messageBody = Encoding.UTF8.GetBytes($"Message {++i}");
                    channel.BasicPublish("", queueName, props, messageBody);
                    Thread.Sleep(50);
                }
            }
        }

There is nothing really special about the above code except that we’re using AsyncEventingBasicConsumer instead of EventingBasicConsumer, and that the ConnectionFactory is now being set up with a suspicious-looking DispatchConsumersAsync property set to true. The ConnectionFactory is using defaults, so it will connect to localhost using the guest account.

The message handler is expected to return Task, and this makes it very easy to use proper asynchronous code:

        private static async Task Consumer_Received(object sender, BasicDeliverEventArgs @event)
        {
            var message = Encoding.UTF8.GetString(@event.Body);

            Console.WriteLine($"Begin processing {message}");

            await Task.Delay(250);

            Console.WriteLine($"End processing {message}");
        }

The messages are indeed processed in order.

How to Mess This Up

Remember that DispatchConsumersAsync property? I haven’t really found any documentation explaining what it actually does, but we can venture a guess after a couple of experiments.

First, let’s keep that property, but use a synchronous EventingBasicConsumer instead (which also means changing the event handler to have a void return type). When we run this, we get an error.

It says “In the async mode you have to use an async consumer”. Which I suppose is fair enough.

So now, let’s go back to using an AsyncEventingBasicConsumer, but leave out the DispatchConsumersAsync property:

var factory = new ConnectionFactory();

This time, you’ll see that the event handler is not firing (nothing is being written to the console). The messages are indeed being published, yet the queue remains at zero messages, so they are being consumed (you’ll see them accumulate if you disable the consumer).

This is actually quite dangerous, yet there is no error like the one we saw earlier. It means that if a developer forgets to set that DispatchConsumersAsync property, then all messages are lost. It’s also quite strange that the choice of how to dispatch messages to the consumer (i.e. sync or async) is a property of the connection rather than the consumer, although presumably it would be a result of some internal plumbing in the RabbitMQ Client library.

Summary

AsyncEventingBasicConsumer is great for having pure asynchronous RabbitMQ consumers, but don’t forget that DispatchConsumersAsync property.

It’s only available since RabbitMQ.Client 5.0.0-pre3, so if you’re on an older version, use the workaround described in “The Dangers of async void Event Handlers” instead.

Simple Ultima-Style Dialogue Engine in C#

The Ultima series is one of the most influential RPG series of all time. It is known for open worlds, intricate plots, ethical choices as opposed to “just kill the bad guy”, and… dialogue. The dialogue of the Ultima series went from being simple one-liners to complex dialogue trees with scripted side-effects.

Ultima 4-6, as well as the two Worlds of Ultima games (which used the Ultima 6 engine), used a simple keyword-based dialogue engine.

In these games, conversing with NPCs (people) involved typing in a number of common keywords such as “name” or “job”, and entering new keywords based on their responses in order to develop the conversation. Only the first four characters were taken into consideration, so “batt” and “battle” would yield the same result. “Bye” or an empty input ends the conversation, and any unrecognised keyword results in a fixed default response.

In Ultima 4, conversations were restricted to “name”, “job”, “health”, as well as two other NPC-specific keywords. For each NPC, one keyword would also trigger a question, to which you had to answer “yes” or “no”, and the NPC would respond differently based on your answer. You can view transcripts for or interact with almost all Ultima 4 dialogues on my oldest website, Dino’s Ultima Page, to get an idea how this works.

Later games improved this dialogue engine by highlighting keywords, adding more NPC-specific keywords, allowing multiple keywords to point to the same response, and causing side effects such as the NPC giving you an item.

If we focus on the core aspects of the dialogue engine, it is really simple to build something similar in just about any programming language you like. In C#, we could use a dictionary to hold the input keywords and the matching responses:

            var dialogue = new Dictionary<string, string>()
            {
                ["name"] = "My name is Tom.",
                ["job"] = "I chase Jerry.",
                ["heal"] = "I am hungry!",
                ["jerr"] = "Jerry the mouse!",
                ["hung"] = "I want to eat Jerry!",
                ["bye"] = "Goodbye!",
                ["default"] = "What do you mean?"
            };

We then loop until the conversation is over:

            string input = null;
            bool done = false;

            while (!done)
            {
                // the rest of the code goes here
            }

We accept input, and then process it to make it really easy to just index the dictionary later:

                Console.Write("You say: ");
                input = Console.ReadLine().Trim().ToLowerInvariant();
                if (input.Length > 4)
                    input = input.Substring(0, 4);

Whitespace around the input is trimmed off, and the input is converted to lowercase to match how we are storing the keywords in the dictionary’s keys. If the input is longer than 4 characters, we truncate it to the first four characters.

                if (input == string.Empty)
                    input = "bye";

                if (input == "bye")
                    done = true;

An empty input or “bye” will break out of the loop, ending the conversation.

                if (dialogue.ContainsKey(input))
                    Console.WriteLine(dialogue[input]);
                else
                    Console.WriteLine(dialogue["default"]);

The above code is the heart of the dialogue engine. It simply checks whether the input matches a known keyword. If it does, it returns the corresponding response. If not, it returns the “default” response. Note that this “default” response could not otherwise be obtained by normal means (for example, typing “default” as input) since the input is always being truncated to a maximum of four characters.

As you can see, it takes very little to add a really simple dialogue engine to your game. This might not have all the features that the Ultima games had, but serves as an illustration on how to get started.
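For example, supporting multiple keywords that map to the same response (as the later games did) could be as simple as a hypothetical alias lookup before indexing the dictionary. A quick sketch:

            var aliases = new Dictionary<string, string>()
            {
                ["mous"] = "jerr", // "mouse" gives the same response as "jerry"
                ["food"] = "hung"
            };

            if (aliases.TryGetValue(input, out var canonical))
                input = canonical;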

The source code for this article is in the UltimaStyleDialogue folder at the Gigi Labs BitBucket repository.

Compressing Strings Using GZip in C#

Compressing data is a great way to reduce its size. This helps us reduce storage requirements as well as the bandwidth and latency of network transmissions.

There are many different compression algorithms, but here, we’ll focus on GZip. We will use the .NET Framework’s own GZipStream class (in the System.IO.Compression namespace), although it is also possible to use a third party library such as SharpZipLib. We’ll also focus explicitly on compressing and decompressing strings; the steps to deal with other types (such as byte arrays or streams) will be a little different.

Compressing Data with GZipStream

In its simplest form, GZipStream takes an underlying stream and a compression mode as parameters. The compression mode determines whether you want to compress or decompress; the underlying stream is manipulated according to that compression mode.

            string inputStr = "Hello world!";
            byte[] inputBytes = Encoding.UTF8.GetBytes(inputStr);

            using (var outputStream = new MemoryStream())
            {
                using (var gZipStream = new GZipStream(outputStream, CompressionMode.Compress))
                    gZipStream.Write(inputBytes, 0, inputBytes.Length);

                // TODO do something with the outputStream
            }

In the code above, we are using a memory stream as our underlying output stream. The GZipStream effectively wraps the output stream. When we write our input data into the GZipStream, it goes into the output stream as compressed data. By wrapping the write operation in a using block by itself, we ensure that the data is flushed.

Let’s add some code to take the bytes from the output stream and write them to the console window:

            string inputStr = "Hello world!";
            byte[] inputBytes = Encoding.UTF8.GetBytes(inputStr);

            using (var outputStream = new MemoryStream())
            {
                using (var gZipStream = new GZipStream(outputStream, CompressionMode.Compress))
                    gZipStream.Write(inputBytes, 0, inputBytes.Length);

                var outputBytes = outputStream.ToArray();

                var outputStr = Encoding.UTF8.GetString(outputBytes);
                Console.WriteLine(outputStr);

                Console.ReadLine();
            }

The output of this may be a little bit surprising.

The bytes resulting from the GZip compression are actually binary data. They are not intelligible when rendered, and may also cause problems when transmitted over a network (due to byte ordering, for instance). One way to deal with this is to encode the compressed bytes in base64:

            string inputStr = "Hello world!";
            byte[] inputBytes = Encoding.UTF8.GetBytes(inputStr);

            using (var outputStream = new MemoryStream())
            {
                using (var gZipStream = new GZipStream(outputStream, CompressionMode.Compress))
                    gZipStream.Write(inputBytes, 0, inputBytes.Length);

                var outputBytes = outputStream.ToArray();

                var outputbase64 = Convert.ToBase64String(outputBytes);
                Console.WriteLine(outputbase64);

                Console.ReadLine();
            }

Update 28th January 2018: As some people pointed out, it is not necessary to base64-encode compressed data, and it will transmit fine over a network even without it. However, I do recall having issues transmitting binary compressed data via RabbitMQ, so you may want to apply base64 encoding as needed in order to render compressed data or work around issues.

Base64, however, is far from a compact representation. In this specific example, the length of the output string goes from 32 bytes (binary) to 44 (base64), reducing the effectiveness of compression. However, for larger strings, this still represents significant savings over the plain, uncompressed string.

Which brings us to the next question: why is our compressed data much larger than our uncompressed data (12 bytes)? While I don’t know how the GZip algorithm works internally, compression algorithms generally work best on larger data where there is a lot of repetition. On a very small string, the overhead required to represent the compressed format’s internal data structures dwarfs the data itself, negating the benefits of compression. Thus, compression should typically be applied only to data whose length exceeds an arbitrary threshold.
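In practice, that guard can be a simple length check. The threshold and helper below are purely illustrative (Compress() stands for the GZipStream code shown earlier):

            const int compressionThreshold = 200; // arbitrary; tune for your data

            byte[] payload = inputBytes.Length >= compressionThreshold
                ? Compress(inputBytes) // hypothetical helper wrapping the code above
                : inputBytes;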

Decompressing Data with GZipStream

When decompressing, the underlying stream is an input stream. The GZipStream still wraps it, but the flow is inverted so that when you read data from the GZipStream, it translates compressed data into uncompressed data.

The basic workflow looks something like this:

            string inputStr = "H4sIAAAAAAAAC/NIzcnJVyjPL8pJUQQAlRmFGwwAAAA=";
            byte[] inputBytes = Convert.FromBase64String(inputStr);

            using (var inputStream = new MemoryStream(inputBytes))
            using (var gZipStream = new GZipStream(inputStream, CompressionMode.Decompress))
            {
                // TODO read the gZipStream
            }

There are different ways to implement this, even if we just focus on decompressing from a string to a string. However, a low-level buffer read such as the following will not work:
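                // Illustrative sketch of the kind of read that fails:
                var buffer = new byte[gZipStream.Length]; // throws NotSupportedException
                gZipStream.Read(buffer, 0, buffer.Length);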

The Length property is not supported in a GZipStream, so the above code gives a runtime error. We cannot use the length of the inputStream in its stead because it will generally not be the same (it does match for this “Hello World!” example, but it won’t if you try a longer string). Rather than read the entire length of the buffer, you could read block by block until you reach the end of the stream. But that’s more work than you need, and I’m lazy.

One way to get this working with very little effort is to introduce a third stream, and copy the GZipStream into it:

            string inputStr = "H4sIAAAAAAAAC/NIzcnJVyjPL8pJUQQAlRmFGwwAAAA=";
            byte[] inputBytes = Convert.FromBase64String(inputStr);

            using (var inputStream = new MemoryStream(inputBytes))
            using (var gZipStream = new GZipStream(inputStream, CompressionMode.Decompress))
            using (var outputStream = new MemoryStream())
            {
                gZipStream.CopyTo(outputStream);
                var outputBytes = outputStream.ToArray();

                string decompressed = Encoding.UTF8.GetString(outputBytes);

                Console.WriteLine(decompressed);
                Console.ReadLine();
            }

An even more concise approach is to use StreamReader:

            string inputStr = "H4sIAAAAAAAAC/NIzcnJVyjPL8pJUQQAlRmFGwwAAAA=";
            byte[] inputBytes = Convert.FromBase64String(inputStr);

            using (var inputStream = new MemoryStream(inputBytes))
            using (var gZipStream = new GZipStream(inputStream, CompressionMode.Decompress))
            using (var streamReader = new StreamReader(gZipStream))
            {
                var decompressed = streamReader.ReadToEnd();

                Console.WriteLine(decompressed);
                Console.ReadLine();
            }

…and without too much effort, we have our decompressed output.

Now again, your mileage may vary depending on what you’re doing. For instance, you might opt to use the asynchronous versions of stream manipulation methods if you’re dealing with streams that aren’t memory streams (e.g. a file). Or you might want to work exclusively with bytes rather than converting back to a string. In any case, hopefully the code in this article will give you a head start when you need to compress and decompress some data.

Abstracting RabbitMQ RPC with TaskCompletionSource

I recently wrote about TaskCompletionSource, a little-known tool in .NET that is great for transforming arbitrary asynchrony into the Task-Based Asynchronous Pattern. That means you can hide the whole thing behind a simple and elegant async/await.

In this article, we’ll see this in practice as we implement the Remote Procedure Call (RPC) pattern in RabbitMQ. This is a fancy way of saying request/response, except that it all happens asynchronously! That’s right. No blocking.

The source code for this article is in the RabbitMqRpc folder at the Gigi Labs BitBucket Repository.

The RabbitMQ.Client NuGet package is necessary to make this code work. The client is written using an asynchronous Main() method, which requires at least C# 7.1 to compile.

RabbitMQ RPC Overview

You can think of RPC as request/response communication. We have a client asking a server to process some input and return the output in its response. However, this all happens asynchronously. The client sends the request on a request queue and forgets about it, rather than waiting for the response. Eventually, the server will (hopefully) process the request and send a response message back on a response queue.

The request and response can be matched on the client side by attaching a CorrelationId to both the request and the response.

In this context, we don’t really talk about publishers and consumers, as is typical when talking about messaging frameworks. That’s because in order to make this work, both the client and the server must have both a publisher and a consumer.

Client: Main Program

For our client application, we’ll have the following main program code. We will implement an RpcClient that will hide the request/response plumbing behind a simple Task that we then await:

        static async Task Main(string[] args)
        {
            Console.Title = "RabbitMQ RPC Client";

            using (var rpcClient = new RpcClient())
            {
                Console.WriteLine("Press ENTER or Ctrl+C to exit.");

                while (true)
                {
                    string message = null;

                    Console.Write("Enter a message to send: ");
                    using (var colour = new ScopedConsoleColour(ConsoleColor.Blue))
                        message = Console.ReadLine();

                    if (string.IsNullOrWhiteSpace(message))
                        break;
                    else
                    {
                        var response = await rpcClient.SendAsync(message);

                        Console.Write("Response was: ");
                        using (var colour = new ScopedConsoleColour(ConsoleColor.Green))
                            Console.WriteLine(response);
                    }
                }
            }
        }

The program continuously asks for input, and sends that input as the request message. The server will process this message and return a response. Note that we are using the ScopedConsoleColour class from my “Scope Bound Resource Management in C#” article to colour certain sections of the output. Here is a taste of what it will look like.

While this console application only allows us to send one request at a time, the underlying approach is really powerful with APIs that can concurrently serve a multitude of clients. It is asynchronous and can scale pretty well, yet the consuming code sees none of the underlying complexity.

Client: Request Sending

The heart of this abstraction is the RpcClient class. In the constructor, we set up the typical plumbing: create a connection, channel, queues, and a consumer.

    public class RpcClient : IDisposable
    {
        private bool disposed = false;
        private IConnection connection;
        private IModel channel;
        private EventingBasicConsumer consumer;
        private ConcurrentDictionary<string,
            TaskCompletionSource<string>> pendingMessages;

        private const string requestQueueName = "requestqueue";
        private const string responseQueueName = "responsequeue";
        private const string exchangeName = ""; // default exchange

        public RpcClient()
        {
            var factory = new ConnectionFactory() { HostName = "localhost" };

            this.connection = factory.CreateConnection();
            this.channel = connection.CreateModel();

            this.channel.QueueDeclare(requestQueueName, true, false, false, null);
            this.channel.QueueDeclare(responseQueueName, true, false, false, null);

            this.consumer = new EventingBasicConsumer(this.channel);
            this.consumer.Received += Consumer_Received;
            this.channel.BasicConsume(responseQueueName, true, consumer);

            this.pendingMessages = new ConcurrentDictionary<string,
                TaskCompletionSource<string>>();
        }

        // ...
    }

A few other things to notice here:

  1. We are keeping a dictionary that allows us to match responses with the requests that generated them, based on a CorrelationId. We have already seen this approach in “TaskCompletionSource by Example”.
  2. This class implements IDisposable, as it has several resources that need to be cleaned up. While I don’t show the code for this for brevity’s sake, you can find it in the source code; a minimal sketch follows this list.
  3. We are not using exchanges here, so using an empty string for the exchange name allows us to use the default exchange and publish directly to the queue.
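As promised, here is a minimal sketch of that cleanup, assuming the fields shown above (the full version is in the linked repository):

        public void Dispose()
        {
            if (!this.disposed)
            {
                this.channel?.Dispose();
                this.connection?.Dispose();
                this.disposed = true;
            }
        }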

The SendAsync() method, which we saw being used in the main program, is implemented as follows:

        public Task<string> SendAsync(string message)
        {
            var tcs = new TaskCompletionSource<string>();
            var correlationId = Guid.NewGuid().ToString();

            this.pendingMessages[correlationId] = tcs;

            this.Publish(message, correlationId);

            return tcs.Task;
        }

Here, we are generating a GUID to use as a CorrelationId, and we are adding an entry in the dictionary for this request. This dictionary maps the CorrelationId to a corresponding TaskCompletionSource. When the response arrives, the consumer callback will set the result on this TaskCompletionSource, which enables the underlying task to complete. We return this underlying task, and that’s what the main program awaits. The main program will not be able to continue until the response is received.

In this method, we are also calling a private Publish() method, which takes care of the details of publishing to the request queue on RabbitMQ:

        private void Publish(string message, string correlationId)
        {
            var props = this.channel.CreateBasicProperties();
            props.CorrelationId = correlationId;
            props.ReplyTo = responseQueueName;

            byte[] messageBytes = Encoding.UTF8.GetBytes(message);
            this.channel.BasicPublish(exchangeName, requestQueueName, props, messageBytes);

            using (var colour = new ScopedConsoleColour(ConsoleColor.Yellow))
                Console.WriteLine($"Sent: {message} with CorrelationId {correlationId}");
        }

While this publishing code is for the most part pretty standard, we are using two particular properties that are especially suited for the RPC pattern. The first is CorrelationId, where we store the CorrelationId we generated earlier, and which the server will copy and send back as part of the response, enabling this whole orchestration. The second is the ReplyTo property, which is used to indicate to the server on which queue it should send the response. We don’t need it for this simple example since we are always using the same response queue, but this property enables the server to dynamically route responses where they are needed.

Server

The request eventually reaches a server which has a consumer waiting on the request queue. Its Main() method is mostly plumbing that enables this consumer to work:

        private static IModel channel;

        static void Main(string[] args)
        {
            Console.Title = "RabbitMQ RPC Server";

            var factory = new ConnectionFactory() { HostName = "localhost" };

            using (var connection = factory.CreateConnection())
            {
                using (channel = connection.CreateModel())
                {
                    const string requestQueueName = "requestqueue";
                    channel.QueueDeclare(requestQueueName, true, false, false, null);

                    // consumer

                    var consumer = new EventingBasicConsumer(channel);
                    consumer.Received += Consumer_Received;
                    channel.BasicConsume(requestQueueName, true, consumer);

                    Console.WriteLine("Waiting for messages...");
                    Console.WriteLine("Press ENTER to exit.");
                    Console.WriteLine();
                    Console.ReadLine();
                }
            }
        }

When a message is received, the Consumer_Received event handler processes the message:

        private static void Consumer_Received(object sender, BasicDeliverEventArgs e)
        {
            var requestMessage = Encoding.UTF8.GetString(e.Body);
            var correlationId = e.BasicProperties.CorrelationId;
            string responseQueueName = e.BasicProperties.ReplyTo;

            Console.WriteLine($"Received: {requestMessage} with CorrelationId {correlationId}");

            var responseMessage = Reverse(requestMessage);
            Publish(responseMessage, correlationId, responseQueueName);
        }

In this example, the server’s job is to reverse whatever messages it receives. Thus, each response will contain the same message as in the corresponding request, but backwards. This reversal code is taken from this Stack Overflow answer. Although trivial to implement, this serves as a reminder that there’s no need to reinvent the wheel if somebody already implemented the same thing (and quite well, at that) before you.

        public static string Reverse(string s)
        {
            char[] charArray = s.ToCharArray();
            Array.Reverse(charArray);
            return new string(charArray);
        }

Having computed the reverse of the request message, and extracted both the CorrelationId and ReplyTo properties, these are all passed to the Publish() method which sends back the response:

        private static void Publish(string responseMessage, string correlationId,
            string responseQueueName)
        {
            byte[] responseMessageBytes = Encoding.UTF8.GetBytes(responseMessage);

            const string exchangeName = ""; // default exchange
            var responseProps = channel.CreateBasicProperties();
            responseProps.CorrelationId = correlationId;

            channel.BasicPublish(exchangeName, responseQueueName, responseProps, responseMessageBytes);

            Console.WriteLine($"Sent: {responseMessage} with CorrelationId {correlationId}");
            Console.WriteLine();
        }

The response is sent back on the queue specified in the ReplyTo property of the request message. The response is also given the same CorrelationId as the request; that way the client will know that this response is for that particular request.

Client: Response Handling

When the response arrives, the client’s own consumer event handler will run to process it:

        private void Consumer_Received(object sender, BasicDeliverEventArgs e)
        {
            var correlationId = e.BasicProperties.CorrelationId;
            var message = Encoding.UTF8.GetString(e.Body);

            using (var colour = new ScopedConsoleColour(ConsoleColor.Yellow))
                Console.WriteLine($"Received: {message} with CorrelationId {correlationId}");

            this.pendingMessages.TryRemove(correlationId, out var tcs);
            if (tcs != null)
                tcs.SetResult(message);
        }

The client extracts the CorrelationId from the response, and uses it to get the TaskCompletionSource for the corresponding request. If the TaskCompletionSource is found, then its result is set to the content of the response. This causes the underlying task to complete, and thus the caller awaiting that task will be able to resume and work with the result.

If the TaskCompletionSource is not found, then we ignore the response, and there is a reason for this:

“You may ask, why should we ignore unknown messages in the callback queue, rather than failing with an error? It’s due to a possibility of a race condition on the server side. Although unlikely, it is possible that the RPC server will die just after sending us the answer, but before sending an acknowledgment message for the request. If that happens, the restarted RPC server will process the request again. That’s why on the client we must handle the duplicate responses gracefully, and the RPC should ideally be idempotent.” — RabbitMQ RPC tutorial

Demo

If we run both the client and server, we can enter messages in the client, one by one. The client publishes each message on the request queue and waits for the response; when the response arrives, the consumer callback sets the result of that request’s TaskCompletionSource, allowing the main program to continue.

Summary

What we have seen in this article is the same material I had explained in “TaskCompletionSource by Example“, but with a real application to RabbitMQ.

A TaskCompletionSource has an underlying Task that can represent a pending request. By giving each request an ID, you can keep track of it as the corresponding response should carry the same ID. A mapping between request IDs and TaskCompletionSource can easily be kept in a dictionary. When a response arrives, its corresponding entry in the dictionary can be found, and the Task can be completed. Any client code awaiting this Task may then resume.

SignalR Core: Hello World

SignalR is a library that brought push notifications to ASP .NET web applications. It abstracted away the complexity of dealing with websockets and other front-end technologies necessary for a web application to spontaneously push out updates to client applications, and provided an easy programming model.

Essentially, SignalR allows us to implement publish/subscribe on the server. Clients, which are typically (but not necessarily) webpages, subscribe to a hub, which can then push updates to them. These updates can be sent spontaneously by the server (e.g. stock ticker) or triggered by a message from a client (e.g. chat).

The old SignalR, however, is not compatible with ASP .NET Core. So if you wanted to have push notifications in your web application, you had to look elsewhere… until recently. Microsoft shipped their first alpha release of SignalR Core (SignalR for ASP .NET Core 2.0) a few weeks ago, and the second alpha was released just yesterday. They also have some really nice samples we can learn from.

This article explains how to quickly get started with SignalR Core, by means of a simple Hello World application that combines a simple server-side hub with a trivial JavaScript client. It is essentially the first example from my “Getting Started with SignalR“, ported to SignalR Core.

The source code for this article is in the SignalRCoreHello folder at the Gigi Labs BitBucket Repository.

Hello SignalR Core: Server Side

This example is based on SignalR Core alpha 2, and uses ASP .NET Core 2 targeting .NET Core 2. As this is pre-release software, APIs may change.

Let’s start off by creating a new ASP .NET Core Web Application in Visual Studio 2017. We can keep things simple by using the Empty project template.

This project template should come with a reference to the Microsoft.AspNetCore.All NuGet package, giving you most of what you need to create our web application.

In addition to that, we’ll need to install the NuGet package for SignalR. Note that we need the -Pre switch for now because it is still prerelease.

Install-Package Microsoft.AspNetCore.SignalR -Pre

Next, let’s add a Hub. Just add a new class:

    public class HelloHub : Hub
    {
        public Task BroadcastHello()
        {
            return Clients.All.InvokeAsync("hello");
        }
    }

In SignalR Core, a class that inherits from Hub is able to communicate with any clients that are subscribed to it. This can be done in several ways: broadcast to all clients or all except one; send to a single client; or send to a specific group. In this case, we’re simply broadcasting a “hello” message to all clients.
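As a rough sketch of those other options (member names as of alpha 2, so they may well change in later releases):

    public class HelloHub : Hub
    {
        // Broadcast to everyone, as used in this article
        public Task BroadcastHello()
        {
            return Clients.All.InvokeAsync("hello");
        }

        // Send only to the calling client
        public Task HelloCaller()
        {
            return Clients.Client(Context.ConnectionId).InvokeAsync("hello");
        }

        // Send to a named group
        public Task HelloGroup(string groupName)
        {
            return Clients.Group(groupName).InvokeAsync("hello");
        }
    }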

In the Startup class, we need to remove the default “Hello world” code and register our Hub instead. It should look something like this:

    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSignalR();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseFileServer();

            app.UseSignalR(routes =>
            {
                routes.MapHub<HelloHub>("hello");
            });
        }
    }

UseSignalR() is where we register the route by which our Hub will be accessed from the client side. UseFileServer() is there just to serve the upcoming HTML and JavaScript.

Hello SignalR Core: Client Side

In order to have a webpage that talks to our Hub, we first need a couple of scripts. We’ll get these using npm, which you can obtain by installing Node.js if you don’t have it already.

npm install @aspnet/signalr-client
npm install jquery

The first package is the client JavaScript for SignalR Core. At the time of writing this article, the file you need is called signalr-client-1.0.0-alpha2-final.js. The second package is jQuery, which is no longer required by SignalR Core, but will make life easier for our front-end code. Copy both signalr-client-1.0.0-alpha2-final.js and jquery.js into the wwwroot folder.

Next, add an index.html file in the wwwroot folder. Add references to the aforementioned scripts, a placeholder for messages (with ID “log” in this example), and a little script to wire things up:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Hello SignalR Core!</title>
    <script src="jquery.js"></script>
    <script src="signalr-client-1.0.0-alpha2-final.js"></script>
    <script type="text/javascript">
            $(document).ready(function () {
                var connection = new signalR.HubConnection('/hello');

                connection.on('hello', data => {
                    $("#log").append("Hello <br />");
                });

                connection.start()
                    .then(() => connection.invoke('BroadcastHello'));
            });
    </script>
</head>
<body>
    <div id="log"></div>
</body>
</html>

This JavaScript establishes a connection to the hub, registers a callback for when a “hello” message is received, and calls the BroadcastHello() method on the hub. Given the way we implemented our Hub earlier, this will broadcast a “hello” message to all connected clients.

Let’s give that a try now.

Good! The connection is established, and we’re getting something back from the server (i.e. the Hub). Let’s open a couple more browser windows at the same endpoint.

Here, we can see that each time a new window was opened, a new “hello” message was broadcasted to all connected clients. Since we are not holding any state, messages are sent incrementally, so newer clients that missed earlier messages will be showing fewer messages.

The Chat Sample

If you want to see a more elaborate example, check out the Chat sample from the official SignalR Core samples.

The principle is the same, but the Chat sample is a little more interesting.

Adding Swagger to an ASP .NET Core 2 Web API

If you develop REST APIs, then you have probably heard of Swagger before.

Swagger is a tool that automatically documents your Web API, and provides the means to easily interact with it via an auto-generated UI.

In this article, we’ll see how to add Swagger to an ASP .NET Core Web API. We’ll be using the .NET Core 2 SDK, so if you’re using an earlier version, your mileage may vary. We’re only covering basic setup, so check out ASP.NET Web API Help Pages using Swagger in the Microsoft documentation if you want to go beyond that.

Project Setup

To make things easy, we’ll use one of the templates straight out of Visual Studio to get started. Go to File -> New -> Project… and select the ASP .NET Core Web Application template.

Next, pick the Web API template. You may want to change the second dropdown to ASP .NET Core 2.0, as it is currently not set to that by default.

Adding Swagger

You should now have a simple Web API project template with a ValuesController, providing an easy way to play with Web API out of the box.

To add Swagger, we first need to add a package:

Install-Package Swashbuckle.AspNetCore

Then, we throw in some configuration in Startup.cs to make Swagger work. Replace “My API” with whatever your API is called.

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();

            services.AddSwaggerGen(c =>
            {
                c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" });
            });
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseSwagger();

            app.UseSwaggerUI(c =>
            {
                c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
            });

            app.UseMvc();
        }

Note: if you’re starting with a more minimal project template, it is possible that you may need to install the Microsoft.AspNetCore.StaticFiles package for this to work. This package is already present in the project template we’re using above.

Accessing Swagger

Let’s now run the web application. We should see a basic JSON response from the ValuesController.

If we now change the URL to http://localhost:<port>/swagger/, then we get to the Swagger UI.

Here, we can see a list of all our controllers and their actions. We can also open them up to interact with them.

Summary

That’s all it takes to add Swagger to your ASP .NET Core Web API.

  1. Add the Swashbuckle.AspNetCore package.
  2. Configure Swagger in the startup class.
  3. Access Swagger from http://localhost:<port>/swagger/.

TaskCompletionSource by Example

In this article, we’ll learn how to use TaskCompletionSource. It’s one of those tools which you will rarely need to use, but when you do, you’ll be glad that you knew about it. Let’s dive right into it.

Basic Usage

The source code for this section is in the TaskCompletionSource1 folder at the Gigi Labs BitBucket Repository.

Let’s create a new console application, and in Main(), we’ll have my usual workaround for running asynchronous code in a console application:

        static void Main(string[] args)
        {
            Run();
            Console.ReadLine();
        }

In the Run() method, we have a simple example showing how TaskCompletionSource works:

        static async void Run()
        {
            var tcs = new TaskCompletionSource<bool>();

            var fireAndForgetTask = Task.Delay(5000)
                                        .ContinueWith(task => tcs.SetResult(true));

            await tcs.Task;
        }

TaskCompletionSource is just a wrapper for a Task, giving you control over its completion. Thus, a TaskCompletionSource<bool> will contain a Task<bool>, and you can set the bool result based on your own logic.

Here, we are using TaskCompletionSource as a synchronization mechanism. Our main thread spawns off an operation and waits for its result, using the Task in the TaskCompletionSource. Even if the operation is not Task-based, it can set the result of the Task in the TaskCompletionSource, allowing the main thread to resume its execution.

Let’s add some diagnostic code so that we can understand what’s going on from the output:

        static async void Run()
        {
            var stopwatch = Stopwatch.StartNew();

            var tcs = new TaskCompletionSource<bool>();

            Console.WriteLine($"Starting... (after {stopwatch.ElapsedMilliseconds}ms)");

            var fireAndForgetTask = Task.Delay(5000)
                                        .ContinueWith(task => tcs.SetResult(true));

            Console.WriteLine($"Waiting...  (after {stopwatch.ElapsedMilliseconds}ms)");

            await tcs.Task;

            Console.WriteLine($"Done.       (after {stopwatch.ElapsedMilliseconds}ms)");

            stopwatch.Stop();
        }

And here is the output:

Starting... (after 0ms)
Waiting...  (after 41ms)
Done.       (after 5072ms)

As you can see, the main thread waited until tcs.SetResult(true) was called; this triggered completion of the TaskCompletionSource’s underlying task (which the main thread was awaiting), and allowed the main thread to resume.

Aside from SetResult(), TaskCompletionSource offers methods to cancel a task (SetCanceled()) or fault it with an exception (SetException()). There are also safe Try...() equivalents, which return false instead of throwing if the task has already completed.
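
Here’s a minimal sketch of the difference, assuming nothing beyond the BCL:

        var tcs = new TaskCompletionSource<bool>();

        tcs.SetResult(true);               // completes the underlying task
        bool ok = tcs.TrySetResult(false); // false: the task is already complete
        // tcs.SetResult(false);           // this would throw InvalidOperationException

        var cancelled = new TaskCompletionSource<bool>();
        cancelled.SetCanceled();           // transitions the task to Canceled

        var faulted = new TaskCompletionSource<bool>();
        faulted.SetException(new InvalidOperationException("order failed"));
        // awaiting faulted.Task would now rethrow the exception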

SDK Example

The source code for this section is in the TaskCompletionSource2 folder at the Gigi Labs BitBucket Repository.

One scenario where I found TaskCompletionSource to be extremely well-suited is when you are given a third-party SDK which exposes events. Imagine this: you submit an order via an SDK method, and it gives you an ID for that order, but not the result. The SDK goes off and does whatever it needs to do, perhaps talking to an external service to have the order processed. When this eventually happens, the SDK fires an event to notify the calling application of whether the order was placed successfully.

We’ll use this as an example of the SDK code:

    public class MockSdk
    {
        public event EventHandler<OrderOutcome> OnOrderCompleted;

        public Guid SubmitOrder(decimal price)
        {
            var orderId = Guid.NewGuid();

            // do a REST call over the network or something; the null-conditional
            // avoids a NullReferenceException if no one has subscribed yet
            Task.Delay(3000).ContinueWith(task => OnOrderCompleted?.Invoke(this,
                new OrderOutcome(orderId, true)));

            return orderId;
        }
    }

The OrderOutcome class is just a simple DTO:

    public class OrderOutcome
    {
        public Guid OrderId { get; set; }
        public bool Success { get; set; }

        public OrderOutcome(Guid orderId, bool success)
        {
            this.OrderId = orderId;
            this.Success = success;
        }
    }

Notice how MockSdk’s SubmitOrder() does not return any form of Task, so we can’t await it. This doesn’t necessarily mean that it’s blocking; it might be using another form of asynchrony, such as the Asynchronous Programming Model or a messaging framework used in request-response fashion (such as RPC over RabbitMQ).

At the end of the day, this is still asynchrony, and we can use TaskCompletionSource to build a Task-based Asynchronous Pattern abstraction over it (allowing the application to simply await the call).

First, we start building a simple proxy class that wraps the SDK:

    public class SdkProxy
    {
        private MockSdk sdk;

        public SdkProxy()
        {
            this.sdk = new MockSdk();
            this.sdk.OnOrderCompleted += Sdk_OnOrderCompleted;
        }

        private void Sdk_OnOrderCompleted(object sender, OrderOutcome e)
        {
            // TODO
        }
    }

We then add a dictionary, which allows us to relate each OrderId to its corresponding TaskCompletionSource. Using a ConcurrentDictionary instead of a normal Dictionary helps deal with multithreading scenarios without needing to lock:

        private ConcurrentDictionary<Guid,
            TaskCompletionSource<bool>> pendingOrders;
        private MockSdk sdk;

        public SdkProxy()
        {
            this.pendingOrders = new ConcurrentDictionary<Guid,
                TaskCompletionSource<bool>>();

            this.sdk = new MockSdk();
            this.sdk.OnOrderCompleted += Sdk_OnOrderCompleted;
        }

The proxy class exposes a SubmitOrderAsync() method:

        public Task SubmitOrderAsync(decimal price)
        {
            var orderId = sdk.SubmitOrder(price);

            Console.WriteLine($"OrderId {orderId} submitted with price {price}");

            var tcs = new TaskCompletionSource<bool>();
            this.pendingOrders.TryAdd(orderId, tcs);

            return tcs.Task;
        }

This method calls the underlying SubmitOrder() in the SDK, and uses the returned OrderId to add a new TaskCompletionSource in the dictionary. The TaskCompletionSource’s underlying Task is returned, so that the application can await it.

        private void Sdk_OnOrderCompleted(object sender, OrderOutcome e)
        {
            string successStr = e.Success ? "was successful" : "failed";
            Console.WriteLine($"OrderId {e.OrderId} {successStr}");

            // Guard against events for unknown or already-completed orders
            if (this.pendingOrders.TryRemove(e.OrderId, out var tcs))
                tcs.SetResult(e.Success);
        }

When the SDK fires a completion event, the proxy removes the TaskCompletionSource from the pending orders dictionary and sets its result. The application awaiting the underlying task will then resume and can act on the outcome.

We can test this with the following program code in a console application:

        static void Main(string[] args)
        {
            Run();
            Console.ReadLine();
        }

        static async void Run()
        {
            var sdkProxy = new SdkProxy();

            await sdkProxy.SubmitOrderAsync(10);
            await sdkProxy.SubmitOrderAsync(20);
            await sdkProxy.SubmitOrderAsync(5);
            await sdkProxy.SubmitOrderAsync(15);
            await sdkProxy.SubmitOrderAsync(4);
        }

The output shows that the program did indeed wait for each order to complete before starting the next one:

OrderId 3e2d4577-8bbb-46b7-a5df-2efec23bae6b submitted with price 10
OrderId 3e2d4577-8bbb-46b7-a5df-2efec23bae6b was successful
OrderId e22425b9-3aa3-48db-a40f-8b8cfbdcd3af submitted with price 20
OrderId e22425b9-3aa3-48db-a40f-8b8cfbdcd3af was successful
OrderId 3b5a2602-a5d2-4225-bbdb-10642a63f7bc submitted with price 5
OrderId 3b5a2602-a5d2-4225-bbdb-10642a63f7bc was successful
OrderId ffd61cea-343e-4a9c-a76f-889598a45993 submitted with price 15
OrderId ffd61cea-343e-4a9c-a76f-889598a45993 was successful
OrderId b443462c-f949-49b9-a6f0-08bbbb82fe7e submitted with price 4
OrderId b443462c-f949-49b9-a6f0-08bbbb82fe7e was successful

Summary

Use TaskCompletionSource to adapt an arbitrary form of asynchrony to use Tasks, and enable elegant async/await usage.

Do not use it simply to expose an asynchronous wrapper for a synchronous method. You should either not do that at all, or use Task.FromResult() instead.

If you’re concerned that the asynchronous call might never resume, consider adding a timeout.
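
For example, here’s one minimal way to do that with Task.WhenAny(), reusing the SubmitOrderAsync() method from the proxy above (the 30-second limit is an arbitrary choice):

        var orderTask = sdkProxy.SubmitOrderAsync(10);
        var timeoutTask = Task.Delay(TimeSpan.FromSeconds(30));

        // Wait for whichever finishes first: the order or the timeout
        var completed = await Task.WhenAny(orderTask, timeoutTask);

        if (completed == timeoutTask)
            throw new TimeoutException("The SDK never fired a completion event.");

        await orderTask; // already complete; propagates any exception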

Common Mistakes in Asynchronous Programming with .NET

In the last few articles, we have seen how to work with asynchronous programming in C#. Although it is now easier than ever to write responsive applications that do asynchronous, non-blocking I/O operations, many people still use asynchronous programming incorrectly. A lot of this is due to confusion over usage of the Task class in .NET, which is used in multithreaded and parallel scenarios as well as asynchronous ones. To make matters worse, it is not obvious to everyone that these are actually different things.

So let’s address this concern first.

Update 16th October 2017: Several people have pointed out errors in this article with regards to the effect that blocking has on the CPU. I apologise for this, and have made corrections. Blocking does not hog the CPU, but prevents threads from doing other work while they wait. Thanks for clarifying the confusion, and I welcome any further corrections.

Asynchronous vs Multithreading etc

When we use a computer, many programs are running at the same time. Before the advent of multicore CPUs, this was achieved by having instructions from different processes (and eventually threads) running one at a time on the same CPU. These instructions are interleaved, and the CPU switches rapidly from one to another (in what we call a context switch), giving the illusion that they are running at the same time. This is concurrency.

CPUs with multiple cores have the additional ability of literally executing multiple instructions at the same time, which is parallel execution. Multithreading gives developers the ability to make full use of the available cores; without it, instructions from a single process would only be able to execute on a single core at a time.

“Tasks which are executing on distinct processors at any point in time are said to be running in parallel. It may also be possible to execute several tasks on a single processor. Over a period of time, the impression is given that they are running in parallel, when in fact, at any point in time, only one task has control of the processor. In this case, we say that the tasks are being performed concurrently, that is, their execution is being shared by the same processor.” — Practical Parallel Rendering, Chalmers et al, A K Peters, 2002.

In .NET, we can do parallel processing by using multithreading, or an abstraction thereof, such as the Task Parallel Library. Parallel processing is CPU-bound.

Parallel processing is often contrasted with distributed processing, where the computing resources are not physically tightly coupled. This is not, however, relevant to asynchronous programming, so we will not delve into it.

Operations that take a long time to execute will typically hold control of the thread in which they are running, in what we call a blocking operation. However, if these operations involve waiting for I/O to occur (e.g. waiting for results from a file, network or database), then the I/O could occur in a non-blocking fashion without holding the thread at all during the waiting time. We say that the I/O operation is asynchronous: the thread that is waiting for it does not actually wait, but may be reassigned to do other work until the I/O operation is complete.

Asynchronous non-blocking I/O is not enabled by multithreading. In fact, Stephen Cleary goes into detail about how this works in his excellent post, “There Is No Thread”. In brief, a mechanism known as I/O Completion Ports (IOCP) is used to notify a thread that its I/O request is ready; but that thread does not need to block (or indeed run at all) during the waiting time. This is what we enable when we do an asynchronous wait by means of the await keyword.
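
As a simple illustration of the difference, here’s a sketch of both approaches to reading a file (inside an async method; data.txt is a placeholder):

        // Blocking: the thread is held until the read completes
        using (var reader = new StreamReader("data.txt"))
        {
            string text = reader.ReadToEnd();
        }

        // Asynchronous: the thread is released while the OS performs the
        // I/O, and execution resumes when the data is available
        using (var reader = new StreamReader("data.txt"))
        {
            string text = await reader.ReadToEndAsync();
        }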

In order to write efficient code, it is fundamental to understand the nature of what the code is doing. Parallel CPU-based execution involves significant overheads in thread synchronization. It makes no sense to use Parallel.ForEach() for I/O-bound tasks, and many are also surprised to find that executing CPU-based tasks sequentially is often faster than doing them in parallel, especially when such tasks are fine-grained and do very trivial work. In such cases, the synchronization overheads dwarf the cost of executing that code directly on the CPU on a single thread.
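
As a sketch of picking the right tool for each kind of work (assuming an async context; Resize(), images, urls and httpClient are hypothetical):

        // CPU-bound work: parallelise across cores using threads
        Parallel.ForEach(images, image => Resize(image));

        // I/O-bound work: run the requests concurrently with tasks;
        // no threads are blocked while the downloads are in flight
        var downloads = urls.Select(url => httpClient.GetStringAsync(url));
        string[] pages = await Task.WhenAll(downloads);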

See also: “Asynchronous and Concurrent Processing in Akka .NET Actors”, which has a section on Asynchronous vs Concurrent Processing using simple tasks.

The Dangers of Blocking with Asynchronous Code

Asynchronous programming has two main benefits: scalability and offloading. If you block, then you are hogging resources (i.e. threads) that could be better used elsewhere. In an ASP .NET context, this means that a thread cannot service other requests (hurts scalability). In a GUI context, it means that the UI thread cannot be used for rendering because it is busy waiting for a long-running operation (so the work should be offloaded).

There are several methods which are part of the Task API which block, such as Wait(), WaitAny() and WaitAll(). The Result property also has the effect of blocking until the task is complete. Stephen Cleary has a table in his Async and Await intro showing these blocking API calls and how to turn them into asynchronous calls. Best practice is to await the asynchronous equivalent of the blocking method.

However, simply wasting threading resources is not the only problem with blocking. It is actually very easy to end up with a deadlock and stall your application. Stephen Cleary has an excellent explanation of how this happens, with two concise examples based on GUI applications (e.g. WPF or Windows Forms) and ASP .NET. I am only going to attempt to simplify the scenario and illustrate it with a diagram.

Consider the following code in a WPF application’s codebehind (MainWindow.xaml.cs):

        private Uri uri = new Uri("http://ip.jsontest.com/");

        public async Task WaitABit()
        {
            await Task.Delay(3000);
        }

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            var task = WaitABit();
            task.Wait();
        }

This results in deadlock. Let’s see why:

  1. Button_Click() calls WaitABit() without awaiting it. This starts a task in fire-and-forget mode (so far).
  2. WaitABit() calls Task.Delay() and awaits its result asynchronously. The thread does not block, since the execution has gone into “I/O mode” (technically a delay isn’t really I/O, but the idea is the same).
  3. In the meantime, Button_Click() resumes execution, and calls Wait(), effectively blocking until the result of WaitABit() is ready.
  4. The delay completes.
  5. WaitABit() should resume, but the UI thread that it’s supposed to run on is already blocked.
  6. The deadlock occurs because the continuation of WaitABit() needs to run on the UI thread, but the UI thread is blocked waiting for the result of WaitABit().

Note: I intentionally haven’t simplified WaitABit() such that it just returns the delay rather than doing a single-line async/await, as it would not deadlock. Can you guess why?

This example has shown blocking of the UI thread, but the concept stretches beyond GUI applications. In order to fully understand what is happening, we need to understand what a SynchronizationContext is. In short, it’s an abstraction of a threading model within an application. A GUI application needs a single UI thread for rendering and updating GUI components (although you can use other threads, they cannot touch the GUI directly). An ASP .NET application, on the other hand, handles requests using the thread pool. The SynchronizationContext is the abstraction that allows us to use the same multithreaded and asynchronous programming models across applications with fundamentally different internal threading models.

As a result, GUI applications (e.g. Windows Forms and WPF) and ASP .NET applications (those targeting the full .NET Framework) can deadlock. ASP .NET Core applications don’t have a SynchronizationContext, so they will not deadlock. Console applications won’t normally deadlock because the task continuation can just execute on another thread.

The deadlock occurs because the SynchronizationContext (e.g. the UI thread in the above example) is captured and used for the task continuation. However, we can prevent this from happening by using ConfigureAwait(false):

        public async Task WaitABit()
        {
            await Task.Delay(3000).ConfigureAwait(false);
        }

The GUI application does not deadlock now, because a thread pool thread can be used to execute the continuation. Once WaitABit() completes, the blocking Wait() in Button_Click() can resume on the same UI thread where it started. Any modifications to UI elements would work fine.

While ConfigureAwait(false) is no replacement for doing async/await all the way, it does have its benefits. Capturing the SynchronizationContext incurs a performance penalty, so if it is not actually necessary, library code should avoid it by using ConfigureAwait(false). Also, if application code must block on an asynchronous call for legacy reasons, then ConfigureAwait(false) would avoid the resulting deadlocks.

Of course, the real fix for our deadlock example is as easy as this (without ConfigureAwait(false)):

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            var task = WaitABit();
            await task;
        }

I’ve just replaced the blocking Wait() call with an await, and marked the method as async as a result.

Asynchronous Wrappers for Synchronous Methods

A lot of library APIs have pairs of synchronous and asynchronous methods, e.g. Read() and ReadAsync(), Write() and WriteAsync(), etc. If your library API is purely synchronous, then you should not expose asynchronous wrappers for the synchronous methods (e.g. by using Task.Run()).
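
For illustration only, here’s the kind of faux-async wrapper to avoid; Compute() stands in for any synchronous method:

        // Anti-pattern: a faux-async wrapper that merely moves the blocking
        // work to a thread pool thread, adding overhead without any benefit
        public Task<int> ComputeAsync()
        {
            return Task.Run(() => Compute());
        }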

Stephen Toub goes into detail on why this is a bad idea in the article quoted below, but I think the following paragraph summarises it best:

“If a developer needs to achieve better scalability, they can use any async APIs exposed, and they don’t have to pay additional overhead for invoking a faux async API. If a developer needs to achieve responsiveness or parallelism with synchronous APIs, they can simply wrap the invocation with a method like Task.Run.” — Stephen Toub, “Should I expose asynchronous wrappers for synchronous methods?”

Asynchronous Properties

You can’t use async/await in properties. You can use them indirectly via an asynchronous method, but it’s a rather weird thing to do.

“[You can’t use await i]nside of a property getter or setter. Properties are meant to return to the caller quickly, and thus are not expected to need asynchrony, which is geared for potentially long-running operations. If you must use asynchrony in your properties, you can do so by implementing an async method which is then used from your property.” — Stephen Toub, Async/Await FAQ
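
Here’s a minimal sketch of that indirect approach, with LoadDataAsync() as a hypothetical example:

        // The property returns quickly; the asynchronous work lives
        // in a separate async method that callers can await
        public Task<string> Data => LoadDataAsync();

        private async Task<string> LoadDataAsync()
        {
            await Task.Delay(1000); // stand-in for real I/O
            return "data";
        }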

async void methods

async void methods should only be used for top-level event handlers. In “The Dangers of async void Event Handlers”, I explain the general dangers of async void methods (mainly related to not having a task that you can await), but I also demonstrate and solve the additional problem of async void event handlers interleaving while they run (which is problematic if you’re expecting to handle events in sequence, such as with a message queue).

    1. “There is no way for the caller to await completion of the method.
    2. “As a result of this, async void calls are fire-and-forget.
    3. “Thus it follows that async void methods (including event handlers) will execute in parallel if called in a loop.
    4. “Exceptions can cause the application to crash (see the aforementioned article by Stephen Cleary for more on this).”

— Daniel D’Agostino, The Dangers of async void Event Handlers

Summary

  1. Parallel is CPU-based. Asynchronous is I/O-based. Don’t mix the two. (Running asynchronous I/O tasks “in parallel” is OK, as long as you’re doing it following the proper patterns rather than using something like Parallel.ForEach().)
  2. Asynchronous I/O does not use any threads.
  3. Blocking affects scalability and can hold higher-priority resources (such as a UI thread).
  4. Blocking can also result in deadlocks. Prevent them by using async/await all the way if you can. ConfigureAwait(false) is useful in library code both for performance reasons and to prevent deadlocks resulting from application code that must block for legacy reasons.
  5. Don’t expose asynchronous wrappers for synchronous APIs. There is an overhead associated with doing so, and the client can decide whether it’s worth it from their end.
  6. You can’t have asynchronous properties, except indirectly via asynchronous methods. Avoid this.
  7. async void is for event handlers. Even so, if ordering is important, beware of interleaving.