# Indexing and Searching Geopolygons using Elasticsearch

Elasticsearch is great for indexing and searching text, but it also has a lot of functionality related to searching points and regions on the world map. In this article, we’ll learn how to index polygons corresponding to territories in the world, and find whether a point lies within any indexed polygon.

# Building Polygons with Geocoordinates

Back in school, we (hopefully) learned that a point in 2D space can be represented as an (x, y) pair of coordinates. A point in the world can similarly be identified by a (latitude, longitude) pair of geocoordinates. We can obtain geocoordinates for a location by clicking on the map in Google Maps or similar tools.

The analogy is not perfect though; geocoordinates are not linear, which is a result of the curvature of the Earth. This is not really important for us; the point is that we can represent any given point on the Earth’s surface by means of latitude and longitude.

Once we can identify points, it’s natural to extend the concept to 2D geometry. By taking several points, we can create polygons that mark the boundaries of a given territory, such as a country or state. Jeremy Hawes’ Google Maps Polygon Coordinates Tool is great for building such polygons.

Using this tool, we can very easily construct a rough polygon representing the state of Wyoming in the US. Wyoming is great to use as a simple example because it’s roughly rectangular, so we only need four points for a workable approximation.

Below the map in this polygon tool, you’ll get the coordinates of the points along with some extra JavaScript (which you could later paste directly into the code editor). In this case, we’ve got the following coordinates in (latitude, longitude) format:

```
45.01967,-104.04405
44.99904,-111.03084
41.011,-111.04131
41.00193,-104.03375
```

Once we have the points that make up the polygon, we can feed them into Elasticsearch.

# Indexing Geopolygons in Elasticsearch

Before we can index anything, we need to create a mapping that defines the structure of an index, including its fields and their data types. The Mapping Geo Shapes page in the Elasticsearch documentation provides a starting point. However, the documentation is out of date, and if you follow its example closely, you’ll get an error.

After a quick search, this Stack Overflow answer reveals the cause of the problem: Elasticsearch no longer accepts the string data type, and expects you to use text instead. The documentation simply hadn’t been updated to reflect this. Anyhow, our mapping request for this example will be as follows:

```
PUT /regions
{
    "mappings": {
        "region": {
            "properties": {
                "name": {
                    "type": "text"
                },
                "location": {
                    "type": "geo_shape"
                }
            }
        }
    }
}
```

This essentially means that each region item in the regions index will have a name and a location, the latter being the polygon itself. While we will be focusing exclusively on polygons in this article, it is worth noting that the `geo_shape` data type supports a lot of other geometric constructs – refer to the Geo-Shape documentation for more information.

Once our mapping is in place, we can proceed to index our polygons. The Indexing Geo Shapes documentation page shows how to do this. There’s a catch though: Elasticsearch expects to receive coordinates in (longitude, latitude) format, which is the reverse of what we’ve been using so far. We can use a simple regular expression (e.g. in Notepad++) to swap our coordinates:

```
(\-?\d+\.?\d*),(\-?\d+\.?\d*)
\2,\1
```

The first line shows the regular expression used to match coordinates, and the second line shows the replacement, i.e. the swapped coordinates.
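If you prefer to script the swap instead of doing it in an editor, the same substitution works in any regex engine. Here is a small Python sketch using the regular expression above:

```python
import re

# swap each "lat,lon" pair into "lon,lat"
coord_pattern = re.compile(r"(-?\d+\.?\d*),(-?\d+\.?\d*)")

lines = """45.01967,-104.04405
44.99904,-111.03084
41.011,-111.04131
41.00193,-104.03375"""

swapped = coord_pattern.sub(r"\2,\1", lines)
print(swapped)
```

This prints the same four lines with the longitude first, ready to paste into an Elasticsearch request.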

Let’s use the following query to try to index our Wyoming polygon:

```
PUT /regions/region/wyoming
{
    "name" : "Wyoming",
    "location" : {
        "type" : "polygon",
        "coordinates" : [[
            [ -104.04405,45.01967 ],
            [ -111.03084,44.99904 ],
            [ -111.04131,41.011   ],
            [ -104.03375,41.00193 ]
        ]]
    }
}
```

This actually fails with an error.

This is because Elasticsearch expects the polygon to be closed, i.e. it must return to the starting point. Another thing to watch out for is any polygons that have self-intersections, which Elasticsearch doesn’t allow either.

We can fix our error by simply repeating the first coordinate at the end:

```
PUT /regions/region/wyoming
{
    "name" : "Wyoming",
    "location" : {
        "type" : "polygon",
        "coordinates" : [[
            [ -104.04405,45.01967 ],
            [ -111.03084,44.99904 ],
            [ -111.04131,41.011   ],
            [ -104.03375,41.00193 ],
            [ -104.04405,45.01967 ]
        ]]
    }
}
```

This time it works. Great! Our Wyoming polygon is now in Elasticsearch.
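Since a forgotten closing point is such an easy mistake to make, it can be worth closing rings automatically if you generate polygons programmatically before indexing. Here is a minimal Python sketch (the helper name is my own, not part of any Elasticsearch client):

```python
def close_polygon(coords):
    """Return a copy of coords with the first point repeated at the
    end, as Elasticsearch requires, if the ring is not already closed."""
    if coords and coords[0] != coords[-1]:
        return coords + [coords[0]]
    return list(coords)

# our rough Wyoming polygon in (lon, lat) order, not yet closed
wyoming = [
    [-104.04405, 45.01967],
    [-111.03084, 44.99904],
    [-111.04131, 41.011],
    [-104.03375, 41.00193],
]

ring = close_polygon(wyoming)
print(ring[0] == ring[-1])  # True
```

The helper is idempotent, so it is safe to run on rings that are already closed.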

# Querying Geopolygons in Elasticsearch

We can again turn to the Elasticsearch documentation for examples of how to query our geopolygon. We can do this by taking a circle with a given radius and seeing whether it intersects the polygon, as shown in Querying Geo Shapes. Don’t confuse this with the Geo Polygon Query documentation, which is actually the opposite of our situation (i.e. having a point in Elasticsearch, and providing the polygon to test against at query time).

To test this, we’ll pick a point somewhere in Wyoming. I used Google Maps to pick a point within Yellowstone National Park, which for all we know might just be where Yogi Bear lives.

Having obtained the coordinates, we can hit Elasticsearch with a query:

```
GET /regions/region/_search
{
    "query": {
        "geo_shape": {
            "location": {
                "shape": {
                    "type": "circle",
                    "radius": "10m",
                    "coordinates": [ -109.874838, 44.439550 ]
                }
            }
        }
    }
}
```

And you’ll see that Wyoming is actually returned in the results. You’ll also notice that Elasticsearch gives us back all the coordinate data, which we don’t really care about in this case. This can be pretty inefficient if you’re using very large and detailed polygons. We can filter that out by specifying the `_source` property:

```
GET /regions/region/_search
{
    "_source": "name",
    "query": {
        "geo_shape": {
            "location": {
                "shape": {
                    "type": "circle",
                    "radius": "10m",
                    "coordinates": [ -109.874838, 44.439550 ]
                }
            }
        }
    }
}
```

The results are now nice and clean. Next, we can take a point in Texas and verify that it returns no results.
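To build some intuition for what Elasticsearch is doing server-side, here is a conceptual point-in-polygon sketch in Python using the classic ray casting algorithm, run against our rough Wyoming polygon. The Texas coordinates are an arbitrary point I picked for illustration:

```python
def point_in_polygon(lon, lat, polygon):
    """Ray casting: count how many polygon edges a horizontal ray from
    the point crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wraps around to close the ring
        if (y1 > lat) != (y2 > lat):
            # longitude where this edge crosses the point's latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:
                inside = not inside
    return inside

# rough Wyoming polygon in (lon, lat) order
wyoming = [
    [-104.04405, 45.01967],
    [-111.03084, 44.99904],
    [-111.04131, 41.011],
    [-104.03375, 41.00193],
]

yellowstone = (-109.874838, 44.439550)  # the point we queried above
texas = (-99.33, 31.08)                 # arbitrary point in Texas

print(point_in_polygon(*yellowstone, wyoming))  # True
print(point_in_polygon(*texas, wyoming))        # False
```

Elasticsearch uses far more sophisticated spatial indexing, of course, but the end result for a point test is the same.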

# Geopolygons with Holes

Some territories aren’t simple polygons; they contain other territories inside them, and so the polygon has a hole. Examples include:

• Rome (Vatican City is a hole within it)
• New South Wales (Australian Capital Territory is a hole within it)
• South Africa (Lesotho is a hole within it)

The Indexing Geo Shapes documentation page (which we’ve referred to earlier) explains how to account for holes in polygons you index. Let’s see how this works using a practical example.

The above image shows what New South Wales, Australia looks like in Google Maps. Notice the Australian Capital Territory state inside it. Using Jeremy Hawes’ aforementioned polygon tool, we can draw a very rough polygon for New South Wales:

This gives us the following coordinates (lat, lon) for New South Wales:

```
-28.92704,141.04445
-33.97411,141.00841
-37.51381,149.94544
-34.98252,150.7789
-32.70393,152.18365
-28.24141,153.49901
-28.98426,148.87874
```

We will also need a polygon for Australian Capital Territory. Again, this will be a really rough approximation just for the sake of example:

Our coordinates for Australian Capital Territory are:

```
-35.91185,149.05898
-35.36119,149.14473
-35.31932,149.40076
-35.11429,149.09984
-35.3126,148.80286
-35.71989,148.81557
```

Next, we’ll index Australian Capital Territory. This is nothing new, but remember that we must take care to swap the coordinates so that they become (lon, lat), and close the polygon by repeating the first coordinate pair at the end.

```
PUT /regions/region/act
{
    "name" : "Australian Capital Territory",
    "location" : {
        "type" : "polygon",
        "coordinates" : [[
            [ 149.05898,-35.91185 ],
            [ 149.14473,-35.36119 ],
            [ 149.40076,-35.31932 ],
            [ 149.09984,-35.11429 ],
            [ 148.80286,-35.3126  ],
            [ 148.81557,-35.71989 ],
            [ 149.05898,-35.91185 ]
        ]]
    }
}
```

For New South Wales, we do something special: we give it two polygons.

```
PUT /regions/region/nsw
{
    "name" : "New South Wales",
    "location" : {
        "type" : "polygon",
        "coordinates" : [
            [
                [ 141.04445,-28.92704 ],
                [ 141.00841,-33.97411 ],
                [ 149.94544,-37.51381 ],
                [ 150.7789, -34.98252 ],
                [ 152.18365,-32.70393 ],
                [ 153.49901,-28.24141 ],
                [ 148.87874,-28.98426 ],
                [ 141.04445,-28.92704 ]
            ],
            [
                [ 149.05898,-35.91185 ],
                [ 149.14473,-35.36119 ],
                [ 149.40076,-35.31932 ],
                [ 149.09984,-35.11429 ],
                [ 148.80286,-35.3126  ],
                [ 148.81557,-35.71989 ],
                [ 149.05898,-35.91185 ]
            ]
        ]
    }
}
```

The first polygon is the New South Wales polygon. The second is the one for Australian Capital Territory. The way Elasticsearch interprets this is that the first polygon is the main one; all subsequent ones are holes in the main polygon.

Once this has also been indexed, we can test it. Remember to swap your coordinates – Google Maps uses (lat, lon) whereas Elasticsearch uses (lon, lat). Let’s take a point in New South Wales – somewhere in Sydney, for instance.

Our point is correctly identified as being in New South Wales. Now, let’s take a point in Canberra so that we can test out Australian Capital Territory.

Elasticsearch correctly returned Australian Capital Territory in the results. What is even more significant is that it did not return New South Wales, which it would otherwise have done had we not specified the hole when we indexed it.
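Conceptually, the hole test is just the outer test applied twice: a point lies in a region if it is inside the outer ring but not inside any hole. Here is a Python sketch of that logic using ray casting and our rough NSW/ACT rings; the Sydney and Canberra-area coordinates are approximate points I picked for illustration:

```python
def point_in_ring(lon, lat, ring):
    """Ray casting point-in-polygon test against a single ring."""
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:
                inside = not inside
    return inside

def point_in_region(lon, lat, outer, holes):
    """Inside the outer ring, and not inside any hole."""
    return point_in_ring(lon, lat, outer) and \
        not any(point_in_ring(lon, lat, hole) for hole in holes)

nsw = [[141.04445, -28.92704], [141.00841, -33.97411],
       [149.94544, -37.51381], [150.7789, -34.98252],
       [152.18365, -32.70393], [153.49901, -28.24141],
       [148.87874, -28.98426]]
act = [[149.05898, -35.91185], [149.14473, -35.36119],
       [149.40076, -35.31932], [149.09984, -35.11429],
       [148.80286, -35.3126], [148.81557, -35.71989]]

sydney = (151.21, -33.87)   # in NSW, outside the hole
canberra = (149.1, -35.3)   # inside the ACT hole

print(point_in_region(*sydney, nsw, [act]))    # True
print(point_in_region(*canberra, nsw, [act]))  # False
print(point_in_ring(*canberra, act))           # True
```

The Canberra-area point is inside the NSW outer ring, but the hole excludes it from the region, matching the behaviour we observed in Elasticsearch.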

# Summary

After a brief introduction to geocoordinates and geopolygons, we saw how we can index geopolygons in Elasticsearch and then run queries to find out in which polygon(s) a point belongs. In a slightly more advanced scenario, we saw how to deal with polygons that have holes.

# Asynchronous RabbitMQ Consumers in .NET

It’s quite common to do some sort of I/O operation (e.g. REST call) whenever a message is consumed by a RabbitMQ client. This should be done asynchronously, but it’s not as simple as changing the event handling code to `async void`.

In “The Dangers of async void Event Handlers”, I explained how making an event handler `async void` will mess up the message order, because the dispatcher loop will not wait for a message to be fully processed before calling the handler on the next one.
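The ordering problem itself is easy to demonstrate outside RabbitMQ. This illustrative Python asyncio sketch (not RabbitMQ code) compares a dispatcher that awaits each handler with one that fires handlers off without waiting; in the latter case, a slow early message finishes after faster later ones:

```python
import asyncio

processed = []

async def handle_message(i):
    # earlier messages take longer, so firing handlers off without
    # waiting lets later messages overtake them
    await asyncio.sleep((5 - i) * 0.02)
    processed.append(i)

async def awaiting_dispatcher(messages):
    for i in messages:
        await handle_message(i)  # wait for each handler to finish

async def fire_and_forget_dispatcher(messages):
    await asyncio.gather(*(handle_message(i) for i in messages))

processed.clear()
asyncio.run(awaiting_dispatcher(range(5)))
ordered = list(processed)
print(ordered)     # [0, 1, 2, 3, 4] - order preserved

processed.clear()
asyncio.run(fire_and_forget_dispatcher(range(5)))
scrambled = list(processed)
print(scrambled)   # [4, 3, 2, 1, 0] - later messages finished first
```

`AsyncEventingBasicConsumer` gives us the behaviour of the first dispatcher: each message is fully processed before the next one is dispatched.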

While that article provided a workaround that is great to use with older versions of the RabbitMQ Client library, it turns out that there is an AsyncEventingBasicConsumer as from RabbitMQ.Client 5.0.0-pre3 which works great for asynchronous message consumption.

# AsyncEventingBasicConsumer Example

First, we need to make sure that the RabbitMQ client library is installed.

```
Install-Package RabbitMQ.Client
```

Then, we can set up a publisher and consumer to show how to use the `AsyncEventingBasicConsumer`. Since this is just a demonstration, we can have both in the same process:

```
static void Main(string[] args)
{
    var factory = new ConnectionFactory() { DispatchConsumersAsync = true };
    const string queueName = "myqueue";

    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queueName, true, false, false, null);

        // consumer

        var consumer = new AsyncEventingBasicConsumer(channel);
        consumer.Received += Consumer_Received;
        channel.BasicConsume(queueName, true, consumer);

        // publisher

        var props = channel.CreateBasicProperties();
        int i = 0;

        while (true)
        {
            var messageBody = Encoding.UTF8.GetBytes($"Message {++i}");
            channel.BasicPublish("", queueName, props, messageBody);
        }
    }
}
```

There is nothing really special about the above code except that we’re using `AsyncEventingBasicConsumer` instead of `EventingBasicConsumer`, and that the `ConnectionFactory` is now being set up with a suspicious-looking `DispatchConsumersAsync` property set to `true`. The `ConnectionFactory` is using defaults, so it will connect to localhost using the guest account.

The message handler is expected to return `Task`, and this makes it very easy to use proper asynchronous code:

```
private static async Task Consumer_Received(object sender, BasicDeliverEventArgs @event)
{
    var message = Encoding.UTF8.GetString(@event.Body);

    Console.WriteLine($"Begin processing {message}");

    await Task.Delay(250); // simulate some asynchronous work, e.g. a REST call

    Console.WriteLine($"End processing {message}");
}
```

The messages are indeed processed in order.

# How to Mess This Up

Remember that `DispatchConsumersAsync` property? I haven’t really found any documentation explaining what it actually does, but we can venture a guess after a couple of experiments.

First, let’s keep that property, but use a synchronous `EventingBasicConsumer` instead (which also means changing the event handler to have a `void` return type). When we run this, we get an error:

It says “In the async mode you have to use an async consumer”. Which I suppose is fair enough.

So now, let’s go back to using an `AsyncEventingBasicConsumer`, but leave out the `DispatchConsumersAsync` property:

```
var factory = new ConnectionFactory();
```

This time, you’ll see that the event handler is not firing (nothing is being written to the console). The messages are indeed being published, and the queue remains at zero messages, so they are being consumed (you’ll see them accumulate if you disable the consumer).

This is actually quite dangerous, yet there is no error like the one we saw earlier. It means that if a developer forgets to set that `DispatchConsumersAsync` property, then all messages are lost. It’s also quite strange that the choice of how to dispatch messages to the consumer (i.e. sync or async) is a property of the connection rather than the consumer, although presumably it would be a result of some internal plumbing in the RabbitMQ Client library.

# Summary

`AsyncEventingBasicConsumer` is great for having pure asynchronous RabbitMQ consumers, but don’t forget that `DispatchConsumersAsync` property.

It’s only available since RabbitMQ.Client 5.0.0-pre3, so if you’re on an older version, use the workaround described in “The Dangers of async void Event Handlers” instead.

# Simple Ultima-Style Dialogue Engine in C#

The Ultima series is one of the most influential RPG series of all time. It is known for open worlds, intricate plots, ethical choices as opposed to “just kill the bad guy”, and… dialogue. The dialogue of the Ultima series went from being simple one-liners to complex dialogue trees with scripted side-effects.

Ultima 4-6, as well as the two Worlds of Ultima games (which used the Ultima 6 engine), used a simple keyword-based dialogue engine.

In these games, conversing with NPCs (people) involved typing in a number of common keywords such as “name” or “job”, and entering new keywords based on their responses in order to develop the conversation. Only the first four characters were taken into consideration, so “batt” and “battle” would yield the same result. “Bye” or an empty input ends the conversation, and any unrecognised keyword results in a fixed default response.

In Ultima 4, conversations were restricted to “name”, “job”, “health”, as well as two other NPC-specific keywords. For each NPC, one keyword would also trigger a question, to which you had to answer “yes” or “no”, and the NPC would respond differently based on your answer. You can view transcripts of, or interact with, almost all Ultima 4 dialogues on my oldest website, Dino’s Ultima Page, to get an idea of how this works.

Later games improved this dialogue engine by highlighting keywords, adding more NPC-specific keywords, allowing multiple keywords to point to the same response, and causing side effects such as the NPC giving you an item.

If we focus on the core aspects of the dialogue engine, it is really simple to build something similar in just about any programming language you like. In C#, we could use a dictionary to hold the input keywords and the matching responses:

```
var dialogue = new Dictionary<string, string>()
{
    ["name"] = "My name is Tom.",
    ["job"] = "I chase Jerry.",
    ["heal"] = "I am hungry!",
    ["jerr"] = "Jerry the mouse!",
    ["hung"] = "I want to eat Jerry!",
    ["bye"] = "Goodbye!",
    ["default"] = "What do you mean?"
};
```

We then loop until the conversation is over:

```
string input = null;
bool done = false;

while (!done)
{
    // the rest of the code goes here
}
```
```

We accept input, and then process it to make it really easy to just index the dictionary later:

```
Console.Write("You say: ");
input = Console.ReadLine().Trim().ToLower();

if (input.Length > 4)
    input = input.Substring(0, 4);
```

Whitespace around the input is trimmed off, and the input is converted to lowercase to match how we are storing the keywords in the dictionary’s keys. If the input is longer than 4 characters, we truncate it to the first four characters.

```
if (input == string.Empty)
    input = "bye";

if (input == "bye")
    done = true;
```

An empty input or “bye” will break out of the loop, ending the conversation.

```
if (dialogue.ContainsKey(input))
    Console.WriteLine(dialogue[input]);
else
    Console.WriteLine(dialogue["default"]);
```

The above code is the heart of the dialogue engine. It simply checks whether the input matches a known keyword. If it does, it returns the corresponding response. If not, it returns the “default” response. Note that this “default” response could not otherwise be obtained by normal means (for example, typing “default” as input) since the input is always being truncated to a maximum of four characters.
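For comparison, the whole engine fits comfortably in a few lines of Python. This is a sketch of the same logic (same Tom dictionary as above), factored into a testable function rather than a console loop:

```python
dialogue = {
    "name": "My name is Tom.",
    "job": "I chase Jerry.",
    "heal": "I am hungry!",
    "jerr": "Jerry the mouse!",
    "hung": "I want to eat Jerry!",
    "bye": "Goodbye!",
    "default": "What do you mean?",
}

def respond(raw_input):
    """Return (response, done) for a single line of player input."""
    # trim, lowercase, and keep only the first four characters
    keyword = raw_input.strip().lower()[:4]
    if keyword == "":
        keyword = "bye"  # an empty input ends the conversation
    done = keyword == "bye"
    return dialogue.get(keyword, dialogue["default"]), done

print(respond("Jerry"))    # ('Jerry the mouse!', False)
print(respond("health"))   # ('I am hungry!', False)
print(respond(""))         # ('Goodbye!', True)
print(respond("default"))  # ('What do you mean?', False)
```

Note how “health” truncates to “heal” and matches, while “default” truncates to “defa” and falls through to the default response, just as described above.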

As you can see, it takes very little to add a really simple dialogue engine to your game. This might not have all the features that the Ultima games had, but serves as an illustration on how to get started.

# Anecdotes on Trust and Management Interference

Companies today place huge efforts and resources into hiring. Hiring the right people is important in the short term, but retaining them is an important long-term goal. Getting this wrong means that we are stuck in an expensive hiring loop, and that a good chunk of our people’s productivity is lost as the newcomers have to get up to speed, and the resident employees have to train them.

Note: The idea for this article resulted from a visit to Greenwich, where the columns of the Queen’s House were reminiscent of the story of Sir Christopher Wren and the Windsor Guildhall. It has nothing to do with any particular work experience I’ve had.

# Trust

An ideal scenario for building a successful team could look something like this:

• “Get the right people.
• Make them happy so they don’t want to leave.
• Turn them loose.”

— Peopleware: Productive Projects and Teams (page 91), Third Edition. DeMarco & Lister. Addison-Wesley, 2003.

If you are not hiring the right people, then you are getting yourself into a long-term problem that is very hard to fix. Conversely, if you are hiring the right people, then you should be able to trust them to do their work.

“Creating a pool of autonomous teams and letting them loose without leadership is a recipe for disaster.”

— The Pragmatic Programmer: From Journeyman to Master. Hunt and Thomas. Addison-Wesley, 1999.

Trust has nothing to do with absence of leadership, supervision, or direction. It simply means taking a step back and letting the people do what they are good at, and pitching in to give direction and support, but not getting involved in the little details of how they do their work (micromanagement).

“The most obvious defensive management ploys are prescriptive Methodologies (“My people are too dumb to build systems without them.”) and technical interference by the manager. Both are doomed to fail in the long run. In addition, they make for efficient teamicide. People who feel untrusted have little inclination to bond together into a cooperative team.”

— Peopleware: Productive Projects and Teams (page 145), Third Edition. DeMarco & Lister. Addison-Wesley, 2003.

There really is no point in hiring great people if we don’t trust them to do what they are good at. This means that great resources are being underutilised. They will often notice this themselves, and leave for companies that can better realize their potential.

# Methodologies

The previous quote from Peopleware says a lot about the way we work today, especially in the context of agile methodologies. I have already written how standups are excessively childish and counterproductive, and that important management responsibilities are being delegated to “Scrum masters” or “agile coaches” who are neither personally invested nor have the right abilities to give direction to the team.

“The maddening thing about most of our organizations is that they are only as good as the people who staff them. Wouldn’t it be nice if we could get around that natural limit, and have good organizations even though they were staffed by mediocre or incompetent people? Nothing could be easier–all we need is (trumpet fanfare, please) a Methodology.

“A Methodology is a general systems theory of how a whole class of thought-intensive work ought to be conducted. It comes in the form of a fat book that specifies in detail exactly what steps to take at any time, regardless of who’s doing the work, regardless of where or when. The people who write the Methodology are smart. The people who carry it out can be dumb. They never have to turn their brains to the ON position. All they do is start on page one and follow the Yellow Brick Road, like happy little Munchkins, all the way from the start of the job to its successful completion. The Methodology makes all the decisions; the people make none. The organization becomes entirely deterministic.”

— Peopleware: Productive Projects and Teams (page 176), Third Edition. DeMarco & Lister. Addison-Wesley, 2003.

It is a real shame how the Agile Manifesto, a simple set of guidelines focused on reducing bureaucracy, adopting flexibility and enhancing productivity, is often corrupted into a set of dogmatic rites. People are encouraged to follow procedure rather than to think, communicate and work effectively. That’s the exact opposite of what the Agile Manifesto suggests.

And why else would people not be encouraged to think of their own accord, and take initiative, if not due to a lack of trust?

In the next sections, we’ll look at some historical stories relating to the problem of trust. In each of these stories, experts were commissioned to carry out a piece of work for which they were renowned. Each story shows how these experts cleverly defied silly orders (resulting from lack of trust and/or incompetence) in order to prove a point.

# The Columns at the Windsor Guildhall


There are four columns inside the Windsor Guildhall that seem to be there just for decoration: there is a gap between the top of the column, and the ceiling.

As the story goes, Sir Christopher Wren, the architect responsible for the Guildhall, was 100% confident in his design for an open space. However, the councillors insisted that he add some more columns on the inside of the building in order to support its weight.

“[Sir Christopher Wren] said the columns were unnecessary. Eventually, however, he relented, and built the columns.

“But to prove he was right, the columns were constructed so they did not touch the ceiling, and were merely decoration.”

— “Is this an architect’s 300-year-old hoax?”, Deceptology

While doubts have been cast on the veracity of this story and on Sir Christopher Wren’s role in the Windsor Guildhall’s design, it serves as an illustration of how an experienced professional defied authority in order to prove a point.

# David by Michelangelo

There is a similar story about how Michelangelo, satisfied with his work on the statue of David, dealt with criticism from the man who commissioned the project.

“There were no detractors, although the sponsor of the project, Piero Soderini, did make a suggestion that was slyly dealt with by Michelangelo. Soderini commented that David’s nose was too thick, so Michelangelo climbed the scaffolding to attend to the problem.
The young sculptor pretended to alter the nose and even sprinkled marble dust to complete the effect. Michelangelo then asked Soderini for his opinion of the ‘new’ nose. ‘Ah, that’s much better,’ said Soderini. ‘Now you’ve really brought it to life.’”

— “Michelangelo’s David“, Italy Magazine

If this story is true, Michelangelo took advantage of the fact that Soderini was artistically incompetent, and would not notice any difference in detail. This is very much like how errors in live performances often go unnoticed by those who are not musically trained.

The point is that Michelangelo was cunning enough to appease the man who commissioned his work, without compromising its quality. In the software development world, this problem often manifests itself in the form of client requirements which are in conflict with the nature of the product, and require radical changes while adding negligible value. Because of this, a very important aspect of the software professional’s work is to negotiate requirements, rather than just accepting them as-is.

# The Queen’s Duck

This last story is much closer to the software development world, and is probably true.

“This started as a piece of Interplay corporate lore. It was well known that producers (a game industry position, roughly equivalent to PMs) had to make a change to everything that was done. The assumption was that subconsciously they felt that if they didn’t, they weren’t adding value.

“The artist working on the queen animations for Battle Chess was aware of this tendency, and came up with an innovative solution. He did the animations for the queen the way that he felt would be best, with one addition: he gave the queen a pet duck. He animated this duck through all of the queen’s animations, had it flapping around the corners. He also took great care to make sure that it never overlapped the “actual” animation.

“Eventually, it came time for the producer to review the animation set for the queen. The producer sat down and watched all of the animations. When they were done, he turned to the artist and said, “that looks great. Just one thing – get rid of the duck.”

— “New Programming Jargon“, Coding Horror

The producer always wanted something removed, just to show that his role had a purpose. Thus, the developers had to choose the lesser evil between compromising the project’s quality, and wasting time on something extra that would serve as a decoy. This shows how incompetent managers can have a detrimental effect on a project, whether it is on quality, productivity, or morale.

# Conclusion

Give developers direction, but trust them to do their work. That’s why you hire good people. Good developers take pride in their work, and will not stick around an environment that stifles their creativity rather than empowering them.

# Compressing Strings Using GZip in C#

Compressing data is a great way to reduce its size. This helps us reduce storage requirements as well as the bandwidth and latency of network transmissions.

There are many different compression algorithms, but here, we’ll focus on GZip. We will use the .NET Framework’s own GZipStream class (in the `System.IO.Compression` namespace), although it is also possible to use a third party library such as SharpZipLib. We’ll also focus explicitly on compressing and decompressing strings; the steps to deal with other types (such as byte arrays or streams) will be a little different.

# Compressing Data with GZipStream

In its simplest form, GZipStream takes an underlying stream and a compression mode as parameters. The compression mode determines whether you want to compress or decompress; the underlying stream is manipulated according to that compression mode.

```
string inputStr = "Hello world!";
byte[] inputBytes = Encoding.UTF8.GetBytes(inputStr);

using (var outputStream = new MemoryStream())
{
    using (var gZipStream = new GZipStream(outputStream, CompressionMode.Compress))
        gZipStream.Write(inputBytes, 0, inputBytes.Length);

    // TODO do something with the outputStream
}
```

In the code above, we are using a memory stream as our underlying output stream. The GZipStream effectively wraps the output stream. When we write our input data into the GZipStream, it goes into the output stream as compressed data. By wrapping the write operation in a `using` block by itself, we ensure that the data is flushed.

Let’s add some code to take the bytes from the output stream and write them to the console window:

```
string inputStr = "Hello world!";
byte[] inputBytes = Encoding.UTF8.GetBytes(inputStr);

using (var outputStream = new MemoryStream())
{
    using (var gZipStream = new GZipStream(outputStream, CompressionMode.Compress))
        gZipStream.Write(inputBytes, 0, inputBytes.Length);

    var outputBytes = outputStream.ToArray();

    var outputStr = Encoding.UTF8.GetString(outputBytes);
    Console.WriteLine(outputStr);
}
```

The output of this may be a little bit surprising:

The bytes resulting from the GZip compression are actually binary data. They are not intelligible when rendered, and may also cause problems when transmitted over a network (due to byte ordering, for instance). One way to deal with this is to encode the compressed bytes in base64:

```
string inputStr = "Hello world!";
byte[] inputBytes = Encoding.UTF8.GetBytes(inputStr);

using (var outputStream = new MemoryStream())
{
    using (var gZipStream = new GZipStream(outputStream, CompressionMode.Compress))
        gZipStream.Write(inputBytes, 0, inputBytes.Length);

    var outputBytes = outputStream.ToArray();

    var outputbase64 = Convert.ToBase64String(outputBytes);
    Console.WriteLine(outputbase64);
}
```

Update 28th January 2018: As some people pointed out, it is not necessary to base64-encode compressed data, and it will transmit fine over a network even without it. However, I do recall having issues transmitting binary compressed data via RabbitMQ, so you may want to apply base64 encoding as needed in order to render compressed data or work around such issues.

Base64, however, is far from a compact representation. In this specific example, the length of the output string goes from 32 bytes (binary) to 44 (base64), reducing the effectiveness of compression. However, for larger strings, this still represents significant savings over the plain, uncompressed string.

Which brings us to the next question: why is our compressed data much larger than our uncompressed data (12 bytes)? While I don’t know how the GZip algorithm works internally, compression algorithms generally work best on larger data where there is a lot of repetition. On a very small string, the overhead required to represent the compressed format’s internal data structures dwarfs the data itself, negating benefits of compression. Thus, compression should typically be applied only to data whose length exceeds an arbitrary threshold.
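This overhead is easy to observe with any GZip implementation, not just GZipStream. The following Python sketch (using Python’s standard gzip and base64 modules) shows the small-string penalty, the base64 expansion, and how a larger, repetitive input compresses well:

```python
import base64
import gzip

small = "Hello world!".encode("utf-8")
compressed_small = gzip.compress(small)
# the compressed form is longer than the 12-byte input
print(len(small), len(compressed_small))

# base64 expands the compressed bytes further (4 output chars per 3 input bytes)
encoded = base64.b64encode(compressed_small)
print(len(encoded))

# a large, repetitive input compresses very well
large = ("Hello world! " * 1000).encode("utf-8")
compressed_large = gzip.compress(large)
print(len(large), len(compressed_large))

# the round trip recovers the original string
print(gzip.decompress(compressed_small).decode("utf-8"))
```

The exact sizes vary slightly between implementations and compression levels, but the shape of the result is the same: tiny inputs grow, repetitive inputs shrink dramatically.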

# Decompressing Data with GZipStream

When decompressing, the underlying stream is an input stream. The GZipStream still wraps it, but the flow is inverted so that when you read data from the GZipStream, it translates compressed data into uncompressed data.

The basic workflow looks something like this:

```
string inputStr = "H4sIAAAAAAAAC/NIzcnJVyjPL8pJUQQAlRmFGwwAAAA=";
byte[] inputBytes = Convert.FromBase64String(inputStr);

using (var inputStream = new MemoryStream(inputBytes))
using (var gZipStream = new GZipStream(inputStream, CompressionMode.Decompress))
{
    // TODO read the decompressed data from gZipStream
}
```

There are different ways to implement this, even if we just focus on decompressing from a string to a string. However, a low-level buffer read, such as allocating a byte array of `gZipStream.Length` bytes and reading into it, will not work: the `Length` property is not supported on a `GZipStream`, so that approach gives a runtime error. We cannot use the length of the `inputStream` in its stead, because it will generally not be the same (it happens to match for this “Hello world!” example, but it won’t if you try a longer string). Rather than read the entire length of the buffer, you could read block by block until you reach the end of the stream. But that’s more work than you need, and I’m lazy.

One way to get this working with very little effort is to introduce a third stream, and copy the GZipStream into it:

```
string inputStr = "H4sIAAAAAAAAC/NIzcnJVyjPL8pJUQQAlRmFGwwAAAA=";
byte[] inputBytes = Convert.FromBase64String(inputStr);

using (var inputStream = new MemoryStream(inputBytes))
using (var gZipStream = new GZipStream(inputStream, CompressionMode.Decompress))
using (var outputStream = new MemoryStream())
{
    gZipStream.CopyTo(outputStream);
    var outputBytes = outputStream.ToArray();

    string decompressed = Encoding.UTF8.GetString(outputBytes);

    Console.WriteLine(decompressed);
}
```

An even more concise approach is to use `StreamReader`:

```
string inputStr = "H4sIAAAAAAAAC/NIzcnJVyjPL8pJUQQAlRmFGwwAAAA=";
byte[] inputBytes = Convert.FromBase64String(inputStr);

using (var inputStream = new MemoryStream(inputBytes))
using (var gZipStream = new GZipStream(inputStream, CompressionMode.Decompress))
using (var streamReader = new StreamReader(gZipStream))
{
    string decompressed = streamReader.ReadToEnd();

    Console.WriteLine(decompressed);
}
```

…and without too much effort, we have our decompressed output:
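For the “Hello World!” input mentioned earlier, the console shows:

```
Hello World!
```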

Now again, your mileage may vary depending on what you’re doing. For instance, you might opt to use the asynchronous versions of stream manipulation methods if you’re dealing with streams that aren’t memory streams (e.g. a file). Or you might want to work exclusively with bytes rather than converting back to a string. In any case, hopefully the code in this article will give you a head start when you need to compress and decompress some data.

# Third Anniversary

It seems like only yesterday that Gigi Labs was launched, and yet, that happened three years ago today.

In the past year, a lot of new articles were added, and there are now over 200 articles on this site. I’ve added pages under the Writings section for each of the more important series. There, you’ll now find the progress of the Programmer’s Ranch migration, as well as two highly successful series launched this year: The Sorry State of the Web, and C# Asynchronous Programming.

The Sorry State of the Web started as a result of my frustration with so-called professional websites, including major US airlines. In time I ran into so many issues that I published one or two articles a month for six months. While this was entertaining sport, the web didn’t get any better as a result, and I figured I could use my time more wisely elsewhere. For instance, by writing high-quality articles to help people write good software.

And so, this month, I decided to write about C# Asynchronous Programming. I observed over the years that this was a topic that .NET developers found particularly difficult to grasp, and I decided to put in writing what I had been explaining over and over again to different people. While there was too much to say for a single article, the series that resulted from this effort was phenomenally successful. In fact, this month has been Gigi Labs’ best month ever in terms of traffic, by a huge stretch.

Other articles of note in the past year include my article On Daily Standups (which, as I expected, was quite controversial), various articles on Microsoft Orleans (including those on Persistence, parts of which were contributed to the official Microsoft Orleans documentation), and articles on .NET Core / .NET Standard which helped to alleviate a lot of the confusion that many people had with what is now a family of frameworks.

In the meantime, I have removed the page that served as a quick reference to interest rates offered by Maltese banks. Since I had no more time to maintain this anyway, removing it helped Gigi Labs keep its focus.

I wish I could give some kind of idea of what’s in store for Gigi Labs in the coming year, but the truth is that I don’t know. For one thing, I would like to get back to game development – both because that is where my heart has always been, and because I feel that making games is really the best way to teach a lot of programming concepts. But I’m not making promises at this stage – we’ll see how things play out (pun not intended) in the coming months.

Once again, thanks for your support over the past 3 years!

# Abstracting RabbitMQ RPC with TaskCompletionSource

I recently wrote about TaskCompletionSource, a little-known tool in .NET that is great for transforming arbitrary asynchrony into the Task-Based Asynchronous Pattern. That means you can hide the whole thing behind a simple and elegant `async`/`await`.

In this article, we’ll see this in practice as we implement the Remote Procedure Call (RPC) pattern in RabbitMQ. This is a fancy way of saying request/response, except that it all happens asynchronously! That’s right. No blocking.

The RabbitMQ.Client NuGet package is necessary to make this code work. The client is written using an asynchronous `Main()` method, which requires at least C# 7.1 to compile.

# RabbitMQ RPC Overview

You can think of RPC as request/response communication. We have a client asking a server to process some input and return the output in its response. However, this all happens asynchronously. The client sends the request on a request queue and forgets about it, rather than waiting for the response. Eventually, the server will (hopefully) process the request and send a response message back on a response queue.

The request and response can be matched on the client side by attaching a CorrelationId to both the request and the response.

In this context, we don’t really talk about publishers and consumers, as is typical when talking about messaging frameworks. That’s because in order to make this work, both the client and the server must have both a publisher and a consumer.

# Client: Main Program

For our client application, we’ll have the following main program code. We will implement an RpcClient that will hide the request/response plumbing behind a simple Task that we then `await`:

```
static async Task Main(string[] args)
{
    Console.Title = "RabbitMQ RPC Client";

    using (var rpcClient = new RpcClient())
    {
        Console.WriteLine("Press ENTER or Ctrl+C to exit.");

        while (true)
        {
            string message = null;

            Console.Write("Enter a message to send: ");
            using (var colour = new ScopedConsoleColour(ConsoleColor.Blue))
                message = Console.ReadLine();

            if (string.IsNullOrWhiteSpace(message))
                break;
            else
            {
                var response = await rpcClient.SendAsync(message);

                Console.Write("Response was: ");
                using (var colour = new ScopedConsoleColour(ConsoleColor.Green))
                    Console.WriteLine(response);
            }
        }
    }
}
```

The program continuously asks for input, and sends that input as the request message. The server will process this message and return a response. Note that we are using the ScopedConsoleColour class from my “Scope Bound Resource Management in C#” article to colour certain sections of the output. Here is a taste of what it will look like:

While this console application only allows us to send one request at a time, the underlying approach is really powerful with APIs that can concurrently serve a multitude of clients. It is asynchronous and can scale pretty well, yet the consuming code sees none of the underlying complexity.

# Client: Request Sending

The heart of this abstraction is the RpcClient class. In the constructor, we set up the typical plumbing: create a connection, channel, queues, and a consumer.

```
public class RpcClient : IDisposable
{
    private bool disposed = false;
    private IConnection connection;
    private IModel channel;
    private EventingBasicConsumer consumer;
    private ConcurrentDictionary<string, TaskCompletionSource<string>> pendingMessages;

    private const string requestQueueName = "requestqueue";
    private const string responseQueueName = "responsequeue";
    private const string exchangeName = ""; // default exchange

    public RpcClient()
    {
        var factory = new ConnectionFactory() { HostName = "localhost" };

        this.connection = factory.CreateConnection();
        this.channel = connection.CreateModel();

        this.channel.QueueDeclare(requestQueueName, true, false, false, null);
        this.channel.QueueDeclare(responseQueueName, true, false, false, null);

        this.consumer = new EventingBasicConsumer(this.channel);
        this.consumer.Received += Consumer_Received;
        this.channel.BasicConsume(responseQueueName, true, consumer);

        this.pendingMessages = new ConcurrentDictionary<string, TaskCompletionSource<string>>();
    }

    // ...
}
```

A few other things to notice here:

1. We are keeping a dictionary that allows us to match responses with the requests that generated them, based on a CorrelationId. We have already seen this approach in “TaskCompletionSource by Example“.
2. This class implements IDisposable, as it has several resources that need to be cleaned up. While I don’t show the code for this for brevity’s sake, you can find it in the source code.
3. We are not using exchanges here, so using an empty string for the exchange name allows us to use the default exchange and publish directly to the queue.

The SendAsync() method, which we saw being used in the main program, is implemented as follows:

```
public Task<string> SendAsync(string message)
{
    var correlationId = Guid.NewGuid().ToString();
    var tcs = new TaskCompletionSource<string>();

    this.pendingMessages[correlationId] = tcs;

    this.Publish(message, correlationId);

    return tcs.Task;
}
```

Here, we are generating a GUID to use as a CorrelationId, and we are adding an entry in the dictionary for this request. This dictionary maps the CorrelationId to a corresponding TaskCompletionSource. When the response arrives, it will set the result on this TaskCompletionSource, which enables the underlying task to complete. We return this underlying task, and that’s what the main program awaits. The main program will not be able to continue until the response is received.

In this method, we are also calling a private `Publish()` method, which takes care of the details of publishing to the request queue on RabbitMQ:

```
private void Publish(string message, string correlationId)
{
    var props = this.channel.CreateBasicProperties();
    props.CorrelationId = correlationId;
    props.ReplyTo = responseQueueName;

    byte[] messageBytes = Encoding.UTF8.GetBytes(message);
    this.channel.BasicPublish(exchangeName, requestQueueName, props, messageBytes);

    using (var colour = new ScopedConsoleColour(ConsoleColor.Yellow))
        Console.WriteLine($"Sent: {message} with CorrelationId {correlationId}");
}
```
```

While this publishing code is for the most part pretty standard, we are using two particular properties that are especially suited for the RPC pattern. The first is CorrelationId, where we store the CorrelationId we generated earlier, and which the server will copy and send back as part of the response, enabling this whole orchestration. The second is the ReplyTo property, which is used to indicate to the server on which queue it should send the response. We don’t need it for this simple example since we are always using the same response queue, but this property enables the server to dynamically route responses where they are needed.

# Server

The request eventually reaches a server which has a consumer waiting on the request queue. Its `Main()` method is mostly plumbing that enables this consumer to work:

```
private static IModel channel;

static void Main(string[] args)
{
    Console.Title = "RabbitMQ RPC Server";

    var factory = new ConnectionFactory() { HostName = "localhost" };

    using (var connection = factory.CreateConnection())
    {
        using (channel = connection.CreateModel())
        {
            const string requestQueueName = "requestqueue";
            channel.QueueDeclare(requestQueueName, true, false, false, null);

            // consumer

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += Consumer_Received;
            channel.BasicConsume(requestQueueName, true, consumer);

            Console.WriteLine("Waiting for messages...");
            Console.WriteLine("Press ENTER to exit.");
            Console.WriteLine();

            Console.ReadLine();
        }
    }
}
```

When a message is received, the `Consumer_Received` event handler processes the message:

```
private static void Consumer_Received(object sender, BasicDeliverEventArgs e)
{
    var requestMessage = Encoding.UTF8.GetString(e.Body);
    var correlationId = e.BasicProperties.CorrelationId;
    var responseQueueName = e.BasicProperties.ReplyTo;

    var responseMessage = Reverse(requestMessage);
    Publish(responseMessage, correlationId, responseQueueName);
}
```

In this example, the server’s job is to reverse whatever messages it receives. Thus, each response will contain the same message as in the corresponding request, but backwards. This reversal code is taken from this Stack Overflow answer. Although trivial to implement, this serves as a reminder that there’s no need to reinvent the wheel if somebody already implemented the same thing (and quite well, at that) before you.

```
public static string Reverse(string s)
{
    char[] charArray = s.ToCharArray();
    Array.Reverse(charArray);
    return new string(charArray);
}
```

Having computed the reverse of the request message, and extracted both the CorrelationId and ReplyTo properties, these are all passed to the `Publish()` method which sends back the response:

```
private static void Publish(string responseMessage, string correlationId,
    string responseQueueName)
{
    byte[] responseMessageBytes = Encoding.UTF8.GetBytes(responseMessage);

    const string exchangeName = ""; // default exchange
    var responseProps = channel.CreateBasicProperties();
    responseProps.CorrelationId = correlationId;

    channel.BasicPublish(exchangeName, responseQueueName, responseProps, responseMessageBytes);

    Console.WriteLine($"Sent: {responseMessage} with CorrelationId {correlationId}");
    Console.WriteLine();
}
```

The response is sent back on the queue specified in the ReplyTo property of the request message. The response is also given the same CorrelationId as the request; that way the client will know that this response is for that particular request.

# Client: Response Handling

When the response arrives, the client’s own consumer event handler will run to process it:

```
private void Consumer_Received(object sender, BasicDeliverEventArgs e)
{
    var correlationId = e.BasicProperties.CorrelationId;
    var message = Encoding.UTF8.GetString(e.Body);

    using (var colour = new ScopedConsoleColour(ConsoleColor.Yellow))
        Console.WriteLine($"Received: {message} with CorrelationId {correlationId}");

    this.pendingMessages.TryRemove(correlationId, out var tcs);
    if (tcs != null)
        tcs.SetResult(message);
}
```

The client extracts the CorrelationId from the response, and uses it to get the TaskCompletionSource for the corresponding request. If the TaskCompletionSource is found, then its result is set to the content of the response. This causes the underlying task to complete, and thus the caller awaiting that task will be able to resume and work with the result.

If the TaskCompletionSource is not found, then we ignore the response, and there is a reason for this:

“You may ask, why should we ignore unknown messages in the callback queue, rather than failing with an error? It’s due to a possibility of a race condition on the server side. Although unlikely, it is possible that the RPC server will die just after sending us the answer, but before sending an acknowledgment message for the request. If that happens, the restarted RPC server will process the request again. That’s why on the client we must handle the duplicate responses gracefully, and the RPC should ideally be idempotent.” — RabbitMQ RPC tutorial

# Demo

If we run both the client and server, we can enter messages in the client, one by one. The client publishes each message on the request queue and waits for the response, at which point it allows the main program to continue by setting the result of that request’s TaskCompletionSource.

# Summary

A TaskCompletionSource has an underlying Task that can represent a pending request. By giving each request an ID, you can keep track of it, as the corresponding response should carry the same ID. A mapping between request IDs and TaskCompletionSources can easily be kept in a dictionary. When a response arrives, its corresponding entry in the dictionary can be found, and the Task can be completed. Any client code awaiting this Task may then resume.

# SignalR Core: Hello World

SignalR is a library that brought push notifications to ASP .NET web applications. It abstracted away the complexity of dealing with websockets and other front-end technologies necessary for a web application to spontaneously push out updates to client applications, and provided an easy programming model.

Essentially, SignalR allows us to implement publish/subscribe on the server. Clients, which are typically (but not necessarily) webpages, subscribe to a hub, which can then push updates to them. These updates can be sent spontaneously by the server (e.g. stock ticker) or triggered by a message from a client (e.g. chat).

The old SignalR, however, is not compatible with ASP .NET Core. So if you wanted to have push notifications in your web application, you had to look elsewhere… until recently. Microsoft shipped their first alpha release of SignalR Core (SignalR for ASP .NET Core 2.0) a few weeks ago, and the second alpha was released just yesterday. They also have some really nice samples we can learn from.

This article explains how to quickly get started with SignalR Core, by means of a simple Hello World application that combines a simple server-side hub with a trivial JavaScript client. It is essentially the first example from my “Getting Started with SignalR“, ported to SignalR Core.

# Hello SignalR Core: Server Side

This example is based on SignalR Core alpha 2, and uses ASP .NET Core 2 targeting .NET Core 2. As this is pre-release software, APIs may change.

Let’s start off by creating a new ASP .NET Core Web Application in Visual Studio 2017. We can start off simple by using the Empty project template:

This project template should come with a reference to the Microsoft.AspNetCore.All NuGet package, giving you most of what you need to create our web application.

In addition to that, we’ll need to install the NuGet package for SignalR. Note that we need the `-Pre` switch for now because it is still prerelease.

```
Install-Package Microsoft.AspNetCore.SignalR -Pre
```

Next, let’s add a simple hub class:

```
public class HelloHub : Hub
{
    public Task BroadcastHello()
    {
        return Clients.All.InvokeAsync("hello");
    }
}
```

In SignalR Core, a class that inherits from Hub is able to communicate with any clients that are subscribed to it. This can be done in several ways: broadcast to all clients or all except one; send to a single client; or send to a specific group. In this case, we’re simply broadcasting a “hello” message to all clients.
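As a rough sketch of what targeting specific clients might look like (using the alpha-2 `InvokeAsync()` API shape shown above; `HelloCaller()` is a hypothetical method added for illustration, and these APIs may change in later releases):

```csharp
public class HelloHub : Hub
{
    // Broadcast to every connected client, as in the example above.
    public Task BroadcastHello()
        => Clients.All.InvokeAsync("hello");

    // Send only to the client that invoked this method,
    // identified by its connection ID.
    public Task HelloCaller()
        => Clients.Client(Context.ConnectionId).InvokeAsync("hello");
}
```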

In the Startup class, we need to remove the default “Hello world” code and register our Hub instead. It should look something like this:

```
public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseFileServer();

        app.UseSignalR(routes =>
        {
            routes.MapHub<HelloHub>("hello");
        });
    }
}
```

`UseSignalR()` is where we register the route by which our Hub will be accessed from the client side. `UseFileServer()` is there just to serve the upcoming HTML and JavaScript.

# Hello SignalR Core: Client Side

In order to have a webpage that talks to our Hub, we first need a couple of scripts. We’ll get these using npm, which you can obtain by installing Node.js if you don’t have it already.

```
npm install @aspnet/signalr-client
npm install jquery
```

The first package is the client JavaScript for SignalR Core. At the time of writing this article, the file you need is called signalr-client-1.0.0-alpha2-final.js. The second package is jQuery, which is no longer required by SignalR Core, but will make life easier for our front-end code. Copy both signalr-client-1.0.0-alpha2-final.js and jquery.js into the wwwroot folder.

Next, add an index.html file in the wwwroot folder. Add references to the aforementioned scripts, a placeholder for messages (with ID “log” in this example), and a little script to wire things up:

```
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Hello SignalR Core!</title>
    <script src="jquery.js"></script>
    <script src="signalr-client-1.0.0-alpha2-final.js"></script>
    <script type="text/javascript">
        var connection = new signalR.HubConnection('/hello');

        connection.on('hello', data => {
            $("#log").append("Hello <br />");
        });

        connection.start()
            .then(() => {
                connection.invoke('BroadcastHello');
            });
    </script>
</head>
<body>
    <div id="log"></div>
</body>
</html>
```

This JavaScript establishes a connection to the hub, registers a callback for when a “hello” message is received, and calls the `BroadcastHello()` method on the hub.

The way we implemented our Hub earlier, it will send a “hello” message to all connected clients.

Let’s give that a try now:

Good! The connection is established, and we’re getting something back from the server (i.e. the Hub). Let’s open a couple more browser windows at the same endpoint:

Here, we can see that each time a new window was opened, a new “hello” message was broadcast to all connected clients. Since we are not holding any state, messages are sent incrementally, so newer clients that missed earlier messages will be showing fewer messages.

# The Chat Sample

If you want to see a more elaborate example, check out the Chat sample from the official SignalR Core samples:

The principle is the same, but the Chat sample is a little more interesting.

# Adding Swagger to an ASP .NET Core 2 Web API

If you develop REST APIs, then you have probably heard of Swagger before.

Swagger is a tool that automatically documents your Web API, and provides the means to easily interact with it via an auto-generated UI.

In this article, we’ll see how to add Swagger to an ASP .NET Core Web API. We’ll be using the .NET Core 2 SDK, so if you’re using an earlier version, your mileage may vary. We’re only covering basic setup, so check out ASP.NET Web API Help Pages using Swagger in the Microsoft documentation if you want to go beyond that.

# Project Setup

To make things easy, we’ll use one of the templates straight out of Visual Studio to get started. Go to File -> New -> Project… and select the ASP .NET Core Web Application template:

Next, pick the Web API template. You may want to change the second dropdown to ASP .NET Core 2.0, as it is currently not set to that by default.

You should now have a simple Web API project template with a ValuesController, providing an easy way to play with Web API out of the box.

To add Swagger to the project, install the Swashbuckle.AspNetCore NuGet package:

```
Install-Package Swashbuckle.AspNetCore
```

Then, we throw in some configuration in Startup.cs to make Swagger work. Replace “My API” with whatever your API is called.

```
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" });
    });
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseSwagger();

    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
    });

    app.UseMvc();
}
```

Note: if you’re starting with a more minimal project template, it is possible that you may need to install the Microsoft.AspNetCore.StaticFiles package for this to work. This package is already present in the project template we’re using above.

# Accessing Swagger

Let’s now run the web application. We should see a basic JSON response from the ValuesController:

If we now change the URL to http://localhost:<port>/swagger/, then we get to the Swagger UI:

Here, we can see a list of all our controllers and their actions. We can also open them up to interact with them.

# Summary

That’s all it takes to add Swagger to your ASP .NET Core Web API.

1. Install the Swashbuckle.AspNetCore NuGet package.
2. Configure Swagger in the startup class.
3. Access Swagger from http://localhost:<port>/swagger/.

# TaskCompletionSource by Example

In this article, we’ll learn how to use TaskCompletionSource. It’s one of those tools which you will rarely need to use, but when you do, you’ll be glad that you knew about it. Let’s dive right into it.

# Basic Usage

The source code for this section is in the TaskCompletionSource1 folder at the Gigi Labs BitBucket Repository.

Let’s create a new console application, and in `Main()`, we’ll have my usual workaround for running asynchronous code in a console application:

```
static void Main(string[] args)
{
    Run();
    Console.ReadLine();
}
```

In the `Run()` method, we have a simple example showing how TaskCompletionSource works:

```
static async void Run()
{
    var tcs = new TaskCompletionSource<bool>();

    Task.Run(() =>
    {
        Thread.Sleep(5000);
        tcs.SetResult(true);
    });

    await tcs.Task;
}
```

TaskCompletionSource is just a wrapper for a `Task`, giving you control over its completion. Thus, a `TaskCompletionSource<bool>` will contain a `Task<bool>`, and you can set the `bool` result based on your own logic.

Here, we are using TaskCompletionSource as a synchronization mechanism. Our main thread spawns off an operation and waits for its result, using the Task in the TaskCompletionSource. Even if the operation is not Task-based, it can set the result of the Task in the TaskCompletionSource, allowing the main thread to resume its execution.

Let’s add some diagnostic code so that we can understand what’s going on from the output:

```
static async void Run()
{
    var stopwatch = Stopwatch.StartNew();
    var tcs = new TaskCompletionSource<bool>();

    Console.WriteLine($"Starting... (after {stopwatch.ElapsedMilliseconds}ms)");

    Task.Run(() =>
    {
        Thread.Sleep(5000);
        tcs.SetResult(true);
    });

    Console.WriteLine($"Waiting...  (after {stopwatch.ElapsedMilliseconds}ms)");

    await tcs.Task;

    Console.WriteLine($"Done.       (after {stopwatch.ElapsedMilliseconds}ms)");

    stopwatch.Stop();
}
```
```

And here is the output:

```
Starting... (after 0ms)
Waiting...  (after 41ms)
Done.       (after 5072ms)
```

As you can see, the main thread waited until `tcs.SetResult(true)` was called; this triggered completion of the TaskCompletionSource’s underlying task (which the main thread was awaiting), and allowed the main thread to resume.

Aside from `SetResult()`, TaskCompletionSource offers the `SetCanceled()` and `SetException()` methods to cancel a task or fault it with an exception. All three also have safe `Try...()` equivalents: `TrySetResult()`, `TrySetCanceled()` and `TrySetException()`.

# SDK Example

The source code for this section is in the TaskCompletionSource2 folder at the Gigi Labs BitBucket Repository.

One scenario where I found TaskCompletionSource to be extremely well-suited is when you are given a third-party SDK which exposes events. Imagine this: you submit an order via an SDK method, and it gives you an ID for that order, but not the result. The SDK goes off and does what it has to do to perhaps talk to an external service and have the order processed. When this eventually happens, the SDK fires an event to notify the calling application on whether the order was placed successfully.

We’ll use this as an example of the SDK code:

```
public class MockSdk
{
    public event EventHandler<OrderOutcome> OnOrderCompleted;

    public Guid SubmitOrder(decimal price)
    {
        var orderId = Guid.NewGuid();

        // do a REST call over the network or something
        Task.Delay(3000).ContinueWith(task =>
            OnOrderCompleted?.Invoke(this,
                new OrderOutcome(orderId, true)));

        return orderId;
    }
}
```

The `OrderOutcome` class is just a simple DTO:

```
public class OrderOutcome
{
    public Guid OrderId { get; set; }
    public bool Success { get; set; }

    public OrderOutcome(Guid orderId, bool success)
    {
        this.OrderId = orderId;
        this.Success = success;
    }
}
```

Notice how `MockSdk`‘s `SubmitOrder` does not return any form of `Task`, and we can’t await it. This doesn’t necessarily mean that it’s blocking; it might be using another form of asynchrony such as the Asynchronous Programming Model or a messaging framework with a request-response fashion (such as RPC over RabbitMQ).

At the end of the day, this is still asynchrony, and we can use TaskCompletionSource to build a Task-based Asynchronous Pattern abstraction over it (allowing the application to simply `await` the call).

First, we start building a simple proxy class that wraps the SDK:

```
public class SdkProxy
{
    private MockSdk sdk;

    public SdkProxy()
    {
        this.sdk = new MockSdk();
        this.sdk.OnOrderCompleted += Sdk_OnOrderCompleted;
    }

    private void Sdk_OnOrderCompleted(object sender, OrderOutcome e)
    {
        // TODO
    }
}
```
```

We then add a dictionary, which allows us to relate each OrderId to its corresponding TaskCompletionSource. Using a ConcurrentDictionary instead of a normal Dictionary helps deal with multithreading scenarios without needing to lock:

```
private ConcurrentDictionary<Guid, TaskCompletionSource<bool>> pendingOrders;
private MockSdk sdk;

public SdkProxy()
{
    this.pendingOrders = new ConcurrentDictionary<Guid, TaskCompletionSource<bool>>();

    this.sdk = new MockSdk();
    this.sdk.OnOrderCompleted += Sdk_OnOrderCompleted;
}
```

The proxy class exposes a `SubmitOrderAsync()` method:

```
public Task SubmitOrderAsync(decimal price)
{
    var orderId = sdk.SubmitOrder(price);
    var tcs = new TaskCompletionSource<bool>();
    this.pendingOrders.TryAdd(orderId, tcs);

    Console.WriteLine($"OrderId {orderId} submitted with price {price}");

    return tcs.Task;
}
```

This method calls the underlying `SubmitOrder()` in the SDK, and uses the returned OrderId to add a new TaskCompletionSource in the dictionary. The TaskCompletionSource’s underlying `Task` is returned, so that the application can await it.

```
private void Sdk_OnOrderCompleted(object sender, OrderOutcome e)
{
    string successStr = e.Success ? "was successful" : "failed";
    Console.WriteLine($"OrderId {e.OrderId} {successStr}");

    this.pendingOrders.TryRemove(e.OrderId, out var tcs);
    tcs.SetResult(e.Success);
}
```

When the SDK fires a completion event, the proxy will remove the TaskCompletionSource from the pending orders and set its result. The application code awaiting the underlying task will then resume and can act on the outcome.

We can test this with the following program code in a console application:

```
static void Main(string[] args)
{
    Run();
    Console.ReadLine();
}

static async void Run()
{
    var sdkProxy = new SdkProxy();

    await sdkProxy.SubmitOrderAsync(10);
    await sdkProxy.SubmitOrderAsync(20);
    await sdkProxy.SubmitOrderAsync(5);
    await sdkProxy.SubmitOrderAsync(15);
    await sdkProxy.SubmitOrderAsync(4);
}
```

The output shows that the program did indeed wait for each order to complete before starting the next one:

```
OrderId 3e2d4577-8bbb-46b7-a5df-2efec23bae6b submitted with price 10
OrderId 3e2d4577-8bbb-46b7-a5df-2efec23bae6b was successful
OrderId e22425b9-3aa3-48db-a40f-8b8cfbdcd3af submitted with price 20
OrderId e22425b9-3aa3-48db-a40f-8b8cfbdcd3af was successful
OrderId 3b5a2602-a5d2-4225-bbdb-10642a63f7bc submitted with price 5
OrderId 3b5a2602-a5d2-4225-bbdb-10642a63f7bc was successful
OrderId ffd61cea-343e-4a9c-a76f-889598a45993 submitted with price 15
OrderId ffd61cea-343e-4a9c-a76f-889598a45993 was successful
OrderId b443462c-f949-49b9-a6f0-08bbbb82fe7e submitted with price 4
OrderId b443462c-f949-49b9-a6f0-08bbbb82fe7e was successful
```

# Summary

Use TaskCompletionSource to adapt an arbitrary form of asynchrony to use Tasks, and enable elegant `async`/`await` usage.

Do not use it simply to expose an asynchronous wrapper for a synchronous method. You should either not do that at all, or use `Task.FromResult()` instead.

If you’re concerned that the asynchronous call might never resume, consider adding a timeout.
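One way to do this (a sketch; `WithTimeout()` is a hypothetical helper and the exception message is illustrative) is to race the TaskCompletionSource’s task against a delay:

```csharp
public static async Task<T> WithTimeout<T>(Task<T> task, TimeSpan timeout)
{
    // Task.WhenAny returns whichever task finishes first.
    var completed = await Task.WhenAny(task, Task.Delay(timeout));

    if (completed != task)
        throw new TimeoutException("The operation did not complete in time.");

    return await task; // propagates the result (or any exception)
}
```

A caller could then write, for example, `var result = await WithTimeout(tcs.Task, TimeSpan.FromSeconds(30));` instead of awaiting `tcs.Task` directly.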