
Adding Swagger to an ASP .NET Core 2 Web API

If you develop REST APIs, then you have probably heard of Swagger before.

Swagger is a tool that automatically documents your Web API, and provides the means to easily interact with it via an auto-generated UI.

In this article, we’ll see how to add Swagger to an ASP .NET Core Web API. We’ll be using the .NET Core 2 SDK, so if you’re using an earlier version, your mileage may vary. We’re only covering basic setup, so check out ASP.NET Web API Help Pages using Swagger in the Microsoft documentation if you want to go beyond that.

Project Setup

To make things easy, we’ll use one of the templates straight out of Visual Studio to get started. Go to File -> New -> Project… and select the ASP .NET Core Web Application template:

Next, pick the Web API template. You may want to change the second dropdown to ASP .NET Core 2.0, as it is currently not set to that by default.

Adding Swagger

You should now have a simple Web API project template with a ValuesController, providing an easy way to play with Web API out of the box.
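For reference, this is roughly what the scaffolded ValuesController looks like (abridged here to its two GET actions):

    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public string Get(int id)
        {
            return "value";
        }
    }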

To add Swagger, we first need to add a package:

Install-Package Swashbuckle.AspNetCore

Then, we throw in some configuration in Startup.cs to make Swagger work. Replace “My API” with whatever your API is called.

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();

            services.AddSwaggerGen(c =>
            {
                c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" });
            });
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseSwagger();

            app.UseSwaggerUI(c =>
            {
                c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
            });

            app.UseMvc();
        }

Note: if you’re starting with a more minimal project template, it is possible that you may need to install the Microsoft.AspNetCore.StaticFiles package for this to work. This package is already present in the project template we’re using above.

Accessing Swagger

Let’s now run the web application. We should see a basic JSON response from the ValuesController:

If we now change the URL to http://localhost:<port>/swagger/, then we get to the Swagger UI:

Here, we can see a list of all our controllers and their actions. We can also open them up to interact with them.

Summary

That’s all it takes to add Swagger to your ASP .NET Core Web API.

  1. Add the Swashbuckle.AspNetCore package.
  2. Configure Swagger in the startup class.
  3. Access Swagger from http://localhost:<port>/swagger/.

TaskCompletionSource by Example

In this article, we’ll learn how to use TaskCompletionSource. It’s one of those tools which you will rarely need to use, but when you do, you’ll be glad that you knew about it. Let’s dive right into it.

Basic Usage

The source code for this section is in the TaskCompletionSource1 folder at the Gigi Labs BitBucket Repository.

Let’s create a new console application, and in Main(), we’ll have my usual workaround for running asynchronous code in a console application:

        static void Main(string[] args)
        {
            Run();
            Console.ReadLine();
        }

In the Run() method, we have a simple example showing how TaskCompletionSource works:

        static async void Run()
        {
            var tcs = new TaskCompletionSource<bool>();

            var fireAndForgetTask = Task.Delay(5000)
                                        .ContinueWith(task => tcs.SetResult(true));

            await tcs.Task;
        }

TaskCompletionSource is just a wrapper for a Task, giving you control over its completion. Thus, a TaskCompletionSource<bool> will contain a Task<bool>, and you can set the bool result based on your own logic.

Here, we are using TaskCompletionSource as a synchronization mechanism. Our main thread spawns off an operation and waits for its result, using the Task in the TaskCompletionSource. Even if the operation is not Task-based, it can set the result of the Task in the TaskCompletionSource, allowing the main thread to resume its execution.

Let’s add some diagnostic code so that we can understand what’s going on from the output:

        static async void Run()
        {
            var stopwatch = Stopwatch.StartNew();

            var tcs = new TaskCompletionSource<bool>();

            Console.WriteLine($"Starting... (after {stopwatch.ElapsedMilliseconds}ms)");

            var fireAndForgetTask = Task.Delay(5000)
                                        .ContinueWith(task => tcs.SetResult(true));

            Console.WriteLine($"Waiting...  (after {stopwatch.ElapsedMilliseconds}ms)");

            await tcs.Task;

            Console.WriteLine($"Done.       (after {stopwatch.ElapsedMilliseconds}ms)");

            stopwatch.Stop();
        }

And here is the output:

Starting... (after 0ms)
Waiting...  (after 41ms)
Done.       (after 5072ms)

As you can see, the main thread waited until tcs.SetResult(true) was called; this triggered completion of the TaskCompletionSource’s underlying task (which the main thread was awaiting), and allowed the main thread to resume.

Aside from SetResult(), TaskCompletionSource offers methods to cancel a task or fault it with an exception. There are also safe Try...() equivalents.
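As a quick illustrative sketch (not part of the example above), they can be used as follows:

        var tcs = new TaskCompletionSource<bool>();

        tcs.SetResult(true);                                   // complete the task successfully
        // tcs.SetException(new InvalidOperationException());  // or fault it with an exception
        // tcs.SetCanceled();                                   // or cancel it

        // The Try...() variants return false instead of throwing if the task
        // has already been completed in some way:
        bool accepted = tcs.TrySetResult(false);               // false here, since SetResult() already ran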

SDK Example

The source code for this section is in the TaskCompletionSource2 folder at the Gigi Labs BitBucket Repository.

One scenario where I found TaskCompletionSource to be extremely well-suited is when you are given a third-party SDK which exposes events. Imagine this: you submit an order via an SDK method, and it gives you an ID for that order, but not the result. The SDK goes off and does whatever it needs to do, perhaps talking to an external service, to have the order processed. When this eventually happens, the SDK fires an event to notify the calling application of whether the order was placed successfully.

We’ll use this as an example of the SDK code:

    public class MockSdk
    {
        public event EventHandler<OrderOutcome> OnOrderCompleted;

        public Guid SubmitOrder(decimal price)
        {
            var orderId = Guid.NewGuid();

            // do a REST call over the network or something
            Task.Delay(3000).ContinueWith(task => OnOrderCompleted(this,
                new OrderOutcome(orderId, true)));

            return orderId;
        }
    }

The OrderOutcome class is just a simple DTO:

    public class OrderOutcome
    {
        public Guid OrderId { get; set; }
        public bool Success { get; set; }

        public OrderOutcome(Guid orderId, bool success)
        {
            this.OrderId = orderId;
            this.Success = success;
        }
    }

Notice how MockSdk’s SubmitOrder() does not return any form of Task, so we can’t await it. This doesn’t necessarily mean that it’s blocking; it might be using another form of asynchrony, such as the Asynchronous Programming Model or a messaging framework used in a request-response fashion (such as RPC over RabbitMQ).

At the end of the day, this is still asynchrony, and we can use TaskCompletionSource to build a Task-based Asynchronous Pattern abstraction over it (allowing the application to simply await the call).

First, we start building a simple proxy class that wraps the SDK:

    public class SdkProxy
    {
        private MockSdk sdk;

        public SdkProxy()
        {
            this.sdk = new MockSdk();
            this.sdk.OnOrderCompleted += Sdk_OnOrderCompleted;
        }

        private void Sdk_OnOrderCompleted(object sender, OrderOutcome e)
        {
            // TODO
        }
    }

We then add a dictionary, which allows us to relate each OrderId to its corresponding TaskCompletionSource. Using a ConcurrentDictionary instead of a normal Dictionary helps deal with multithreading scenarios without needing to lock:

        private ConcurrentDictionary<Guid,
            TaskCompletionSource<bool>> pendingOrders;
        private MockSdk sdk;

        public SdkProxy()
        {
            this.pendingOrders = new ConcurrentDictionary<Guid,
                TaskCompletionSource<bool>>();

            this.sdk = new MockSdk();
            this.sdk.OnOrderCompleted += Sdk_OnOrderCompleted;
        }

The proxy class exposes a SubmitOrderAsync() method:

        public Task SubmitOrderAsync(decimal price)
        {
            var orderId = sdk.SubmitOrder(price);

            Console.WriteLine($"OrderId {orderId} submitted with price {price}");

            var tcs = new TaskCompletionSource<bool>();
            this.pendingOrders.TryAdd(orderId, tcs);

            return tcs.Task;
        }

This method calls the underlying SubmitOrder() in the SDK, and uses the returned OrderId to add a new TaskCompletionSource in the dictionary. The TaskCompletionSource’s underlying Task is returned, so that the application can await it.

        private void Sdk_OnOrderCompleted(object sender, OrderOutcome e)
        {
            string successStr = e.Success ? "was successful" : "failed";
            Console.WriteLine($"OrderId {e.OrderId} {successStr}");

            this.pendingOrders.TryRemove(e.OrderId, out var tcs);
            tcs.SetResult(e.Success);
        }

When the SDK fires a completion event, the proxy will remove the TaskCompletionSource from the pending orders dictionary and set its result. The application code awaiting the underlying task will then resume and can act on the result.

We can test this with the following program code in a console application:

        static void Main(string[] args)
        {
            Run();
            Console.ReadLine();
        }

        static async void Run()
        {
            var sdkProxy = new SdkProxy();

            await sdkProxy.SubmitOrderAsync(10);
            await sdkProxy.SubmitOrderAsync(20);
            await sdkProxy.SubmitOrderAsync(5);
            await sdkProxy.SubmitOrderAsync(15);
            await sdkProxy.SubmitOrderAsync(4);
        }

The output shows that the program did indeed wait for each order to complete before starting the next one:

OrderId 3e2d4577-8bbb-46b7-a5df-2efec23bae6b submitted with price 10
OrderId 3e2d4577-8bbb-46b7-a5df-2efec23bae6b was successful
OrderId e22425b9-3aa3-48db-a40f-8b8cfbdcd3af submitted with price 20
OrderId e22425b9-3aa3-48db-a40f-8b8cfbdcd3af was successful
OrderId 3b5a2602-a5d2-4225-bbdb-10642a63f7bc submitted with price 5
OrderId 3b5a2602-a5d2-4225-bbdb-10642a63f7bc was successful
OrderId ffd61cea-343e-4a9c-a76f-889598a45993 submitted with price 15
OrderId ffd61cea-343e-4a9c-a76f-889598a45993 was successful
OrderId b443462c-f949-49b9-a6f0-08bbbb82fe7e submitted with price 4
OrderId b443462c-f949-49b9-a6f0-08bbbb82fe7e was successful

Summary

Use TaskCompletionSource to adapt an arbitrary form of asynchrony to use Tasks, and enable elegant async/await usage.

Do not use it simply to expose an asynchronous wrapper for a synchronous method. You should either not do that at all, or use Task.FromResult() instead.

If you’re concerned that the asynchronous call might never resume, consider adding a timeout.
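For example, one hedged way of doing that (reusing the SdkProxy from above and an arbitrary 30-second limit) is to race the returned task against Task.Delay() using Task.WhenAny():

        var orderTask = sdkProxy.SubmitOrderAsync(10);

        if (await Task.WhenAny(orderTask, Task.Delay(30000)) != orderTask)
            throw new TimeoutException("The SDK never reported completion for this order.");

        await orderTask; // observe the actual result (or exception)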

Common Mistakes in Asynchronous Programming with .NET

In the last few articles, we have seen how to work with asynchronous programming in C#. Although it is now easier than ever to write responsive applications that do asynchronous, non-blocking I/O operations, many people still use asynchronous programming incorrectly. A lot of this is due to confusion over usage of the Task class in .NET, which is used in multithreaded and parallel scenarios as well as asynchronous ones. To make matters worse, it is not obvious to everyone that these are actually different things.

So let’s address this concern first.

Update 16th October 2017: Several people have pointed out errors in this article with regards to the effect that blocking has on the CPU. I apologise for this, and have made corrections. Blocking does not hog the CPU, but prevents threads from doing other work while they wait. Thanks for clarifying the confusion, and I welcome any further corrections.

Update 8th March 2018: See also “Avoid await in Foreach” for another common mistake with async/await.

Asynchronous vs Multithreading etc

When we use a computer, many programs are running at the same time. Before the advent of multicore CPUs, this was achieved by having instructions from different processes (and eventually threads) running one at a time on the same CPU. These instructions are interleaved, and the CPU switches rapidly from one to another (in what we call a context switch), giving the illusion that they are running at the same time. This is concurrency.

CPUs with multiple cores have the additional ability of literally executing multiple instructions at the same time, which is parallel execution. Multithreading gives developers the ability to make full use of the available cores; without it, instructions from a single process would only be able to execute on a single core at a time.

“Tasks which are executing on distinct processors at any point in time are said to be running in parallel. It may also be possible to execute several tasks on a single processor. Over a period of time, the impression is given that they are running in parallel, when in fact, at any point in time, only one task has control of the processor. In this case, we say that the tasks are being performed concurrently, that is, their execution is being shared by the same processor.” — Practical Parallel Rendering, Chalmers et al, A K Peters, 2002.

In .NET, we can do parallel processing by using multithreading, or an abstraction thereof, such as the Task Parallel Library. Parallel processing is CPU-bound.

Parallel processing is often contrasted with distributed processing, where the computing resources are not physically tightly coupled. This is not, however, relevant to asynchronous programming, so we will not delve into it.

Operations that take a long time to execute will typically hold control of the thread in which they are running, in what we call a blocking operation. However, if these operations involve waiting for I/O to occur (e.g. waiting for results from a file, network or database), then the I/O could occur in a non-blocking fashion without holding the thread at all during the waiting time. We say that the I/O operation is asynchronous: the thread that is waiting for it does not actually wait, but may be reassigned to do other work until the I/O operation is complete.

Asynchronous non-blocking I/O is not enabled by multithreading. In fact, Stephen Cleary goes into detail about how this works in his excellent post, “There Is No Thread“. In brief, a mechanism known as I/O Completion Ports (IOCP) is used to notify a thread that its I/O request is ready; but that thread does not need to block (or indeed run at all) during the waiting time. This is what we enable when we do an asynchronous wait by means of the await keyword.

In order to write efficient code, it is fundamental to understand the nature of what the code is doing. Parallel CPU-based execution involves significant overheads in thread synchronization. It makes no sense to use Parallel.ForEach() for I/O-bound tasks, and many are also surprised to find that executing CPU-based tasks sequentially is often faster than doing them in parallel, especially when such tasks are fine-grained and do very trivial work. In such cases, the synchronization overheads dwarf the cost of executing that code directly on the CPU on a single thread.
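As a rough sketch of matching the tool to the workload (ExpensiveComputation() and the URLs below are made-up placeholders, and the usual System.Linq, System.Net.Http and System.Threading.Tasks namespaces are assumed):

        static void ExpensiveComputation(int n) { /* CPU-bound work */ }

        static async Task RunAsync()
        {
            // CPU-bound: Parallel.ForEach spreads the computation across cores.
            Parallel.ForEach(Enumerable.Range(0, 1000), n => ExpensiveComputation(n));

            // I/O-bound: start the asynchronous calls and await them all; no extra threads needed.
            using (var httpClient = new HttpClient())
            {
                var urls = new[] { "http://example.com/a", "http://example.com/b" };
                var pages = await Task.WhenAll(urls.Select(url => httpClient.GetStringAsync(url)));
            }
        }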

See also: “Asynchronous and Concurrent Processing in Akka .NET Actors“, which has a section on Asynchronous vs Concurrent Processing using simple tasks.

The Dangers of Blocking with Asynchronous Code

Asynchronous programming has two main benefits: scalability and offloading. If you block, then you are hogging resources (i.e. threads) that could be better used elsewhere. In an ASP .NET context, this means that a thread cannot service other requests (hurts scalability). In a GUI context, it means that the UI thread cannot be used for rendering because it is busy waiting for a long-running operation (so the work should be offloaded).

There are several methods in the Task API which block, such as Wait(), WaitAny() and WaitAll(). The Result property also has the effect of blocking until the task is complete. Stephen Cleary has a table in his Async and Await intro showing these blocking API calls and how to turn them into asynchronous calls. Best practice is to await the asynchronous equivalent of the blocking method.
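As an illustrative sketch (DownloadAsync() below is just a hypothetical asynchronous operation):

        static Task<string> DownloadAsync() => Task.FromResult("data");

        static async Task CompareAsync()
        {
            // Blocking (avoid): Result and Wait()/WaitAll()/WaitAny() tie up the calling thread.
            // string blocked = DownloadAsync().Result;
            // Task.WaitAll(DownloadAsync(), DownloadAsync());

            // Asynchronous equivalents: the thread is released while the work is in flight.
            string result = await DownloadAsync();
            await Task.WhenAll(DownloadAsync(), DownloadAsync());
        }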

However, simply wasting threading resources is not the only problem with blocking. It is actually very easy to end up with a deadlock and stall your application. Stephen Cleary has an excellent explanation of how this happens, with two concise examples based on GUI applications (e.g. WPF or Windows Forms) and ASP .NET. I am only going to attempt to simplify the scenario and illustrate it with a diagram.

Consider the following code in a WPF application’s codebehind (MainWindow.xaml.cs):

        private Uri uri = new Uri("http://ip.jsontest.com/");

        public async Task WaitABit()
        {
            await Task.Delay(3000);
        }

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            var task = WaitABit();
            task.Wait();
        }

This results in deadlock. Let’s see why:

  1. Button_Click() calls WaitABit() without awaiting it. This starts a task in fire-and-forget mode (so far).
  2. WaitABit() calls Task.Delay() and awaits its result asynchronously. The thread does not block, since the execution has gone into “I/O mode” (technically a delay isn’t really I/O, but the idea is the same).
  3. In the meantime, Button_Click() resumes execution, and calls Wait(), effectively blocking until the result of WaitABit() is ready.
  4. The delay completes.
  5. WaitABit() should resume, but the UI thread that it’s supposed to run on is already blocked.
  6. The deadlock occurs because the continuation of WaitABit() needs to run on the UI thread, but the UI thread is blocked waiting for the result of WaitABit().

Note: I intentionally haven’t simplified WaitABit() such that it just returns the delay rather than doing a single-line async/await, as it would not deadlock. Can you guess why?

This example has shown blocking of the UI thread, but the concept stretches beyond GUI applications. In order to fully understand what is happening, we need to understand what a SynchronizationContext is. In short, it’s an abstraction of a threading model within an application. A GUI application needs a single UI thread for rendering and updating GUI components (although you can use other threads, they cannot touch the GUI directly). An ASP .NET application, on the other hand, handles requests using the thread pool. The SynchronizationContext is the abstraction that allows us to use the same multithreaded and asynchronous programming models across applications with fundamentally different internal threading models.

As a result, GUI applications (e.g. Windows Forms and WPF) and ASP .NET applications (those targeting the full .NET Framework) can deadlock. ASP .NET Core applications don’t have a SynchronizationContext, so they will not deadlock. Console applications won’t normally deadlock because the task continuation can just execute on another thread.

The deadlock occurs because the SynchronizationContext (e.g. the UI thread in the above example) is captured and used for the task continuation. However, we can prevent this from happening by using ConfigureAwait(false):

        public async Task WaitABit()
        {
            await Task.Delay(3000).ConfigureAwait(false);
        }

The GUI application does not deadlock now, because a thread pool thread can be picked to execute the continuation. Once WaitABit() completes, the blocking Wait() in Button_Click() can resume on the same UI thread where it started. Any modifications to UI elements would work fine.

While ConfigureAwait(false) is no replacement for doing async/await all the way, it does have its benefits. Capturing the SynchronizationContext incurs a performance penalty, so if it is not actually necessary, library code should avoid it by using ConfigureAwait(false). Also, if application code must block on an asynchronous call for legacy reasons, then ConfigureAwait(false) would avoid the resulting deadlocks.

Of course, the real fix for our deadlock example here is really as easy as this (without ConfigureAwait(false)):

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            var task = WaitABit();
            await task;
        }

I’ve just replaced the blocking Wait() call with an await, and marked the method as async as a result.

Asynchronous Wrappers for Synchronous Methods

A lot of library APIs have pairs of synchronous and asynchronous methods, e.g. Read() and ReadAsync(), Write() and WriteAsync(), etc. If your library API is purely synchronous, then you should not expose asynchronous wrappers for the synchronous methods (e.g. by using Task.Run()).

Stephen Toub goes into detail on why this is a bad idea in the article quoted below, but I think the following paragraph summarises it best:

“If a developer needs to achieve better scalability, they can use any async APIs exposed, and they don’t have to pay additional overhead for invoking a faux async API. If a developer needs to achieve responsiveness or parallelism with synchronous APIs, they can simply wrap the invocation with a method like Task.Run.” — Stephen Toub, “Should I expose asynchronous wrappers for synchronous methods?”
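To make the anti-pattern concrete, here is a hedged sketch of what such a faux async wrapper might look like (FileParser is a made-up class, not a real library):

    public class FileParser
    {
        public int Parse(string path)
        {
            // synchronous, CPU- or disk-bound work
            return 0;
        }

        // Don't do this: it merely burns a thread pool thread and misleads callers
        // into thinking the operation is truly non-blocking.
        public Task<int> ParseAsync(string path) => Task.Run(() => Parse(path));
    }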

Asynchronous Properties

You can’t use async/await in properties. You can use them indirectly via an asynchronous method, but it’s a rather weird thing to do.

“[You can’t use await i]nside of a property getter or setter. Properties are meant to return to the caller quickly, and thus are not expected to need asynchrony, which is geared for potentially long-running operations. If you must use asynchrony in your properties, you can do so by implementing an async method which is then used from your property.” — Stephen Toub, Async/Await FAQ
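A hedged sketch of that workaround might look like this (the WeatherViewModel class and URL are made up for illustration):

    public class WeatherViewModel
    {
        private string latestReport = "(loading...)";

        // The property stays synchronous and cheap to read.
        public string LatestReport => latestReport;

        // The asynchronous work lives in a method, which callers can await.
        public async Task RefreshAsync()
        {
            using (var httpClient = new HttpClient())
            {
                latestReport = await httpClient.GetStringAsync("http://example.com/weather");
            }
        }
    }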

async void methods

async void methods should only be used for top-level event handlers. In “The Dangers of async void Event Handlers“, I explain the general dangers of async void methods (mainly related to not having a task that you can await), but I also demonstrate and solve the additional problem of async void event handlers interleaving while they run (which is problematic if you’re expecting to handle events in sequence, such as with a message queue).

    1. “There is no way for the caller to await completion of the method.
    2. “As a result of this, async void calls are fire-and-forget.
    3. “Thus it follows that async void methods (including event handlers) will execute in parallel if called in a loop.
    4. “Exceptions can cause the application to crash (see the aforementioned article by Stephen Cleary for more on this).”

— Daniel D’Agostino, The Dangers of async void Event Handlers

Summary

  1. Parallel is CPU-based. Asynchronous is I/O-based. Don’t mix the two. (Running asynchronous I/O tasks “in parallel” is OK, as long as you follow the proper patterns rather than using something like Parallel.ForEach().)
  2. Asynchronous I/O does not use any threads.
  3. Blocking affects scalability and can hold higher-priority resources (such as a UI thread).
  4. Blocking can also result in deadlocks. Prevent them by using async/await all the way if you can. ConfigureAwait(false) is useful in library code both for performance reasons and to prevent deadlocks resulting from application code that must block for legacy reasons.
  5. For asynchronous libraries, don’t expose synchronous wrappers. There is an overhead associated with it, and the client can decide whether it’s worth doing from their end.
  6. You can’t have asynchronous properties, except indirectly via asynchronous methods. Avoid this.
  7. async void is for event handlers. Even so, if ordering is important, beware of interleaving.

Patterns for Asynchronous Composite Tasks in C#

In the previous two articles, I’ve explained why and how to use async/await for asynchronous programming in C#.

Now, we will turn our attention to more interesting things that we can do when combining multiple tasks within the same method.

Update 22nd September 2018: Another pattern not covered here is fire-and-forget. There are many ways to achieve this, including simply not awaiting (causes warnings – see ways to ignore them), using Task.Run(), using Task.Factory.StartNew(), or async void (not recommended, see “Common Mistakes in Asynchronous Programming with .NET”). This is suitable when you want to trigger some kind of processing but don’t care whether/when it completes. It doesn’t really fit the fast food scenario used in this article: placing an order without ever being notified of its completion/failure is sure to annoy customers. Which I suppose is also why applying for jobs is such a pain in the ass for many people.
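For illustration, a couple of the fire-and-forget variants mentioned above might look like this (SendTelemetryAsync() is a hypothetical operation whose outcome we deliberately ignore):

        static Task SendTelemetryAsync() => Task.Delay(100);

        static void FireAndForget()
        {
            _ = SendTelemetryAsync();              // discarding the task suppresses the "call is not awaited" warning
            Task.Run(() => SendTelemetryAsync());  // offload to the thread pool without observing the result
        }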

Fast Food Example

In order to see each pattern at work, we need a simple example involving multiple tasks. Imagine you walk into your favourite fast food restaurant, and order a meal involving a burger, fries and a drink. Each of these takes a different amount of time to prepare, and the total time of the order may vary depending on how the execution of these three tasks takes place.

Sequential Tasks

The simplest approach is to just execute tasks one after another, waiting for one to finish before starting the next.

        static void Main(string[] args)
        {
            OrderAsync();
            Console.ReadLine();
        }

        static async void OrderAsync()
        {
            var stopwatch = Stopwatch.StartNew();

            await Task.Delay(3000)
                .ContinueWith(task => ShowCompletion("Fries", stopwatch.Elapsed));
            await Task.Delay(1000)
                .ContinueWith(task => ShowCompletion("Drink", stopwatch.Elapsed));
            await Task.Delay(5000)
                .ContinueWith(task => ShowCompletion("Burger", stopwatch.Elapsed));

            ShowCompletion("Order", stopwatch.Elapsed);

            stopwatch.Stop();
        }

        static void ShowCompletion(string name, TimeSpan time)
        {
            Console.WriteLine($"{name} completed after {time}");
        }

In this example code, we are representing the fries, drink and burger tasks as delays of different lengths. The rest of the code is purely diagnostic, allowing us to get some output and understand the results. There is also a workaround allowing us to use asynchronous code in Main(), which was described in the previous article.

Here is the output from the above:

Fries completed after 00:00:03.0359621
Drink completed after 00:00:04.0408785
Burger completed after 00:00:09.0426927
Order completed after 00:00:09.0434057

Because we performed each task sequentially, the total order took 9 seconds. In a fast food restaurant, it probably does not make sense to wait for the fries to be ready before preparing the drink, and to wait for both to be ready before starting to prepare the burger. These could be done in parallel, as we will see in the next sections.

However, there are many legitimate cases where sequential task execution makes sense. We’ve seen one in “Motivation for async/await in C#“:

private async void Button_Click(object sender, RoutedEventArgs e)
{
    var baseAddress = new Uri("http://mta.com.mt");
 
    using (var httpClient = new HttpClient() { BaseAddress = baseAddress })
    {
        var response = await httpClient.GetAsync("/");
        var content = await response.Content.ReadAsStringAsync();
 
        MessageBox.Show("Response arrived!", "Slow website");
    }
}

In this case, the tasks are dependent on each other. In order to get the content of the response, the response itself must first finish executing. Because there is this dependency, the tasks must be executed one after the other.

Parallel Tasks, All Must Finish

If we fire off the tasks without awaiting them right away, there are more interesting things we can do with them. Essentially, by removing await, we are running the tasks in parallel.

        static async void OrderAsync()
        {
            var stopwatch = Stopwatch.StartNew();

            var friesTask = Task.Delay(3000)
                .ContinueWith(task => ShowCompletion("Fries", stopwatch.Elapsed));
            var drinkTask = Task.Delay(1000)
                .ContinueWith(task => ShowCompletion("Drink", stopwatch.Elapsed));
            var burgerTask = Task.Delay(5000)
                .ContinueWith(task => ShowCompletion("Burger", stopwatch.Elapsed));

            await Task.WhenAll(friesTask, drinkTask, burgerTask);

            ShowCompletion("Order", stopwatch.Elapsed);

            stopwatch.Stop();
        }

Aside from removing await before each task, we are assigning them to variables so that we can keep track of them. We then rely on Task.WhenAll() to wait until all tasks have completed (as an analogy, think of it as a memory barrier). Task.WhenAll() is awaitable, unlike its blocking cousin Task.WaitAll(). This gives us a way to easily run asynchronous tasks in parallel where it makes sense to do so.

And in a fast food restaurant, preparing fries and drink while the burger is cooking makes a lot of sense. In fact, the order is ready after just 5 seconds, which is the time of the longest task (the burger). Because the fries and drink were prepared concurrently with the burger, they did not add anything to the total time of the order.

Drink completed after 00:00:01.1696855
Fries completed after 00:00:03.0363008
Burger completed after 00:00:05.0443482
Order completed after 00:00:05.0445130

Note that Task.WhenAll() takes an IEnumerable<Task>, and as such, you can easily pass it a list of tasks (e.g. when the number of tasks is dynamic based on input or data).
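For example, a hedged sketch with a dynamically built list (assuming System.Linq; the delays simply stand in for however many items the order happens to contain):

        var delays = new[] { 3000, 1000, 5000 };
        var tasks = delays.Select(ms => Task.Delay(ms)).ToList();

        await Task.WhenAll(tasks);
        Console.WriteLine("All items ready.");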

Parallel Tasks, First To Finish

If you’re hungry and thirsty after an unexpected trip in the desert, it’s unlikely that you’re going to want to wait for all items to finish before starting to eat and drink. Instead, you’ll consume each item as soon as it arrives.

        static async void OrderAsync()
        {
            var stopwatch = Stopwatch.StartNew();

            var friesTask = Task.Delay(3000)
                .ContinueWith(task => ShowCompletion("Fries", stopwatch.Elapsed));
            var drinkTask = Task.Delay(1000)
                .ContinueWith(task => ShowCompletion("Drink", stopwatch.Elapsed));
            var burgerTask = Task.Delay(5000)
                .ContinueWith(task => ShowCompletion("Burger", stopwatch.Elapsed));

            await Task.WhenAny(friesTask, drinkTask, burgerTask);

            ShowCompletion("Order", stopwatch.Elapsed);

            stopwatch.Stop();
        }

Task.WhenAny() will wait until the first task has completed, and then resume execution of the method. It also returns the task that completed (though we’re not using that here).

Drink completed after 00:00:01.0390588
Order completed after 00:00:01.0412190
Fries completed after 00:00:01.0413729
Burger completed after 00:00:01.0413729

Our results are a little messed up. Since Task.WhenAny() only waits for the first task to complete, the entire order was considered complete as soon as the drink was ready. The stopwatch was subsequently stopped, and the output shows 1 second for everything even though the fries and burger actually took longer.

This scenario is useful when you want to retrieve data from different sources and just use the result that arrived fastest. It is not very intuitive for when you’re dying of hunger and want to gobble up everything as it arrives. We’ll address this in the next section.
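As a quick illustration of that first scenario, here is a hedged sketch that queries two hypothetical mirrors and uses whichever responds first:

        static async Task<string> GetFromFastestMirrorAsync()
        {
            using (var httpClient = new HttpClient())
            {
                var mirror1 = httpClient.GetStringAsync("http://mirror1.example.com/data");
                var mirror2 = httpClient.GetStringAsync("http://mirror2.example.com/data");

                var first = await Task.WhenAny(mirror1, mirror2);
                return await first; // unwraps the winning task's result (or rethrows its exception)
            }
        }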

Parallel Tasks, All Must Finish, Process As They Arrive

So here’s the scenario: we’re famished, and we want to consume our drink, fries and burger as they are ready. We want to consume all of them, but Task.WhenAny() only gives us the first task that completed.

It’s easy to reuse Task.WhenAny() to wait for all tasks to complete, by using a simple loop.

        static async void OrderAsync()
        {
            var stopwatch = Stopwatch.StartNew();

            var friesTask = Task.Delay(3000)
                .ContinueWith(task => ShowCompletion("Fries", stopwatch.Elapsed));
            var drinkTask = Task.Delay(1000)
                .ContinueWith(task => ShowCompletion("Drink", stopwatch.Elapsed));
            var burgerTask = Task.Delay(5000)
                .ContinueWith(task => ShowCompletion("Burger", stopwatch.Elapsed));

            var tasks = new List<Task>() { friesTask, drinkTask, burgerTask };
            
            while (tasks.Count > 0)
            {
                var task = await Task.WhenAny(tasks);
                tasks.Remove(task);

                Console.WriteLine($"Yum! {tasks.Count} left!");
            }

            ShowCompletion("Order", stopwatch.Elapsed);

            stopwatch.Stop();
        }

We’re putting all tasks in a list, and as each task completes, we remove it from the list. We know we’re done when there’s nothing left in the list.

Drink completed after 00:00:01.0506610
Yum! 2 left!
Fries completed after 00:00:03.0328112
Yum! 1 left!
Burger completed after 00:00:05.0317576
Yum! 0 left!
Order completed after 00:00:05.0331167

From this example, it might appear that there’s no benefit from using this approach when compared to just using continuations on tasks and using Task.WhenAll(). However, in real scenarios that don’t involve french fries, it is often reasonable to check the result of each task for failure. If one of the tasks fails, then the operation is aborted without having to wait for all the other tasks to complete.
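For instance, a small variation of the loop above (a sketch, not the article’s original code) could check each completed task and bail out on the first failure:

            while (tasks.Count > 0)
            {
                var task = await Task.WhenAny(tasks);
                tasks.Remove(task);

                if (task.IsFaulted)
                {
                    // In real code you would also observe task.Exception (e.g. to log it).
                    Console.WriteLine("An item failed; abandoning the rest of the order.");
                    break; // stop without waiting for the remaining tasks
                }
            }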

Task With Timeout

As it turns out, we’re so hungry that we’re only willing to wait up to 4 seconds for each item, since the start of the order. If they take longer than 4 seconds, we’ll cancel that part of the order.

Fortunately, there’s an excellent blog post on the Parallel Programming MSDN blog from 2011 that shows how to write a TimeoutAfter() method that does exactly this. I’ll go ahead and steal it:

    public static class TaskExtensions
    {
        public static async Task TimeoutAfter(this Task task, int millisecondsTimeout)
        {
            if (task == await Task.WhenAny(task, Task.Delay(millisecondsTimeout)))
                await task;
            else
                throw new TimeoutException();
        }
    }

It’s an extension method, so we can easily use it with the tasks we already have:

        static async void OrderAsync()
        {
            var stopwatch = Stopwatch.StartNew();

            var friesTask = Task.Delay(3000).TimeoutAfter(4000)
                .ContinueWith(task => ShowCompletion("Fries", stopwatch.Elapsed));
            var drinkTask = Task.Delay(1000).TimeoutAfter(4000)
                .ContinueWith(task => ShowCompletion("Drink", stopwatch.Elapsed));
            var burgerTask = Task.Delay(5000).TimeoutAfter(4000)
                .ContinueWith(task => ShowCompletion("Burger", stopwatch.Elapsed));

            var tasks = new List<Task>() { friesTask, drinkTask, burgerTask };
            
            while (tasks.Count > 0)
            {
                var task = await Task.WhenAny(tasks);
                tasks.Remove(task);

                Console.WriteLine($"Yum! {tasks.Count} left!");
            }

            ShowCompletion("Order", stopwatch.Elapsed);

            stopwatch.Stop();
        }

Running this, the burger task will time out and a TimeoutException will be thrown. Since we’re not actually checking for this, all we see in the output is that the burger task finished after 4 seconds instead of 5.

Drink completed after 00:00:01.0819761
Yum! 2 left!
Fries completed after 00:00:03.0493526
Yum! 1 left!
Burger completed after 00:00:04.0952924
Yum! 0 left!
Order completed after 00:00:04.0974441

By putting a breakpoint or turning on first chance exceptions, though, we see that the TimeoutException was indeed thrown.
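If we wanted to actually handle it, a hedged sketch would be to await the item directly and catch the exception (reusing the stopwatch and the ShowCompletion() helper from the example above):

            try
            {
                await Task.Delay(5000).TimeoutAfter(4000);
                ShowCompletion("Burger", stopwatch.Elapsed);
            }
            catch (TimeoutException)
            {
                Console.WriteLine("Sorry, the burger took too long.");
            }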

Summary

  1. awaiting tasks one after another will result in sequential execution.
  2. Use Task.WhenAll() to wait for all tasks to complete before proceeding.
  3. Use Task.WhenAny() to get the first task that finished, and proceed before waiting for the others.
  4. Use Task.WhenAny() in a loop to process all tasks as they arrive, and potentially break out early in case of failure.
  5. Apply a timeout to a task using the TimeoutAfter() extension method from the Parallel Programming blog on MSDN.

Working with Asynchronous Methods in C#

In yesterday’s article, “Motivation for async/await in C#“, we have seen why asynchronous programming is important. We have also seen basic usage of the await keyword, which requires its containing method to be marked as async.

When learning to write asynchronous methods, it is not trivial to get the interactions between various methods (which may or may not be asynchronous) right. In fact, the async void approach used in yesterday’s examples should normally only be used in event handlers, and even then, there are caveats to consider.

In this article, we’ll go through various different scenarios in which async/await can be used.

async Task methods

Let’s take another look at the asynchronous (event handler) method from yesterday’s article:

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            var baseAddress = new Uri("http://mta.com.mt");

            using (var httpClient = new HttpClient() { BaseAddress = baseAddress })
            {
                var response = await httpClient.GetAsync("/");
                var content = await response.Content.ReadAsStringAsync();

                MessageBox.Show("Response arrived!", "Slow website");
            }
        }

Try moving out the code into a separate method, and awaiting it from the event handler. You’ll find that you can’t await an async void method:

In order to be able to await a method, it must return Task (if it doesn’t need to return anything, such as void methods) or Task<T> (if it needs to return a value of type T). We also append an Async suffix to the method name by convention, to make it obvious to people who use such methods that they’re meant to be awaited.

Thus, this example becomes:

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            await GetHtmlAsync();
        }

        private async Task GetHtmlAsync()
        {
            var baseAddress = new Uri("http://mta.com.mt");

            using (var httpClient = new HttpClient() { BaseAddress = baseAddress })
            {
                var response = await httpClient.GetAsync("/");
                var content = await response.Content.ReadAsStringAsync();

                MessageBox.Show("Response arrived!", "Slow website");
            }
        }

This is an example of an async Task method, which does not return anything. Let’s change it such that it returns the HTML from the response:

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            string html = await GetHtmlAsync();
        }

        private async Task<string> GetHtmlAsync()
        {
            var baseAddress = new Uri("http://mta.com.mt");

            using (var httpClient = new HttpClient() { BaseAddress = baseAddress })
            {
                var response = await httpClient.GetAsync("/");
                var content = await response.Content.ReadAsStringAsync();

                return content;
            }
        }

We’ve changed the signature of GetHtmlAsync() to return Task<string> instead of just Task. Correspondingly, we are now returning content (a string) from the method. At the event handler, we are now assigning the result of the await into the html variable. Apart from waiting asynchronously until the method completes, await has the additional function of unwrapping the result from the Task that contains it; thus html is of type string.

If you try removing the async keyword from GetHtmlAsync(), you’ll learn a little more about the actual function of the async keyword:

Without async, you are expected to return what the method advertises: a Task<string>. On the other hand, if you mark the method as async, the meaning of the method is changed such that you can return a string directly. The underlying Task-related plumbing is handled by the compiler.

Chaining Asynchronous Methods

In the previous section, we have seen how methods need to return a Task in order to be awaited. Typically, one async Task method will call another and await its result.

The chain of calls ends at a last async Task, typically provided by the .NET Framework or other library, which interfaces directly with I/O (e.g. network or filesystem). It must be an asynchronous method; attempting to disguise a blocking call as async here will lead to deadlocks.

async Task may (and should) be used all the way from an incoming request to the final I/O library method in application types that support top-level asynchronous methods, such as Web API or WCF.

The situation is a little different for other applications (e.g. Windows Forms, WPF) that are event-driven. Asynchronous event handlers are a special case where we need to use async void methods, as we have already seen in the WPF example from yesterday’s article:

async void methods

Event handlers are void methods that are called by the runtime in a dispatcher loop. Thus, async void methods are necessary to allow usage of await within event handlers without requiring them to return a Task.

However, as we have seen before, there is no way to await an async void method. This makes async void methods very dangerous to use outside of their intended context, as I have detailed in “The Dangers of async void Event Handlers“. This is one of the more common mistakes when programming with async/await, and it is good to become familiar with the problems in order to avoid repeating the same mistakes in future.

Fake Asynchronous Methods

Sometimes, you’ll have an interface that requires asynchronous methods, yet your implementation does not need anything asynchronous in it. Let’s look at a practical example:

    public interface ISimpleStorage
    {
        Task WriteAsync(string str);
        Task<string> ReadAsync();
    }

You could implement this interface using a simple file as a backing store, in which case your methods will be suitably asynchronous:

    public class FileStorage : ISimpleStorage
    {
        public async Task<string> ReadAsync()
        {
            using (var fs = File.OpenRead("file.txt"))
            using (var sr = new StreamReader(fs))
            {
                var str = await sr.ReadToEndAsync();
                return str;
            }
        }

        public async Task WriteAsync(string str)
        {
            using (var fs = File.OpenWrite("file.txt"))
            using (var sw = new StreamWriter(fs))
            {
                await sw.WriteAsync(str);
                await sw.FlushAsync();
            }
        }
    }

However, you could have another implementation which just uses memory as storage, and in this case there’s nothing asynchronous:

    public class MemoryStorage : ISimpleStorage
    {
        private string str;

        public Task<string> ReadAsync()
        {
            return Task.FromResult(str);
        }

        public Task WriteAsync(string str)
        {
            this.str = str;
            return Task.CompletedTask;
        }
    }

In that case, your methods need not be marked async. However, this means that you will actually need to return Task instances from each method. If you have nothing to return, then just return Task.CompletedTask (available from .NET Framework 4.6 onwards). You can also use Task.FromResult() to construct a task from a variable that you want to return.

Simplifying Single Line Asynchronous Methods

Consider the following asynchronous method:

        static async Task WaitALittleAsync()
        {
            await Task.Delay(10000);
        }

Here, we are waiting for the delay to finish, and then returning the result.

Instead, we can just return the Task itself, and let the caller do the awaiting:

        static Task WaitALittleAsync()
        {
            return Task.Delay(10000);
        }

Once again, a method does not need to be marked async if (a) it does not await anything, and (b) it can return a Task, rather than some other type that needs to be wrapped in a Task.

Using async/await in Main()

Until recently, you couldn’t use async/await in a console application’s Main() method. You can have an async Task Main() method as from C# 7.1, but you need to make sure you’re using C# 7.1 first from your project properties -> Build -> Advanced…:

You can choose either “C# 7.1”, or “C# latest minor version (latest)”. With that, you can await directly from within Main():

        static async Task Main(string[] args)
        {
            await WaitALittleAsync();
        }

Before C# 7.1, you had to use some other workaround to await from Main(). One option is to use a special AsyncContext such as the one written by Stephen Cleary. Another is to just move asynchrony out of Main() and use a Console.ReadLine() to keep the window open:

        static void Main(string[] args)
        {
            Run();
            Console.ReadLine();
        }

        static async void Run()
        {
            await WaitALittleAsync();
        }

Summary

  1. Use async Task methods wherever you can.
  2. Use async void methods for event handlers, but beware the dangers even there.
  3. Use the Async suffix in asynchronous method names to make them easily recognisable as awaitable.
  4. Use Task.CompletedTask and Task.FromResult() to satisfy asynchronous contracts with synchronous code.
  5. Asynchronous code in Main() has not been possible until recently. Use C# 7.1+ or a workaround to get this working.

Motivation for async/await in C#

Introduction

In .NET 4.5, the async and await keywords were introduced. They are now a welcome and ubiquitous part of the .NET developer’s toolkit. However, I often run into developers who are not familiar with this concept, and I have had to explain async/await many times. Thus, I felt it would make sense to write up this explanation so that it can be accessible to all.

async/await is really just syntactic sugar for task continuations, but it makes asynchronous, I/O-bound code very elegant to work with. In fact, it is so popular that it has been spreading to other languages such as JavaScript.

As far as I can remember, async/await was introduced mainly to facilitate building Windows Phone applications. Unlike desktop applications, Windows Phone had introduced a hard limit in which no method could block for more than 50ms. Thus it was no longer possible for developers to resort to blocking their UI, and the need arose to use asynchronous paradigms, for which tasks are a very powerful, but sometimes tedious, option. async/await makes asynchronous code look almost identical to synchronous code.

In this article, I will not go into detail about using Tasks in .NET (which deserves a whole series of articles in itself), but I will give a very basic explanation on how to use them for asynchronous processing. More advanced usage is left for future articles.

Sidenote: Rejoice, for this is the 200th article published on Gigi Labs!

Motivation

Although async/await can be used in all sorts of applications, I like to use WPF applications to show why it’s important. You don’t need to know WPF for this; just follow along.

Let’s create a new WPF application, and simply drag a button from the Toolbox onto the WPF window:

Double-clicking that button will cause a click event handler to be generated in the codebehind class (MainWindow.xaml.cs):

        private void Button_Click(object sender, RoutedEventArgs e)
        {

        }

Now, let’s be evil and toss a Thread.Sleep() in there:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            Thread.Sleep(15000);
        }

Run the application without debugging (Ctrl+F5) and click the button. Try to interact with the window (e.g. drag it around). What happens?

The UI is completely frozen; you can’t click the button again, drag the window, or close it. That’s because we’ve done something very, very stupid. We have blocked the application’s UI thread.

Let us now replace the Thread.Sleep() with this Task.Delay() instead (notice there is also that async in the method signature, and it’s important):

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            await Task.Delay(15000);
        }

If you run the application now, you’ll find that you can interact with the window after clicking the button, even though the code appears to be doing practically the same thing as before. So what changed?


Image taken from: Asynchronous Programming with Async and Await (C# and Visual Basic)

The way control flow works with async/await is explained in detail on MSDN, but it may feel a little overwhelming if you’re learning this the first time. Essentially, to understand what is happening, we need to break that await Task.Delay(15000) line into the following steps:

  1. Task.Delay(15000) executes, returning a Task representing the delay. It returns immediately, so it does not block the UI thread.
  2. Because of the await, execution of the remainder of the method is temporarily suspended. In terms of method execution, it’s as if we’re blocking.
  3. Eventually, the Task finishes executing. The await part is fulfilled, and the remainder of the method can resume execution.

async/await with HttpClient

Using a simple delay is a nice way to get an initial feel of async/await, but it is also not very realistic. async/await works best when you are dealing with I/O requests such as sockets, databases, NoSQL storage, files, REST, etc. To this effect, let’s adapt our example to use an HttpClient.

First, let’s install the following package via NuGet:

Install-Package Microsoft.Net.Http

We’re going to repeat the same experiment as before: first we’ll do a silly blocking implementation, and then we’ll make it asynchronous.

Let’s work with this code that gets the response from the Google homepage:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            var baseAddress = new Uri("http://www.google.com");

            using (var httpClient = new HttpClient() { BaseAddress = baseAddress })
            {
                var response = httpClient.GetAsync("/").Result;
                var content = response.Content.ReadAsStringAsync().Result;
            }
        }

If we run this and click the button, the window blocks for a fraction of a second, and we get back the HTML from Google:

Google has a very fast response time, so it is a poor example for visualising the problems of blocking code in a UI. Instead, let’s use a website that takes a very long time to load. One that I’ve written about before is that of the Malta Tourism Authority, which takes over 20 seconds to load. Let’s change our base address in the code:

            var baseAddress = new Uri("http://mta.com.mt");

If you run the application now and click the button, you’ll see that it will be stuck for a fairly long time, just like before. We can sort this out by making the request asynchronous. To do that, we replace each .Result property with an await. In order to use await, we have to mark the method as async. In order to get a notification when the response actually comes back, I’ll also toss in a message box:

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            var baseAddress = new Uri("http://mta.com.mt");

            using (var httpClient = new HttpClient() { BaseAddress = baseAddress })
            {
                var response = await httpClient.GetAsync("/");
                var content = await response.Content.ReadAsStringAsync();

                MessageBox.Show("Response arrived!", "Slow website");
            }
        }

Run the application and click the button. You can interact with the window while the request is processed, and when the response comes back, you’ll get a notification:

If you put a breakpoint and follow along line by line, you’ll see that everything happens sequentially: the content line will not execute before the response has arrived. So async/await is really just giving us the means to make asynchronous code look very similar to normal sequential execution logic, hiding the underlying complexities. However, as we’ll see in later articles, it is also possible to use this abstraction for more powerful scenarios such as parallel task execution.

The Importance of Asynchronous Code

While I have not really explained how to work with async/await in any reasonable depth yet, the importance should now be evident: making I/O calls asynchronous means that your thread does not need to block, and can be freed to do other work. While this has a great impact on user experience in GUI applications, it is also fundamental in other applications such as APIs. Under high load, it is possible to end up with thread starvation when blocking, because all threads are stuck waiting for I/O, when they could otherwise be processing requests.

For pure asynchronous code, blocking isn’t actually ever necessary. When an I/O request such as HttpClient.GetAsync() is fired, .NET uses something called I/O Completion Ports (IOCP) which can monitor incoming I/O and trigger the caller to resume when ready.

We have merely scratched the surface here. There is a lot to be said about async/await, and likewise there are a lot of pitfalls that one must be made aware of. This article has shown why asynchronous code is important, and future articles may cover different aspects in more detail.

The Adapter Design Pattern for DTOs in C#

In this article, we discuss the Adapter design pattern, which is part of the book “Design Patterns: Elements of Reusable Object-Oriented Software” by Gamma et al. (also known as the Gang of Four).

We will discuss the motivation for this design pattern (including common misconceptions) and see a number of different ways in which this pattern is implemented in C#.

TL;DR Use extension methods for your mapping code.

Defining the Interface of a Class

The above book summarises the Adapter pattern thusly:

“Convert the interface of a class into another interface clients expect. Adapter lets classes work together that couldn’t otherwise because of incompatible interfaces.”

It is fundamental to interpret this description in the context in which it was written. The book was published in 1994, before Java and similar languages even existed (in fact, the examples are in C++). Back then, “the interface of a class” simply meant its public API, to which extent “client code” could use it. C++ has no formal interface construct as in C# and Java, and the degree of encapsulation is dictated as a result of what the class exposes publicly. Note that a class’s interface is not restricted to its public methods alone; C++ also offers other devices external to the class itself, and so does C# (extension methods, for instance).

Given today’s languages where interfaces are formal language constructs, such an interpretation has mostly been forgotten. For instance, the 2013 MSDN Magazine article “The Adapter Pattern in the .NET Framework” illustrates the Adapter design pattern in terms of C# interfaces, but it slightly misses the point. We can discuss the Adapter pattern without referring to C# interfaces at all, and instead focus on Data Transfer Objects (DTOs). DTOs are simply lightweight classes used to carry data, often between remote applications.

An Example Scenario

Let’s say we have this third party class with a single method:

    public class PersonRepository
    {
        public void AddPerson(Person person)
        {
            // implementation goes here
        }
    }

The interface of this class consists of the single AddPerson() method, but the Person class that it expects as a parameter is also part of that interface. There is no way that the third party could give us the PersonRepository (e.g. via a NuGet package) without also including the Person class.

It is often the case that we may have a different Person class of our own (we’ll call it OurPerson), but we can’t pass it to AddPerson(), because it expects its own Person class:

    public class Person // third party class
    {
        public string FullName { get; set; }

        public Person(string fullName)
        {
            this.FullName = fullName;
        }
    }

    public class OurPerson // our class
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }

        public OurPerson(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }
    }

Thus, we need a way to transform an OurPerson instance into a Person. For that, we need an Adapter. Next, we’ll go through a few ways in which we can implement this.

Constructor Adapter

One way of creating a Person from an OurPerson is to add a constructor in Person which handles the conversion:

    public class Person // third party class
    {
        public string FullName { get; set; }

        public Person(string fullName)
        {
            this.FullName = fullName;
        }

        public Person(OurPerson ourPerson)
        {
            this.FullName = $"{ourPerson.FirstName} {ourPerson.LastName}";
        }
    }

It is not hard to see why this is a really bad idea. It forces Person to have a direct dependency on OurPerson, so anyone shipping PersonRepository would now need to ship an additional class that may be domain-specific and might not belong in this context. Even worse, this is not possible to achieve if the Person class belongs to a third party and we are not able to modify it.

Wrapper

Another approach (which the aforementioned book describes) is to implement OurPerson in terms of Person. This can be done by subclassing, if the third party class allows it:

    public class OurPerson : Person // our class
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }

        public OurPerson(string firstName, string lastName)
            : base($"{firstName} {lastName}")
        {
            FirstName = firstName;
            LastName = lastName;
        }
    }

Where C# interfaces are involved, an alternative approach to inheritance is composition. OurPerson could contain a Person instance and expose the necessary methods and properties to implement its interface.
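As a quick sketch of the composition variant (a hypothetical example, not taken from the book), OurPerson could build and hold a Person internally, and hand it over wherever the third party API requires one:

    public class OurPerson // our class, wrapping the third party Person
    {
        private readonly Person person;

        public string FirstName { get; }
        public string LastName { get; }

        public OurPerson(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
            person = new Person($"{firstName} {lastName}");
        }

        // Hands the wrapped instance to APIs that expect a Person,
        // such as PersonRepository.AddPerson().
        public Person AsPerson() => person;
    }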

The disadvantage of either of these two approaches is that they make OurPerson dependent on Person, which is the opposite of the problem we have seen in the previous approach. This dependency would be carried to wherever OurPerson is used.

Especially when dealing with third party libraries, it is usually best to map their data onto our own objects. Any changes to the third party classes will thus have limited impact in our domain.

AutoMapper

A lot of people love to use AutoMapper for mapping DTOs. It is particularly useful if the source and destination classes are practically identical in terms of properties. If they’re not, you’ll have to do a fair amount of configuration to tell AutoMapper how to construct the destination properties from the data in the source.
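To illustrate (a hedged sketch; the exact fluent API depends on the AutoMapper version you’re using), mapping OurPerson onto Person is not a simple property-for-property copy, so the configuration has to spell out how the destination is constructed:

            var config = new MapperConfiguration(cfg =>
            {
                // Person exposes only a single-string constructor, so we
                // must tell AutoMapper explicitly how to build it.
                cfg.CreateMap<OurPerson, Person>()
                   .ConstructUsing(src => new Person($"{src.FirstName} {src.LastName}"));
            });

            var mapper = config.CreateMapper();
            var person = mapper.Map<Person>(new OurPerson("Chuck", "Berry"));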

Personally, I’m not a fan of AutoMapper, for the following reasons:

  • The mapping occurs dynamically at runtime. Thus, as the DTOs evolve, you will potentially have runtime errors in production. I prefer to catch such issues at compile-time.
  • Writing AutoMapper configuration can get very tedious and unreadable, often more so than it would take to map DTOs’ properties manually.
  • You can’t do anything asynchronous in AutoMapper configuration. While this may sound bizarre, I’ve needed this in the past due to terrible DTOs provided by a third party provider.

Standalone Adapter

The aforementioned MSDN Magazine article simply uses a separate class to convert from source to destination. Applied to our example, it might look like this:

    public class PersonAdapter
    {
        public Person ConvertToPerson(OurPerson person)
        {
            return new Person($"{person.FirstName} {person.LastName}");
        }
    }

We may then imagine its usage as such:

            var ourPerson = new OurPerson("Chuck", "Berry");
            var adapter = new PersonAdapter();
            var person = adapter.ConvertToPerson(ourPerson);

This approach is valid, and does not couple the DTOs together, but it is a little tedious in that you may have to create a lot of adapter classes, and subsequently create instances whenever you want to use them. You can mitigate this by making them static.

Extension Methods

A slight but very elegant improvement over the previous approach is to put mapping code into extension methods.

    public static class OurPersonExtensions
    {
        public static Person ToPerson(this OurPerson person)
        {
            return new Person($"{person.FirstName} {person.LastName}");
        }
    }

The usage is very clean, making it look like the conversion operation is part of the interface of the class:

            var ourPerson = new OurPerson("Chuck", "Berry");
            var person = ourPerson.ToPerson();

This works just as well if you’re converting from a third party object onto your own class, since extension methods can be used to append functionality even to classes over which you have no control.
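For example, here is a sketch going in the opposite direction (with an admittedly naive name split, purely for illustration): an extension method on the third party Person produces an OurPerson, even though we cannot touch Person itself.

    public static class PersonExtensions
    {
        public static OurPerson ToOurPerson(this Person person)
        {
            // Naive assumption for illustration: the full name is
            // "FirstName LastName".
            var parts = person.FullName.Split(new[] { ' ' }, 2);
            var firstName = parts[0];
            var lastName = parts.Length > 1 ? parts[1] : string.Empty;

            return new OurPerson(firstName, lastName);
        }
    }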

Summary

There are many ways to apply the Adapter design pattern in mapping DTOs. Extension methods are the best approach I’ve come across for C# because:

  • They don’t couple the source and destination DTOs.
  • The mapping code is verified at compile time, so any repercussions of changes to the DTOs will be caught early.
  • They can easily be used to append functionality even to third party classes for which the source code is not available.

The Adapter design pattern has little to do with interfaces as a formal OOP language construct. Instead, it deals with “the interface of a class”, which is embodied by whatever it exposes publicly. The Adapter design pattern provides a means to work with that interface by converting incompatible objects to ones that satisfy its contract.

RabbitMQ: Who Creates the Queues and Exchanges?

Messaging is a fundamental part of any distributed architecture. It allows a publisher to send a message to any number of consumers, without having to know anything about them. This is great for truly asynchronous and decoupled communications.

The above diagram shows a very basic but typical setup you would see when using RabbitMQ. A publisher publishes a message onto an exchange. The exchange handles the logic of routing the message to the queues that are bound to it. For instance, if it is a fanout exchange, then a copy of the same message would be duplicated and placed on each queue. A consumer can then read messages from a queue and process them.

An important assumption for this setup to work is that when publishers and consumers are run, all this RabbitMQ infrastructure (i.e. the queues, exchanges and bindings) must already exist. A publisher would not be able to publish to an exchange that does not exist, and neither could a consumer take messages from a nonexistent queue.

Thus, it is not unreasonable to have publishers and/or consumers create the queues, exchanges and bindings they need before beginning to send and receive messages. Let’s take a look at how this may be done, and the implications of each approach.

1. Split Responsibility

To have publishers and consumers fully decoupled from each other, ideally the publisher should know only about the exchange (not the queues), and the consumers should know only about the queues (not the exchange). The bindings are the glue between the exchange and the queues.

One possible approach could be to have the publisher handle the creation of the exchange, while consumers create the queues they need and bind them to the exchange. This has the advantage of decoupling: as new queues are needed, the consumers that need them will simply create and bind them, without the publisher needing to know anything about them. It is not fully decoupled though, as the consumers must know the exchange in order to bind to it.
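As a rough sketch using the RabbitMQ .NET client (the exchange and queue names are made up), the publisher declares only the exchange, while each consumer declares its own queue and binds it:

    using RabbitMQ.Client;

    var factory = new ConnectionFactory { HostName = "localhost" };

    // Publisher side: declare the exchange only.
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.ExchangeDeclare(exchange: "orders", type: ExchangeType.Fanout, durable: true);
        // ... publish messages to the "orders" exchange ...
    }

    // Consumer side: declare the queue and bind it to the exchange.
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queue: "orders.email", durable: true, exclusive: false, autoDelete: false);
        channel.QueueBind(queue: "orders.email", exchange: "orders", routingKey: "");
        // ... consume messages from the "orders.email" queue ...
    }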

On the other hand, there is a very real danger of losing messages. If the publisher is deployed before any consumers are running, then the exchange will have no bindings, and any messages published to it will be lost. Whether this is acceptable is something application-dependent.

2. Publisher Creates Everything

The publisher could be configured to create all the necessary infrastructure (exchanges, queues and bindings) as soon as it runs. This has the advantage that no messages will be lost (because queues will be bound to the exchange without needing any consumers to run first).

However, this means that the publisher must know about all the queues that will be bound to the exchange, which is not a very decoupled approach. Every time a new queue is added, the publisher must be reconfigured and redeployed to create it and bind it.

3. Consumer Creates Everything

The opposite approach is to have consumers create exchanges, queues and bindings that they need, as soon as they run. Like the previous approach, this introduces coupling, because consumers must know the exchange that their queues are binding to. Any change in the exchange (renaming, for instance) means that all consumers must be reconfigured and redeployed. This complexity may be prohibitive when a large number of queues and consumers are present.

4. Neither Creates Anything

A completely different option is for neither the publisher nor the consumer to create any of the required infrastructure. Instead, it is created beforehand using either the user interface of the Management Plugin or the Management CLI. This has several advantages:

  • Publishers and consumers can be truly decoupled. Publishers know only about exchanges, and consumers know only about queues.
  • This can easily be scripted and automated as part of a deployment pipeline (see the sketch after this list).
  • Any changes (e.g. new queues) can be added without needing to touch any of the existing, already-deployed publishers and consumers.
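As a sketch of what such a script could look like using rabbitmqadmin (the names are made up, and the exact flags may differ slightly depending on your version):

    rabbitmqadmin declare exchange name=orders type=fanout durable=true
    rabbitmqadmin declare queue name=orders.email durable=true
    rabbitmqadmin declare binding source=orders destination=orders.email destination_type=queue routing_key=""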

Summary

Asynchronous messaging is a great way to decouple services in a distributed architecture, but to keep them decoupled, a valid strategy for maintaining the underlying messaging constructs (in the case of RabbitMQ, these would be the queues, exchanges and bindings) is necessary.

While publisher and consumer services may themselves take care of creating what they need, there could be a heavy price to pay in terms of initial message loss, coupling, and operational maintenance (configuration and deployment).

The best approach is probably to handle the messaging system configuration where it belongs: scripting it outside of the application. This ensures that services remain decoupled, and that the queueing system can change dynamically as needed without having to impact a lot of existing services.

.NET Core 2.0: Referencing .NET Framework Libraries: A Topshelf Experiment

Referencing .NET Framework Libraries in .NET Core 1.1

There are many old libraries targeting the .NET Framework which, for various reasons, do not yet target .NET Core or .NET Standard. Topshelf, a fantastic library that helps you to easily create Windows services, is one of these. At the time of writing this article, the last release of Topshelf was 4.0.3 back in October 2016. There was no way that Topshelf could target .NET Standard because the .NET Standard spec did not support the APIs that are required for it to function.

This was a problem because there was no way you could create a Windows service using Topshelf for a .NET Core 1.1 application. Simply trying:

Install-Package Topshelf

…is bound to fail miserably:

If you want to make a Windows service out of an application targeting .NET Core 1.1, then you have to use an alternative such as NSSM.

What Changed in .NET Core 2.0

With the release of .NET Core 2.0 and .NET Standard 2.0, applications or libraries targeting either of these are able to reference old libraries targeting the full .NET Framework. Presumably this is because the .NET Core/Standard 2.0 implementations have enough API coverage to overlap with what the full framework was able to offer.

Quoting the .NET Core/Standard 2.0 announcement linked above:

“You can now reference .NET Framework libraries from .NET Standard libraries using Visual Studio 2017 15.3. This feature helps you migrate .NET Framework code to .NET Standard or .NET Core over time (start with binaries and then move to source). It is also useful in the case that the source code is no longer accessible or is lost for a .NET Framework library, enabling it to still be used in new scenarios.

“We expect that this feature will be used most commonly from .NET Standard libraries. It also works for .NET Core apps and libraries. They can depend on .NET Framework libraries, too.

“The supported scenario is referencing a .NET Framework library that happens to only use types within the .NET Standard API set. Also, it is only supported for libraries that target .NET Framework 4.6.1 or earlier (even .NET Framework 1.0 is fine). If the .NET Framework library you reference relies on WPF, the library will not work (or at least not in all cases). You can use libraries that depend on additional APIs, but not for the codepaths you use. In that case, you will need to invest significantly in testing.”

Example with Topshelf

In order to actually test this out, you’ll need to have Visual Studio 15.3 or later. You will also need to separately install the .NET Core 2.0 SDK.

In an earlier section, we tried installing Topshelf in a .NET Core 1.1 application, and failed. Let’s try doing the same thing with a .NET Core 2.0 application:

Install-Package Topshelf

The package installation works pretty well:
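For reference, the resulting project file should look roughly like this (a sketch; the Topshelf version is the 4.0.3 release mentioned earlier):

    <Project Sdk="Microsoft.NET.Sdk">

      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>netcoreapp2.0</TargetFramework>
      </PropertyGroup>

      <ItemGroup>
        <PackageReference Include="Topshelf" Version="4.0.3" />
      </ItemGroup>

    </Project>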

However, the warning that shows under the dependency is not very promising:

There’s only one way to find out whether this will actually work in practice.

Let’s steal the code from the Topshelf quickstart documentation:

using System;
using System.Timers;
using Topshelf;

public class TownCrier
{
    readonly Timer _timer;
    public TownCrier()
    {
        _timer = new Timer(1000) {AutoReset = true};
        _timer.Elapsed += (sender, eventArgs) => Console.WriteLine("It is {0} and all is well", DateTime.Now);
    }
    public void Start() { _timer.Start(); }
    public void Stop() { _timer.Stop(); }
}

public class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>                                 //1
        {
            x.Service<TownCrier>(s =>                        //2
            {
               s.ConstructUsing(name=> new TownCrier());     //3
               s.WhenStarted(tc => tc.Start());              //4
               s.WhenStopped(tc => tc.Stop());               //5
            });
            x.RunAsLocalSystem();                            //6

            x.SetDescription("Sample Topshelf Host");        //7
            x.SetDisplayName("Stuff");                       //8
            x.SetServiceName("Stuff");                       //9
        });                                                  //10
    }
}

Nope, looks like Topshelf won’t work even now.

I guess the APIs supported by .NET Core 2.0 still do not have enough functionality for Topshelf to work as-is. Other .NET Framework libraries may work though, depending on the dependencies they require. In the “.NET Core 2.0 Released!” video, one of the demos shows SharpZipLib 0.86 (last released in 2011) being installed in an ASP .NET Core 2.0 application. It is shown to build, but we don’t get to see whether it works at runtime.

It is still early, and I suppose we have yet to learn more about the full extent of support for .NET Framework libraries from .NET Core 2.0 applications and .NET Standard 2.0 libraries. The problem is that when evaluating a third-party library such as Topshelf, it’s difficult to determine whether its own dependencies fall within the .NET Standard API set. This looks to me like a matter of pure trial and error.

The Abysmal State of the Web in June 2017

This will be the last article in the Sorry State of the Web series (at least for the time being). The idea was to learn from the mistakes of other so-called ‘professional’ websites, ranging from silly oversights to illegal practices. Hopefully, the silliness encountered has also made some people smile.

However, with 11 articles over 6 months, I believe I’ve made my point enough times over. Despite all the technological advancements, the web is in a state that I can call sick at best, and that is mainly the result of clueless developers. I have some slight hope that things may get better, but given that most of the issues I pointed out have not been addressed to date, that hope is realistically very slim.

For my part, I want to focus less on beating a dead horse and more on learning technology and writing high quality articles. I don’t exclude revisiting this series in the future if I feel it’s worth it, though. Once again, I extend my heartfelt thanks to all those who have contributed entries for this article and the ones before it.

Banif: Random Virtual Keyboard

If you think that the mainstream banks in Malta have terrible websites (and recently I covered how Mediterranean Bank’s newly launched online investment platform took them several steps back), then you should really take a look at Banif Bank Malta.

To log into their online banking section, you have to enter a username and a password. This would be understandable, if not for the fact that the password field is disabled so you can’t actually type into it. Instead, you have to click on keys on a virtual keyboard. To make matters worse, this is not your usual QWERTY keyboard: the key placements are randomised.

Let’s consider a few reasons why this is a terrible idea:

  • It makes it a lot harder for users to type in their password (in terms of user experience).
  • It slows down password entry, both because one has to use the mouse rather than the keyboard, and because the random placement forces the eye to search for keys instead of relying on muscle memory. This makes it easier for onlookers to identify what you are actually entering, and it also encourages users to pick simpler passwords.
  • People looking over your shoulder can easily see what key the cursor is on, which defeats the purpose of password field obfuscation.
  • The restrictions on the password field are client-side and trivial to disable. This does no favours for server-side security, which should really be the main focus.
  • You cannot use a password manager.

Since I’m not a security expert, I presented this case to the community at Information Security Stack Exchange. From there, I got to two related existing questions:

It seems that the main reason why this horrendous technique is used is to counteract keyloggers, which at a basic level can’t track keypresses (since they are not happening) or mouse clicks (since the placement of keys on the screen changes).

However, as one of the best answers points out, this is merely an arms race between the bank and attackers. It’s a vicious circle in which attackers and banks take it in turns to step up their game. The end result is that customers are the ones paying the price, by having to deal with ridiculous security measures like this.

Dealing with keyloggers is hardly an excuse for this kind of rubbish. There are much more robust and orthodox ways of dealing with this sort of thing, such as one-time passwords or two-factor authentication.

Insecure Logins

One of the most common issues we’ve seen throughout this series is that of websites with login forms where the credentials are not transmitted over HTTPS. Thus it is not hard for them to be intercepted and read in clear text. Keeping up with tradition, we have a list of such examples this month.

We can start with American Scientist, which I see has since undergone a complete redesign and does currently use HTTPS for the whole website (including login). This is how it was just a couple of weeks ago:

Then we have the Malta Chamber of Advocates, which, aside from ridiculously presenting a homepage with no content whatsoever, is just another case of insecure login:

But wait! The next one, ironically, is from none other than Bank Info Security:

Then we have Great Malta (whatever that is supposed to mean):

Local newspaper The Malta Independent is no less guilty:

…and neither is Infobel:

In another case of irony, we can look at J. Grima & Co. Ltd. They are “Security & Fire Specialists”, but web security is clearly not one of their areas of expertise.

Excitable Web

I was very excited (!) to come across Excitable Web, because it is a prime example of the clueless developers I was mentioning earlier. It is of little importance that each time you load a page, it seems to render without CSS for half a second before displaying properly, because we’ll focus on more interesting stuff here. If you click on the “Who We Are” link, we get this:

You can see there are a couple of MySQL errors displaying directly in the page due to deprecated code. Such an experienced professional should know that server-side errors should never be displayed directly to the visitor, as this may reveal vulnerabilities among other things.

These errors seem to have been fixed since then, so we’ll move onto the next thing: the writing. It’s really generous of the webmaster to give us:

“A Breif [sic] Background On With [sic] Whome [sic] You Are Dealing With”

You can find other such gems within the content itself. Thank you, Adrian. Now we really know who we are dealing with.

For extra points, spot one of my own blunders within that screenshot!

Flybussen Translations

Here’s a tiny oversight from Norwegian operator Flybussen. While their site has an English version, their calendar unfortunately doesn’t:

JobsPlus Going Below Minimum Wage

JobsPlus has by now become a regular in this series. Those who believe that we should have equal pay for equal work (which is a legal requirement, by the way) will be delighted to see this vacancy, which advertises a salary of between EUR4,500 and EUR70,000. What’s even funnier, though, is that EUR4,500 is actually below the minimum wage (another legal requirement) for a 40-hour full-time work week.

Legal requirements aside, this is just a case of missing validation by our award-winning friends at JobsPlus who should have a central role in avoiding precarious work and exploitation.

Kelly on Yellow Pages

If you take a look at the Yellow Pages entry for Kelly Industries, you’ll come to the conclusion that they have enough business to not give a rat’s ass about what potential customers think about their brand.

Creativity Centre

I’ve received reports about issues with Malta’s National Centre for Creativity’s payment processing engine, but I haven’t been able to verify them without actually attempting to make a purchase. However, I did notice this problem, where the checkout button is not properly visible if you’re using a laptop (and thus a limited screen resolution):

For a National Centre for Creativity, I must also point out that they didn’t quite put a lot of creativity into the website’s design.

Mixed Content

Another common problem we’ve seen throughout the series is that of using HTTPS, but serving some content over HTTP. This is called Mixed Content, and it invalidates the trust guaranteed by a fully HTTPS website.

This month, we have Malta Gift Service (also guilty of using Comic Sans for their main header):

…and our dear friends at Scan:

Apostrophes of Doom

Given that my surname contains an apostrophe, it is often a pain to deal with validation that unreasonably decides that an apostrophe is an invalid character. I’ve written about this especially in the original “The Sorry State of the Web in 2016”. There is no real reason not to accept apostrophes if you’re using proper practices (e.g. prepared statements) to prevent SQL injection.
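For instance, here is a minimal sketch (the table and column names are invented) showing that a parameterised query happily accepts an apostrophe in a surname, because the value is passed as data rather than spliced into the SQL text:

    using System.Data.SqlClient;

    public static void SavePerson(SqlConnection connection, string surname)
    {
        // A surname like "O'Brien" is passed as a parameter value, so the
        // apostrophe cannot break the query or enable SQL injection.
        using (var command = new SqlCommand(
            "INSERT INTO People (Surname) VALUES (@surname)", connection))
        {
            command.Parameters.AddWithValue("@surname", surname);
            command.ExecuteNonQuery();
        }
    }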

Unfortunately, Microsoft has decided that my surname cannot have an apostrophe:

I suppose I will need to remove the apostrophe from my identity card if I want to ever get a job at Microsoft.

Piscopo Gardens

The Piscopo Gardens website has been down for I don’t know how long due to some internal server error.

Aron isn’t doing a very good job at keeping the site up and running.

Robert Half

Swiss recruiter Robert Half believes that “It’s time we all work happy.™” (so much that a trademark was apparently filed).

That obviously doesn’t apply to their own website, which clearly doesn’t work if you enter “.net” in the search field:

Now I understand the name. Their website only Half works.

Ryanair Mischief

We noticed a couple of things on Ryanair’s website that are more sneaky practices than examples of bad web design per se.

First, there’s the newsletter checkbox that is opt-out rather than opt-in (i.e. it automatically signs you up if you ignore it and leave it unchecked):

Then there’s this appeal to fear the middle seat:

Oh dear, not the middle seat!

Image credit: Taken from Wikipedia

Better to go for a team-building treasure hunt in 35-degrees-Celsius weather with a laptop on my back than be stuck in a middle seat! Actually, no. Give us a break, Ryanair.

Conclusion

I am happy to have managed to raise awareness about bad practices in web design with this series. I know this because I have heard several reports of companies that I have pissed off. I am a lot less happy that these companies have not really done much about it despite all this. That is their problem now. No doubt others have learned from the countless issues pointed out.

Let’s continue to make companies with a web presence understand that such a public face requires a high level of professionalism, and that they will lose business if they don’t step up their game.

Once again I would like to thank all the contributors to this series, and also the readers who have loyally followed it.