All posts by Gigi

Analysing Binary Files using xxd

When you’re trying to make sense of a binary file format, a good hex viewer or hex editor is an invaluable tool. As shown in “Ultima 1 Reverse Engineering: Decoding Savegame Files”, a typical workflow involves viewing a hex dump of the binary data, making some small change, taking a second hex dump, and comparing the differences. If you’re lucky, you might even be able to observe patterns in the data directly.

Tandy 1000 graphics data for the space fighters in Ultima 1 can be easily observed in a hex editor such as Okteta.

While a good visual hex editor (such as Okteta under Linux or xvi32 under Windows) is essential to view the binary data in hex and also make direct changes to the data files, a command-line hex viewer for Linux called xxd also exists. As with most things in a Linux environment, a lot of power comes from being able to combine command line tools in a way that produces the results we want. As we shall see, one of the benefits is that although most hex editors don’t have tools to compare hex dumps, we can leverage existing command-line diff tools for Linux.

Reverse Engineering the Ultima 1 Savegame File Format

We’ve already seen in “Ultima 1 Reverse Engineering: Decoding Savegame Files” how to analyse how the bytes in the Ultima 1 savegame files change in response to actions in the game world. Let’s see how we could do this more easily using Linux command-line tools.

First, we start a new game, and take note of a few things such as the player’s position and statistics:

Ultima 1 when starting a new game.

We can use xxd to take a hex dump of the savegame, which by default goes to standard output:
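
xxd PLAYER1.U1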

A hex dump of an Ultima 1 savegame file

By redirecting the output to a file (which we’re arbitrarily calling before.hex), we can save this for later comparison:

xxd PLAYER1.U1 > before.hex

Next, we move a couple of spaces to the right and save the game. Again, we take note of the situation in the game world (i.e. that we have moved to the right, and food has decreased by 1):

Ultima 1 after moving a couple of spaces to the right.

We can now take a new hex dump:

xxd PLAYER1.U1 > after.hex

Now that we have a record of the savegame before and after having moved, we can compare the two dumps using popular Linux diff tools such as diff or vimdiff:

vimdiff before.hex after.hex
Comparing hex dumps with vimdiff

In this case, simply moving two steps to the right has changed four different things in the savegame file: the player’s position, food, move count, and tile in the overworld map. It takes a bit more patience to reduce the number of variables at play and come to some conclusions about what the bytes actually represent, but you can hopefully appreciate how well these command-line tools play together.
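
As an aside, bash process substitution lets you compare two copies of a binary file on the fly, without saving the intermediate .hex dumps at all (the two filenames here are hypothetical copies of the savegame taken before and after the change):

diff <(xxd PLAYER1-before.U1) <(xxd PLAYER1-after.U1)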

Analysing The Savage Empire Dialogues

The Ultima 1 savegame is particularly easy to analyse and compare because it’s got a fixed size of 820 bytes, and each field in the game state has a fixed place within that file. Not all binary files provide this luxury.

For instance, the dialogues of The Savage Empire are LZW-compressed and are written using a proprietary scripting language. However, we can still use command-line tools to extract some interesting insights.

Using tools from the Nuvie project, you can extract each character’s dialogue into a separate binary file, numbered from 0 to 76 with some gaps. We can thus write a simple loop in bash syntax from 0 to 76 and invoke xxd on each file, using the -l 16 option to print out only the first 16 octets:

for i in $(seq -f "%03g" 0 76)
do
    echo -n "$i "; xxd -l 16 "path_to_dialogue_files/$i.dat"
done

The result is that we can identify an increasing NPC number as well as the NPC’s name and a part of their description within those first few bytes, indicating that although the structure of the data may be complex, there is still a deterministic pattern to it:

First few bytes of the first several NPC dialogues in The Savage Empire.

Conclusion

Whether analysing binary files using hex tools is your thing or not, I hope at this stage you can appreciate how much you can get out of combining a few simple command-line tools together.

Quick Mount in DOSBox under Linux

DOSBox is incredibly handy to run old games. In “DOSBox for Dummies”, I covered basic usage, as well as how to write a Windows batch file to automate some of the more repetitive operations (such as mounting). I also explained how to extend this to games requiring a CD in “Running Games Requiring a CD in DOSBox”.

If your games are all under the same folder, you might want to consider automatically mounting your DOS games folder using a dosbox.conf file. Otherwise, you can resort to scripting via batch files (Windows) or shell scripts (Linux).
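
For example, DOSBox runs whatever you put in the [autoexec] section at the end of dosbox.conf every time it starts, so something along these lines would mount and switch to your games folder automatically (the path is just a placeholder):

[autoexec]
mount c /home/user/dosgames
c: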

For those one-off situations where you just want to try out a game quickly without setting anything up, regardless of where it resides on the filesystem, you can run the following (in Linux) from the folder where your game is:

dosbox -c "mount c $(pwd)" -c "C:"

This is the same method I used in previous articles to pass commands to DOSBox. The only difference is that here I’m taking advantage of command substitution in bash (as seen in “Scripting Backups with bash on Linux”) to pass in the current directory via the pwd command. That way, no matter where your game folder is on the filesystem, DOSBox will start in the right location. Then, all you’ll need to do is invoke the right executable.
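
If you find yourself doing this often, you could drop the one-liner into a tiny shell script (say, a hypothetical dosbox-here.sh on your PATH, made executable with chmod +x) and run it from any game folder:

#!/bin/bash
# Mount the current directory as C: in DOSBox and switch to it
dosbox -c "mount c $(pwd)" -c "C:"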

Happy retro gaming!

SDL2 Drag and Drop

Hi everyone! It’s been a while since my last SDL2 article (see all SDL2 articles). Today, I’m going to show how you can use the mouse to drag an object across an SDL2 window.

Drag and drop is nowadays a standard feature of any Multiple Document Interface (MDI). Although MDI as a user interface layout has fallen out of fashion in mainstream applications (even GNOME seems to think we don’t need it), it has enabled so many things ranging from windows in operating system GUIs to inventory containers in games.

The source code for this article is available in the SDL2DragDrop folder at the Gigi Labs BitBucket repository.

Displaying an Empty Window

Let’s start out by displaying an empty window. We can use the code below for this:

#include <SDL2/SDL.h>
 
int main(int argc, char ** argv)
{
    // variables
    
    bool quit = false;
    SDL_Event event;
    
    // init SDL
    
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window * window = SDL_CreateWindow("SDL2 Drag and Drop",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 640, 480, 0);
    SDL_Renderer * renderer = SDL_CreateRenderer(window, -1, 0);
    
    // handle events
    
    while (!quit)
    {
        SDL_Delay(10);
        SDL_PollEvent(&event);
    
        switch (event.type)
        {
            case SDL_QUIT:
                quit = true;
                break;
        }
    
        SDL_SetRenderDrawColor(renderer, 242, 242, 242, 255);
        SDL_RenderClear(renderer);
        
        SDL_RenderPresent(renderer);
    }
    
    // cleanup SDL
    
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    
    return 0;
}

Most of this should be familiar from “Showing an Empty Window in SDL2”, but there are a few differences:

  • The #include on the first line is different from that of many of my previous SDL2 articles. That’s because I’m now using SDL2 on Linux (see “How to Set Up SDL2 on Linux”).
  • We’re using SDL_RenderClear() to give the window a background colour (as we’ve done in a few earlier articles).
  • We’re using SDL_PollEvent() to check for events all the time (combined with SDL_Delay() to occasionally give the CPU a break). This is also something we’ve done a few times before.

On Linux, assuming my file is called main.cpp, I can compile this from the command-line as follows:

g++ main.cpp -lSDL2 -lSDL2main -o sdl2exe

Just to be super clear (as this has been a point of confusion in earlier articles), I’m using the C++ compiler. We’ll be using a couple of C++ features, so please don’t try to compile this article’s code as-is with a C compiler.

Once this compiles successfully, I can run the executable that was produced as a result:

./sdl2exe

…and I get an empty window:

Adding Rectangles

We’re going to need something we can drag around, so let’s add a couple of rectangles. Again, we’ve done this before (see “SDL2 Bounding Box Collision Detection”), but we’ll improve a little by storing our rectangles in a C++ STL list.

First, add the necessary #include at the top of the file:

#include <list>

With this in place, we can now create a couple of SDL_Rects and put them in a list. The following code goes before the event loop.

    SDL_Rect rect1 = { 288, 208, 100, 100 };
    SDL_Rect rect2 = { 50, 50, 100, 80 };
    
    std::list<SDL_Rect *> rectangles;
    rectangles.push_back(&rect1);
    rectangles.push_back(&rect2);

Towards the end of the event loop, we can add code to draw those rectangles. Since we’re using a list, we can add more rectangles if we want, and this part of the code isn’t going to change. Note that I’m also using a little C++11 shortcut to iterate through the list.

        SDL_SetRenderDrawColor(renderer, 242, 242, 242, 255);
        SDL_RenderClear(renderer);
        
        for (auto const& rect : rectangles)
        {
            SDL_SetRenderDrawColor(renderer, 0, 255, 0, 255);
            SDL_RenderFillRect(renderer, rect);
        }
        
        SDL_RenderPresent(renderer);

If we compile and run this now, we get a couple of green rectangles:

Changing Colour on Click

Before we get to actually dragging those rectangles, we need a way to select one when it gets clicked, both in code and visually. For this, we’ll add a selectedRect variable somewhere before the event loop:

SDL_Rect * selectedRect = NULL;

By default, no rectangle is selected, which is why this is set to NULL.

We also need a couple more variables somewhere at the beginning of the program to help us keep track of mouse events:

    bool leftMouseButtonDown = false;
    SDL_Point mousePos;

In the switch statement within the event loop, we can now start adding mouse event handlers. We’ll start with one for SDL_MOUSEMOTION, which keeps track of the mouse coordinates in the mousePos variable we just declared:

            case SDL_MOUSEMOTION:
                mousePos = { event.motion.x, event.motion.y };
                break;

Adding another event handler for SDL_MOUSEBUTTONUP, we clear the leftMouseButtonDown flag and the selected rectangle when the left mouse button is released:

            case SDL_MOUSEBUTTONUP:
                if (leftMouseButtonDown && event.button.button == SDL_BUTTON_LEFT)
                {
                    leftMouseButtonDown = false;
                    selectedRect = NULL;
                }
                break;

Finally, we add another event handler for SDL_MOUSEBUTTONDOWN, which uses SDL_PointInRect() to identify the rectangle being clicked (if any), and sets it as the selected rectangle:

            case SDL_MOUSEBUTTONDOWN:
                if (!leftMouseButtonDown && event.button.button == SDL_BUTTON_LEFT)
                {
                    leftMouseButtonDown = true;
                    
                    for (auto rect : rectangles)
                    {
                        if (SDL_PointInRect(&mousePos, rect))
                        {
                            selectedRect = rect;
                            break;
                        }
                    }
                }
                break;

At this point, we can add some conditional logic in the rendering code to draw the selected rectangle (i.e. the one being clicked) in blue instead of green:

        SDL_SetRenderDrawColor(renderer, 242, 242, 242, 255);
        SDL_RenderClear(renderer);
        
        for (auto const& rect : rectangles)
        {
            if (rect == selectedRect)
                SDL_SetRenderDrawColor(renderer, 0, 0, 255, 255);
            else
                SDL_SetRenderDrawColor(renderer, 0, 255, 0, 255);
            
            SDL_RenderFillRect(renderer, rect);
        }
        
        SDL_RenderPresent(renderer);

If we compile and run this now, we find that rectangles become blue when clicked:

Drag and Drop

That might have seemed like a lot of work just to highlight a rectangle when clicked, but in fact we have already implemented much of what we need for drag and drop. To finish the job, we’ll start by adding a new variable near the beginning of the program:

SDL_Point clickOffset;

This clickOffset variable will store the point within the rectangle (relative to the rectangle’s boundary) where you clicked, so that as we move the rectangle by dragging, we can keep that same spot under the mouse pointer.

In fact, when the left mouse button is pressed, we will now store this location so that we can use it later in the code:

            case SDL_MOUSEBUTTONDOWN:
                if (!leftMouseButtonDown && event.button.button == SDL_BUTTON_LEFT)
                {
                    leftMouseButtonDown = true;
                    
                    for (auto rect : rectangles)
                    {
                        if (SDL_PointInRect(&mousePos, rect))
                        {
                            selectedRect = rect;
                            clickOffset.x = mousePos.x - rect->x;
                            clickOffset.y = mousePos.y - rect->y;
                            
                            break;
                        }
                    }
                }
                break;

Then, while the mouse is moving, we update the selected rectangle’s position accordingly:

            case SDL_MOUSEMOTION:
                {
                    mousePos = { event.motion.x, event.motion.y };
                    
                    if (leftMouseButtonDown && selectedRect != NULL)
                    {
                        selectedRect->x = mousePos.x - clickOffset.x;
                        selectedRect->y = mousePos.y - clickOffset.y;
                    }
                }
                break;

…and that is all that is needed to allow the user to click and drag those rectangles:

Wrapping Up

In this article, we’ve mostly built on concepts covered in earlier articles. By manipulating mouse events, we were able to add interactivity to rectangles. They change colour when clicked, and can be dragged around using the mouse.

There are a few things you’ll notice if you’re diligent enough:

  • If you drag the rectangles around really fast, they lag a little behind the mouse pointer. I’m not really sure how to fix that.
  • There isn’t really a proper z-index implementation, so it looks bizarre when you can drag a rectangle underneath another. This can probably be fixed by changing the order in which the rectangles are rendered, so that the selected one always appears on top of the rest (see the sketch after this list).
  • Using a list is okay for a few items, but if you have a lot of objects in your window, you might want to use a more efficient data structure such as a quadtree.
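
One way to approach the z-index issue, as a rough sketch, is to re-append the clicked rectangle to the end of the list inside the SDL_MOUSEBUTTONDOWN handler, right after the selection loop. Since the rectangles are drawn in list order, the last one drawn ends up on top:

    if (selectedRect != NULL)
    {
        // Move the selected rectangle to the back of the list so that it is
        // rendered last, i.e. on top of the other rectangles.
        rectangles.remove(selectedRect);
        rectangles.push_back(selectedRect);
    }

For overlapping rectangles you would also want the selection loop itself to iterate in reverse, so that the topmost rectangle is the one that gets picked.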

OData Session at Virtual APIdays Helsinki 2020

With RedisConf2020 done and dusted, my next stop is API Days Helsinki 2020. Like RedisConf, this is a conference that got its plans a little messed up by COVID19. Instead of the originally planned conference, they’re doing a virtual event in June, and (hopefully) a physical one in September.

I’m going to be presenting a session on OData on 2nd June, during the virtual event. If you haven’t heard about it before, you can think of it as something that tried to solve similar problems to what GraphQL did, but much earlier. GraphQL is not OData, as some will be quite willing to point out, and I’ll be explaining some of the tradeoffs between the two.

RedisGraph Session at RedisConf 2020 Takeaway

RedisConf 2020 was originally planned to be held in San Francisco. The sudden spread of the COVID19 pandemic, however, meant that the organisers, like many others, had to take a step back and reconsider whether and how to run this event. In fact, the decision was taken in March for it to be held as a virtual event.

RedisConf 2020 will thus be an online conference taking place on the 12th and 13th of May 2020. The schedule is packed with talks, the first day being breakout sessions, and the second being full of training provided by the folks at Redis Labs.

Among the talks presented on the first day, you’ll find my own A Practical Introduction to RedisGraph, which introduces the relatively new RedisGraph graph database and shows what you can do with it through a practical demonstration.

If you’d like to learn more about RedisGraph, also check out my two articles, First Steps with RedisGraph and Family Tree with RedisGraph, published last November.

The Sorry State of Tourism in Ireland

I first visited Ireland around this time eight years ago, for St. Patrick’s Day 2012. It did not take me long to fall in love with the place. Since then, I have revisited Ireland several more times, lived there for about a year and a half, and been around most of the country. As a result, my Irish experience has been a mixture of thrills and disappointments.

Separate hot and cold water taps (when hot water is actually available) are a disease more prevalent in Ireland than the Coronavirus.

When I recently revisited Ireland around the same time that the Coronavirus outbreak started, I once again had mixed feelings. Many things were really nice, but I wasn’t spared any disappointments.

As part of the Sorry State of the Web series, in which I promote good web development practices by illustrating bad ones, I will focus on websites (and other technology services) I came across during my research for this trip. Other things that annoyed me, such as cafes charging you an extra 2 Euros just to toast your sandwich, will be out of scope.

Aran Islands

The Aran Islands may be beautiful, but their website could have been better.

In fact, they did make it better by fixing this problem with ampersand HTML entities showing within the page.

Insecure WiFi at Penneys

Penneys, the chain of department stores that you might otherwise recognise as Primark in the UK, offers free WiFi to their customers.

Unfortunately, given that you need to join the WiFi via an endpoint that does not come with a proper SSL certificate, it is not only useless, but plain risky for customers to use.

Secret Valley Wildlife Park

The Secret Valley Wildlife Park website has a number of issues.

For starters, some of the links at the bottom (i.e. Terms & Conditions, Privacy, and Cookies) don’t work. The cursor doesn’t even turn into a pointer, and if you look at the HTML, it seems they put anchor tags without href attributes.

On the Animals page, images take ages to load because they used huge images in the page without using thumbnails (see also: The Shameful Web of April 2017 (Part 1)). If you’re including large images in a page, always use small versions and link to the larger version.

There also seems to be a problem with HTTPS… we’ll get to that too.

Going on the online booking system (which is what we care about when it comes to HTTPS, since sensitive information is involved), we see that HTTPS looks okay so far. They also used to have a test ticket type that I’m happy to see has been removed. In fact, they recently updated this page with a plea for funds since Coronavirus is messing up their business (understandably).

Unfortunately, when you proceed to the next step and are about to book a ticket, the connection suddenly isn’t secure any more. It’s a small mixed content problem because of an image, but the problem is that it undermines the trust that people have in such websites (when it comes to keeping their sensitive financial data secure), and can potentially have security-related consequences.

So while I sympathise with Secret Valley (and so many others affected by the Coronavirus), it’s also important to keep your data safe. By all means, send them money, but do it using alternative, secure means.

The M50 Toll

If you’re going to be renting a car in Dublin and using it to drive around the country, one of the things you’re going to have to do is pay the toll on the M50 motorway. The M50 uses a barrier-free toll system that can be paid online by 8pm on the next day.

While the close deadline is a little annoying, being able to pay it online is quite convenient… when it works.

In this case, the system just didn’t want to work, although I tried several times. This can happen, but what is a little worrying here is that I don’t think those details about the error (the XML-like thing) should be disclosed to the customer.

Blackrock Castle Observatory

If you like science, then Blackrock Castle Observatory is a great place to visit. They have a lot of interactive exhibits that explain concepts from astronomy and science in general:

Wait… what’s that at the bottom-right, where the arrow is pointing? Let’s take a closer look:

Uh oh… someone didn’t activate Windows! That’s quite embarrassing, and can be seen on several of their exhibits.

Wrap Up

Although Ireland will always have a special place in my heart, it hasn’t spared me any disappointments, both in terms of the service I received in various places as a tourist and in terms of websites and other technology-related services.

This article, like others in the same series, is an educational exercise aimed at improving technology standards, especially on the web which so many people come in contact with. The aim is to learn from this and provide a better service, so I hope that nobody is offended, particularly in this difficult time.

Instead, I hope that in such times, when we depend on technology so much more, we can overcome these obvious problems and use technology safely and reliably to reduce the burden of living in a difficult situation as much as possible.

With the Coronavirus currently devastating health, economy, tourism and peace of mind across the world, we need to be safe, help each other, and show empathy because so many people are affected in different ways.

The Sorry State of Buying a Mobile Phone in Malta

A few years ago, I ran the Sorry State of the Web series of articles to promote good web design/development practices by pinpointing shameful ones that should be avoided (an approach inspired by Web Pages That Suck).

Websites today are very different from when Vincent Flanders started Web Pages That Suck. Things like Mystery Meat Navigation are almost gone entirely, as modern websites embrace more minimal designs and are often built on foundations such as Bootstrap or Material Design.

However, after a series of very frustrating experiences today while trying to buy a mobile phone, I am convinced that the state of professionally-built websites has not really improved. Websites may have converged to similar designs that overall are less painful, but the user experience is still miserable because of a lack of professionalism.

As a result, although I would have preferred not to continue this series, I feel there is still value in doing so. In this article, we will focus on websites of companies that sell mobile phones in Malta, where the technology and customer service are both still very medieval.

Sound Machine

Let’s start with Sound Machine. When you first visit this site, you get one of those cookie notices at the bottom-left. That’s pretty normal, especially in the GDPR era.

However, part of this notice sticks around even after you close it. It’s particularly noticeable if you scroll down so that the background is uniformly dark:

This is pretty strange, and probably unintended. But wait… do you notice something in that dark footer area? That’s right — this website was made by none other than Cyberspace Solutions, to which I had dedicated an entire article 3 years ago. I guess this explains a lot.

Another little mistake can be found in their Cookie Policy, where someone has been a little careless with their HTML tags:

But the worst blunder of all is that the Contact form does not even work:

In fact, when you press the Send button, a spinner runs next to it and never stops. There is no indication of the failure, unless you open the Developer Console, which most people obviously will not know how to do.

The result of this is a poor user experience, because (a) the form does not work, (b) there is no indication that anything failed, and, to make matters worse, (c) there is no email address given as an alternative. A customer therefore has no option other than to give them a call or show up in person, which many prefer to avoid for various reasons.

The takeaway from this is that when you build a website, you should always double-check to make sure things look right and that things actually work. Customers aren’t very happy when they don’t.

Direct Vision

Direct Vision has a nice e-commerce website where you can look for products and eventually buy them online. Let’s say I’m interested in the Samsung Galaxy A40… I get a lot of options:

Let’s take a look at the black phone on the left:

Great! It seems to be in stock!

Except that… it isn’t! It turns out that this phone is not available at all in one of their shops, and in the other, it’s only available in a couple of colours (Coral and White). The black one is not in stock. They need to order it.

So why do they say that it is in stock when it isn’t? The salesgirl tried to give a dumb explanation, and also suggested I go with one of the other colours and get a cover to hide the undesired colour. Naturally, I didn’t buy that (pun intended). It’s truly shameful to waste people’s time in this way.

Tablets and More

Tablets and More is another consumer electronics store. Browsing around, it’s easy to notice a few things out of place. For instance, the thing at the bottom left that fails to load:

…and which, after a few seconds, becomes something else but still fails to explain what it’s supposed to be:

Even the product descriptions seem to be a real mess…

…in what appears to be a copy & paste job from GSM Arena:

What shall we say, then, about the creepy practices of harvesting people’s email addresses via the live chat feature (something that is becoming increasingly common in live chat products nowadays) or of not displaying prices and expecting people to get in touch to find out how much an item costs?

It’s almost as if this store is intentionally doing everything it can to keep customers away.

Phone Box

The minute you land at Phone Box, you can immediately tell that something is wrong:

If a site isn’t being served over HTTPS, then it’s possible for requests to be intercepted by a man in the middle and arbitrary responses served as a result, as Troy Hunt demonstrates in his article about HSTS. This is particularly risky for websites that require you to submit information, and Phone Box does indeed fall in this category:

As I’ve written ad nauseam throughout the Sorry State of the Web series, it is not okay to accept login credentials insecurely over HTTP. While other information being sent insecurely may or may not fall under GDPR and Data Protection laws, I think we would be a lot more comfortable if such details (such as one’s personal address) are not leaked to the world.

At least, this site does not take credit card details, since the only payment method available is cash upon delivery. Let’s hope they don’t decide to accept credit cards as a new feature.

Conclusion

Even from a small sample of websites, we have seen a range of issues going from simple negligent oversights to serious security problems and broken features. In 2020, businesses are still paying a lot of money for web design agencies to do a half-assed job. They probably do not realise how much business they are losing as a result.

How can we make things better? I have a few ideas.

  • Web design agencies: test your website’s functionality and content thoroughly. Get up to speed with the latest security and data protection requirements, as there may be legal repercussions if you don’t.
  • Businesses: choose very carefully who to work with when building a website. Take a look at their past work, and get a second opinion if you don’t feel you can evaluate it. Make it easy for customers to reach you and give them a good service. Otherwise, don’t complain that you are losing business to online marketplaces such as Amazon.
  • Customers: do not buy from businesses that have insecure websites, shady practices, or salespeople who think you’re stupid. Things will only change when they notice that their behaviour is detrimental to their own survival.

Managing ASP .NET Core Settings in Multiple Environments

One of the many benefits offered by .NET Core over its predecessor (the older .NET Framework) is the way it handles configuration. I wrote about the capabilities of .NET Core configuration back when the new framework was still prerelease and using a different name (see ASP .NET 5 Application Configuration, Part 1 and Part 2), and although some APIs may have changed, most of it should still be relevant today.

It is easy to set up application settings with this recent configuration model. However, when it comes to actually deploying an application into different environments (e.g. development, staging, production, and possibly others), things become complicated. How do we maintain configurations for all these environments, and how do we save ourselves from the tedious and error-prone practice of manually tweaking individual settings on all these different servers? How do we make sure we don’t lose these settings outright if a server experiences a technical failure? These challenges have nothing to do with the configuration model itself, as they are a more general administrative burden.

One option is to use something like Octopus Deploy to store settings for different environments and transform a settings file (such as appsettings.json) at deployment time. However, not everybody has this luxury. In this article, we will see how we can manage configurations for multiple environments using features that .NET Core offers out of the box.

At the time of writing this article, .NET Core 3.1.1 is the latest version.

Stacking Configurations in .NET Core

The .NET Core configuration libraries allow you to combine application settings from different sources, even if these are of different types (e.g. JSON, XML, environment variables, etc). Imagine I have these two JSON files, named appsettings1.json and appsettings2.json:

{
    "ApplicationName": "Some fancy app",
    "Timeout": 5000
}
{
    "Timeout": 3000,
    "ConnectionString": "some connection string"
}

In order to read these files, I’ll need to install the following package:

dotnet add package Microsoft.Extensions.Configuration.Json

We can then use the ConfigurationBuilder to read in both files, combine them, and give us back an IConfigurationRoot object that allows the application code to query the settings that were read:

using System;
using Microsoft.Extensions.Configuration;

namespace netcoreconf1
{
    class Program
    {
        static void Main(string[] args)
        {
            var config = new ConfigurationBuilder()
                .AddJsonFile("appsettings1.json")
                .AddJsonFile("appsettings2.json")
                .Build();

            Console.WriteLine("Application Name: " + config["ApplicationName"]);
            Console.WriteLine("Timeout:          " + config["Timeout"]);
            Console.WriteLine("Connection String " + config["ConnectionString"]);

            Console.ReadLine();
        }
    }
}

After ensuring that the two JSON files are set to copy to the output directory on build, we can run the simple application to see the result:
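
Given the two JSON files above, it prints something like:

Application Name: Some fancy app
Timeout:          3000
Connection String some connection string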

The Application Name setting comes from the first JSON file, while the Connection String setting comes from the second. The Timeout setting, on the other hand, exists in both files, but the value was obtained from the second JSON file. In fact, the order in which configuration sources are read is important, and by design, settings read from later sources will overwrite the same settings read from earlier sources.

It follows from this that if we have some variable that defines which environment (e.g. Production) we’re in, then we can do something like this:

            const string environment = "Production";

            var config = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .AddJsonFile($"appsettings.{environment}.json", optional: true)
                .Build();

In this case we have a core JSON file with the settings that tend to be common across environments, and then we have one or more JSON files specific to the environment that we’re running in, such as appsettings.Development.json or appsettings.Production.json. The settings in the environment-specific JSON file will overwrite those in the core appsettings.json file.

You will notice that we have that optional: true parameter for the environment-specific JSON file. This means that if that file is not found, the ConfigurationBuilder will simply ignore it instead of throwing an exception. This is the default behaviour in ASP .NET Core, which we will explore in the next section. It is debatable whether this is a good idea, because it may be perfectly reasonable to prefer the application to crash rather than run with incorrect configuration settings.

Multiple Environments in ASP .NET Core Using Visual Studio

By default, ASP .NET Core web applications use this same mechanism to combine a core appsettings.json file with an environment-specific appsettings.Environment.json file.

In the previous section we used a constant to supply the name of the current environment. Instead, ASP .NET Core uses an environment variable named ASPNETCORE_ENVIRONMENT to determine the environment.

Let’s create an ASP .NET Core Web API using Visual Studio and run it to see this in action:

Somehow, ASP .NET Core figured out that we’re using the Development environment without us setting anything up. How does it know?

You’ll find the answer in the launchSettings.json file (under Properties in Solution Explorer), which defines the aforementioned environment variable when the application is run either directly or using IIS Express. You’ll also find that there are already separate appsettings.json and appsettings.Development.json files where you can put your settings.
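
A trimmed-down launchSettings.json looks roughly like this (the profile name is a placeholder, and the generated file contains additional settings, such as an IIS Express profile, that are omitted here):

{
  "profiles": {
    "MyWebApi": {
      "commandName": "Project",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}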

If you remove this environment variable and re-run the application, you’ll find that the default environment is Production.

On the other hand, if we add a different appsettings.Staging.json, and update the environment variable to Staging, then we can run locally while pointing to the Staging environment:

Naturally, connecting locally to different environments isn’t something you should take lightly. Make sure you know what you’re doing, as you can do some real damage on production environments. On the other hand, there are times when this may be necessary, so it is a simple and powerful technique. Just be careful.

“With great power comes great responsibility.”

— Uncle Ben, Spider-Man (2002)

Multiple Environments in Console Apps

While ASP .NET Core handles the configuration plumbing for us, we do not have this luxury in other types of applications. Console apps, for instance those built to run as Windows Services using Topshelf, will need to have this behaviour as part of their code.

In a new console application, we will first need to add the relevant NuGet package:

dotnet add package Microsoft.Extensions.Configuration.Json

Then we can set up a ConfigurationBuilder to read JSON configuration files using the same stacked approach described earlier:

            var config = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .AddJsonFile($"appsettings.{environment}.json")
                .Build();

We can read the environment from the same ASPNETCORE_ENVIRONMENT environment variable that ASP .NET Core looks for. This way, if we have several applications on a server, they can all determine the environment from the same machine-wide setting.

            string environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");

            if (environment != null)
            {
                var config = new ConfigurationBuilder()
                    .AddJsonFile("appsettings.json")
                    .AddJsonFile($"appsettings.{environment}.json")
                    .Build();

                // TODO application logic goes here
            }
            else
            {
                Console.WriteLine("Fatal error: environment not found!");
                Environment.Exit(-1);
            }

If we run the application now, we will get that fatal error. That’s because we haven’t actually set up the environment variable yet. See .NET Core Tools Telemetry for instructions on how to permanently set an environment variable on Windows or Linux. Avoid doing this via a terminal or command line window since that setting would only apply to that particular window. I’m doing this in the screenshot below only as a quick demonstration, since I don’t need to maintain this application.
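
For reference, a permanent, machine-wide setting can be made on Windows with something like the following (from an elevated command prompt; unlike set, setx persists the value, and newly started processes will pick it up):

setx ASPNETCORE_ENVIRONMENT "Staging" /M

On many Linux distributions, the equivalent is to add a line such as the following to /etc/environment, which takes effect at the next login (the value Staging is just an example in both cases):

ASPNETCORE_ENVIRONMENT=Staging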

Deploying an ASP .NET Core Web Application to a Windows Server

When developing applications locally, we have a lot of tools that make our lives easy thanks to whichever IDE we use (e.g. Visual Studio or Visual Studio Code). Deploying to a server is different, because we need to set everything up ourselves.

The first thing to do is install .NET Core on the machine. Download the ASP .NET Core Hosting Bundle as shown in the screenshot below. This includes the Runtime (which allows you to run an .exe built with .NET Core) and the ASP .NET Core Module v2 for IIS (which enables you to host ASP .NET Core web applications in IIS). However, it does not include the SDK, so you will not be able to use any of the command-line dotnet tools, and even dotnet --version will not let you know whether it is set up correctly.

Next, we can set up a couple of system environment variables:

The first is ASPNETCORE_ENVIRONMENT which has already been explained ad nauseam earlier in this article. The second is DOTNET_CLI_TELEMETRY_OPTOUT (see .NET Core Tools Telemetry), which can optionally be used to avoid sending usage data to Microsoft since this behaviour is turned on by default.

Another optional preparation step that applies to web applications is to add health checks. This simply means exposing an unprotected endpoint which returns something like “OK”. It is useful to check whether you can reach the web application at a basic level (while eliminating complexities such as authentication), and it can also be used by load balancers to monitor the health of applications. This can be implemented either directly in code, or using ASP .NET Core’s own health checks feature.
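
As a rough sketch of the latter option (the /health path is an arbitrary choice, and the surrounding members reflect the default ASP .NET Core 3.1 Web API template), the built-in feature needs little more than two lines in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddHealthChecks();                  // registers the health check services
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
        endpoints.MapHealthChecks("/health");    // responds with plain-text "Healthy" when all checks pass
    });
}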

Finally, you really should set up logging to file, and log the environment as soon as the application starts. Since ASP .NET Core does not have a file logger out of the box, you can use third party libraries such as NLog or Serilog. That way, if the application picks up the wrong environment, you will realise very quickly. The log files will also help you monitor the health of your application and troubleshoot issues. Use tools such as baretail to monitor logs locally on the server, or ship them to a central store where you can analyse them in more detail.

With everything prepared, we can publish our web application:

dotnet publish -c Release -r win10-x64

All that is left is to copy the files over to the server (compressing and decompressing them in the process) and run the application.

The above screenshot shows the deployed ASP .NET Core web application running, serving requests, and picking up the correct configuration. All this works despite not having the .NET Core SDK installed, because it is not required simply to run applications.

Deploying an ASP .NET Core Web Application to IIS

In order to host an ASP .NET Core web application in IIS, the instructions in the previous section apply, but there are a few more things to do.

First, if the server does not already have IIS, then it needs to be installed. This can be done by going to:

  1. Server Manager
  2. Add roles and features
  3. Next
  4. Next
  5. Next
  6. Select Web Server (IIS) as shown in the screenshot below.
  7. Click Add Features in the modal that comes up.
  8. Next
  9. Next
  10. Next
  11. Next
  12. Install

In IIS, make sure you have the AspNetCoreModuleV2 module, by clicking on the machine node in the Connections panel (left) and then double-clicking Modules. If you installed IIS after having installed the ASP .NET Core Hosting Bundle, you will need to run the latter installer again (just hit Repair).

Next, go into IIS and set up a website, with the path pointing to the directory where you put the web application’s published files:

Start your website, and then visit the test endpoint. Since you don’t have a console window when running under IIS, the log files come in really handy. We can use them to check that we are loading configuration for the right environment just as before:

It’s working great, and it seems like from .NET Core 3+, it even logs the hosting environment automatically so you don’t need to do that yourself.

Troubleshooting

When running under IIS, an ASP .NET Core application needs a web.config file just like any other. While I’ve had to add this manually in the past, it seems like they are now being created automatically when you publish. If, for any reason, you’re missing a web.config file, you can grab the example in the docs.

I ran into a problem with an IIS-hosted application under .NET Core 2.2 where the environment variable defining the hosting environment wasn’t being picked up correctly by ASP .NET Core. As a workaround, it is actually possible to set environment variables directly in web.config, and they will be passed by IIS to the hosted application.
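
For example, a fragment along these lines inside the <aspNetCore> element of the generated web.config does the trick (the processPath and arguments attributes are placeholders here; the published web.config will already contain the correct values):

<aspNetCore processPath="dotnet" arguments=".\MyApp.dll" stdoutLogEnabled="false" hostingModel="inprocess">
  <environmentVariables>
    <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Staging" />
  </environmentVariables>
</aspNetCore>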

On the other hand, when running .NET Core applications under Linux, keep in mind that files are case sensitive. Andrew Lock has written about a problem he ran into because of this.

Summary

In this article, we have seen that the old way of transforming config files is no longer necessary. By stacking configuration files, we can have a core appsettings.json file whose settings are overwritten by other environment-specific JSON files.

This setup is done automatically in ASP .NET Core applications, using the ASPNETCORE_ENVIRONMENT environment variable to determine the current environment. In other types of apps, we can read the same environment variable manually to achieve the same effect. Under Visual Studio, this environment variable can easily be changed in launchSettings.json to work under different environments, as long as the necessary level of care is taken.

Deployment of ASP .NET Core applications requires the .NET Core Runtime to be installed on the target server. The ASP .NET Core Hosting Bundle includes this as well as support for hosting ASP .NET Core applications under IIS. The SDK is not required unless the dotnet command-line tools need to be used on the server.

Before deploying, the server should also have the right environment variables, and the application should be fitted with mechanisms to easily check that it is working properly (such as an open endpoint and log files).

Windows Server Sessions Consume Resources

If you work in a Windows environment, you have most likely had to log onto a Windows Server machine. Windows Server is used to host web applications (via IIS), manage a corporate network, and so much more.

Every time you log onto Windows Server, your profile is actually using a portion of the resources (e.g. CPU, RAM, disk and network utilisation) on that machine. It should not come as a surprise that while some resources are used just to keep you logged on, the more processes you are running, the more resources you are using. So keeping things like Firefox or SQL Server Management Studio open can consume a significant portion of the server’s memory.

While it is understandable to log onto a server and utilise system resources for maintenance, troubleshooting or deployment purposes, many people do not realise that these resources are not released once they disconnect from the server. In fact, when you try to close a Remote Desktop Connection from the blue bar at the top, you get a warning that tells you this:

We can confirm this by opening the Users tab in the Task Manager, and see that logged in users who have disconnected are still using up a lot of memory (and other resources):

It is interesting to note that Sherry and Smith each have just Firefox open, with 3 or 4 tabs. You can imagine what the impact would be if there were more users each with more applications running.

In order to free up resources when you’re done working on the server, simply Sign Out instead of just disconnecting:

Once users have signed out, you’ll see that those disconnected sessions have disappeared from the Users tab in Task Manager, and their respective resources have been freed:

So there you go: this little tip can help you make the most of your server simply by not being wasteful of system resources.
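
Incidentally, if you prefer the command line (or need to clean up after other users as an administrator), the same can be done from an elevated command prompt: query user lists the sessions on the machine together with their IDs and state (disconnected ones show up as Disc), and logoff followed by a session ID signs that session out. The session ID below is just an example:

query user
logoff 3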

Scripting Backups with bash on Linux

Over the years, Linux has gone from being something exclusively for the hardcore tech-savvy to something accessible to all. Modern GUIs provide a friendly means for anybody to interact with the operating system, without needing to use the terminal.

However, the terminal remains one of the areas where Linux shines, and is a great way for power users to automate routine tasks. Taking backups is one such mundane and repetitive activity which can very easily be scripted using shell scripts. In this article, we’ll see how to do this using the bash shell, which is found in the more popular distributions such as Ubuntu.

Simple Folder Backup with Timestamp

Picture this: we have a couple of documents in our Documents folder, and we’d like to back up the entire folder and append a timestamp in the name. The output filename would look something like:

Documents-2020.01.21-17.25.tar.gz

The .tar.gz format is a common way to compress a collection of files on Linux. gzip (the .gz part) is a powerful compression algorithm, but it can only compress a single file, and not an entire folder. Therefore, a folder is typically packaged into a single .tar file (historically standing for “tape archive”). This tarball, as it’s sometimes called, is then compressed using gzip to produce the resulting .tar.gz file.

The timestamp format above is something arbitrary, but I’ve been using this myself for many years and it has a number of advantages:

  1. It loosely follows the ISO 8601 format, starting with the year, and thus avoiding confusion between British (DD/MM) and American (MM/DD) date notation.
  2. For the same reason, it allows easy time-based sorting of backup files within a directory.
  3. The use of dots and dashes comes in handy when the filename is too long and is wrapped onto multiple lines. Related parts (e.g. the year, month and day of the date) stick together, but separate parts (e.g. the date and time) can be broken onto separate lines. Also, this takes up less (screen) space than other methods (e.g. using underscores).

To generate such a timestamp, we can use the Linux date command:

$ date
Tue 21 Jan 2020 05:35:34 PM UTC

That gives us the date, but it’s not in the format we want. For that, we need to give it a parameter with the format string:

$ date +"%Y.%m.%d-%H.%M.%S"
2020.01.21-17.37.14

Much better! We’ll use this shortly.

Let’s create a backups folder in the home directory as a destination folder for our backups:

$ mkdir ~/backups

We can then use the tar command to bundle our Documents folder into a single file (-c is to create, and -f specifies the archive file):

$ tar -cf ~/backups/Documents.tar ~/Documents

This works well as long as you run it from the home directory. But if you run it from somewhere else, you will see an unexpected structure within the resulting .tar file:

Instead of putting the Documents folder at the top level of the .tar file, it seems to have replicated the folder structure leading up to it from the root folder. We can get around this problem by using the -C switch and specifying that it should run from the home folder:

$ tar -C ~ -cf ~/backups/Documents.tar Documents

This solves the problem, as you can see:

Now that we’ve packaged the Documents folder into a .tar file, let’s compress it. We could do this using the gzip command, but as it turns out, the tar command itself has a -z switch that produces a gzipped tarball. After adding that and a -v (verbose) switch, and changing the extension, we end up with:

$ tar -C ~ -zcvf ~/backups/Documents.tar.gz Documents
Documents/
Documents/some-document.odt
Documents/tasks.txt

All we have left is to add a timestamp to the filename, so that we can distinguish between different backup files and easily identify when those backups were taken.

We have already seen how we can produce a timestamp separately using the date command. Fortunately, bash supports something called command substitution, which allows you to embed the output of a command (in this case date) in another command (in this case tar). It’s done by enclosing the first command between $( and ):

$ tar -C ~ -zcvf ~/backups/Documents-$(date +"%Y.%m.%d-%H.%M.%S").tar.gz Documents

As you can see, this produces the intended effect:

Adding the Hostname

If you have multiple computers, you might want to back up the same folder (e.g. Desktop) on all of them, but still be able to distinguish between them. For this we can again use command substitution to insert the hostname in the output filename. This can be done either with the hostname command or using uname -n:

$ tar -C ~ -zcvf ~/backups/Desktop-$(uname -n)-$(date +"%Y.%m.%d-%H.%M.%S").tar.gz Desktop

Creating Shell Scripts

Although the backup commands we have seen are one-liners, chances are that you don’t want to type them in again every time you want to take a backup. For this, you can create a file with a .sh extension (e.g. backup-desktop.sh) and put the command in there.

However, if you try to run it (always preceded by a ./), it doesn’t actually work:

The directory listing in the above screenshot shows why this is happening. The script is missing execution permissions, which we can add using the chmod command:

$ chmod 755 backup-desktop.sh

In short, that 755 value is a representation of permissions on a file which grants execution rights to everybody (see Chmod Calculator or run man chmod for more information on how this works).

A directory listing now shows that the file has execution (x) permissions, and the terminal actually highlights the file in bold green to show that it is executable:

Running Multiple Scripts

As we start to back up more folders, we gradually end up with more of these tar commands. While we could put them one after another in the same file, this would mean that we always back up everything, and we thus lose the flexibility to back up individual folders independently (handy when the folders are large and backups take a while).

For this, it’s useful to keep separate scripts for each folder, but then have one script to run them all (see what I did there?).

There are many ways to run multiple commands in a bash script, including simply putting them on separate lines one after another, or using one of several operators designed to allow multiple commands to be run on the same line.

While the choice of which to use is mostly arbitrary, there are subtle differences in how they work. I tend to favour the && operator because it will not carry out further commands if one has failed, reducing the risk that a failure goes unnoticed:

$ ./backup-desktop.sh && ./backup-documents.sh

Putting this in a file called backup-all.sh and executing it has the effect of backing up both the Desktop and the Documents folders. If I only want to backup one of them, I can still do that using the corresponding shell script.

Conclusion

We have thus far seen how to:

  • Package a folder in a compressed file
  • Include a timestamp in the filename
  • Include the hostname in the filename
  • Create bash scripts for these commands
  • Combine multiple shell scripts into a single one

This helps greatly in automating the execution of backups, but you can always take it further. Here are some ideas:

  • Use cron to take periodic backups (e.g. daily or weekly) – see the example crontab entry after this list
  • Transfer the backup files off your computer – this really depends on what you are comfortable with, as it could be an external hard drive, or another Linux machine (e.g. using scp), or a cloud service like AWS S3.
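
For the cron idea, a crontab entry (added by running crontab -e) along these lines would run the combined backup script every day at 2 a.m. (the path is a placeholder for wherever you keep your scripts):

0 2 * * * /home/user/scripts/backup-all.sh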