All posts by Gigi

Managing ASP .NET Core Settings in Multiple Environments

One of the many benefits offered by .NET Core over its predecessor (the older .NET Framework) is the way it handles configuration. I wrote about the capabilities of .NET Core configuration back when the new framework was still prerelease and using a different name (see ASP .NET 5 Application Configuration, Part 1 and Part 2), and although some APIs may have changed, most of it should still be relevant today.

It is easy to set up application settings with this recent configuration model. However, when it comes to actually deploying an application into different environments (e.g. development, staging, production, and possibly others), things become complicated. How do we maintain configurations for all these environments, and how do we save ourselves from the tedious and error-prone practice of manually tweaking individual settings on all these different servers? How do we make sure we don’t lose these settings outright if a server experiences a technical failure? These challenges have nothing to do with the configuration model itself, as they are a more general administrative burden.

One option is to use something like Octopus Deploy to store settings for different environments and transform a settings file (such as appsettings.json) at deployment time. However, not everybody has this luxury. In this article, we will see how we can manage configurations for multiple environments using features that .NET Core offers out of the box.

At the time of writing this article, .NET Core 3.1.1 is the latest version.

Stacking Configurations in .NET Core

The .NET Core configuration libraries allow you to combine application settings from different sources, even if these are of different types (e.g. JSON, XML, environment variables). Imagine I have these two JSON files, named appsettings1.json and appsettings2.json:

{
    "ApplicationName": "Some fancy app",
    "Timeout": 5000
}
{
    "Timeout": 3000,
    "ConnectionString": "some connection string"
}

In order to read these files, I’ll need to install the following package:

dotnet add package Microsoft.Extensions.Configuration.Json

We can then use the ConfigurationBuilder to read in both files, combine them, and give us back an IConfigurationRoot object that allows the application code to query the settings that were read:

using System;
using Microsoft.Extensions.Configuration;

namespace netcoreconf1
{
    class Program
    {
        static void Main(string[] args)
        {
            var config = new ConfigurationBuilder()
                .AddJsonFile("appsettings1.json")
                .AddJsonFile("appsettings2.json")
                .Build();

            Console.WriteLine("Application Name: " + config["ApplicationName"]);
            Console.WriteLine("Timeout:          " + config["Timeout"]);
            Console.WriteLine("Connection String " + config["ConnectionString"]);

            Console.ReadLine();
        }
    }
}

After ensuring that the two JSON files are set to copy to the output directory on build, we can run the simple application to see the result:

The Application Name setting comes from the first JSON file, while the Connection String setting comes from the second. The Timeout setting, on the other hand, exists in both files, but the value was obtained from the second JSON file. In fact, the order in which configuration sources are read is important, and by design, settings read from later sources will overwrite the same settings read from earlier sources.

It follows from this that if we have some variable that defines which environment (e.g. Production) we’re in, then we can do something like this:

            const string environment = "Production";

            var config = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .AddJsonFile($"appsettings.{environment}.json", optional: true)
                .Build();

In this case we have a core JSON file with the settings that tend to be common across environments, and then we have one or more JSON files specific to the environment that we’re running in, such as appsettings.Development.json or appsettings.Production.json. The settings in the environment-specific JSON file will overwrite those in the core appsettings.json file.

You will notice that we have that optional: true parameter for the environment-specific JSON file. This means that if that file is not found, the ConfigurationBuilder will simply ignore it instead of throwing an exception. This is the default behaviour in ASP .NET Core, which we will explore in the next section. It is debatable whether this is a good idea, because it may be perfectly reasonable to prefer the application to crash rather than run with incorrect configuration settings.

Multiple Environments in ASP .NET Core Using Visual Studio

By default, ASP .NET Core web applications use this same mechanism to combine a core appsettings.json file with an environment-specific appsettings.Environment.json file.

In the previous section we used a constant to supply the name of the current environment. Instead, ASP .NET Core uses an environment variable named ASPNETCORE_ENVIRONMENT to determine the environment.

Let’s create an ASP .NET Core Web API using Visual Studio and run it to see this in action:

Somehow, ASP .NET Core figured out that we’re using the Development environment without us setting anything up. How does it know?

You’ll find the answer in the launchSettings.json file (under Properties in Solution Explorer), which defines the aforementioned environment variable when the application is run either directly or using IIS Express. You’ll also find that there are already separate appsettings.json and appsettings.Development.json files where you can put your settings.
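The relevant part of launchSettings.json looks something like this (profile names and other details vary by project):

"profiles": {
  "IIS Express": {
    "commandName": "IISExpress",
    "launchBrowser": true,
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  }
}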

If you remove this environment variable and re-run the application, you’ll find that the default environment is Production.

On the other hand, if we add a different appsettings.Staging.json, and update the environment variable to Staging, then we can run locally while pointing to the Staging environment:

Naturally, connecting locally to different environments isn’t something you should take lightly. Make sure you know what you’re doing, as you can do some real damage to production environments. That said, there are times when this is necessary, and it is a simple and powerful technique when used with care.

“With great power comes great responsibility.”

— Uncle Ben, Spider-Man (2002)

Multiple Environments in Console Apps

While ASP .NET Core handles the configuration plumbing for us, we do not have this luxury in other types of applications. Console apps (for instance, those built to run as Windows services using Topshelf) need to implement this behaviour in their own code.

In a new console application, we will first need to add the relevant NuGet package:

dotnet add package Microsoft.Extensions.Configuration.Json

Then we can set up a ConfigurationBuilder to read JSON configuration files using the same stacked approach described earlier:

            var config = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json")
                .AddJsonFile($"appsettings.{environment}.json")
                .Build();

We can read the environment from the same ASPNETCORE_ENVIRONMENT environment variable that ASP .NET Core looks for. This way, if we have several applications on a server, they can all determine the environment from the same machine-wide setting.

            string environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");

            if (environment != null)
            {
                var config = new ConfigurationBuilder()
                    .AddJsonFile("appsettings.json")
                    .AddJsonFile($"appsettings.{environment}.json")
                    .Build();

                // TODO application logic goes here
            }
            else
            {
                Console.WriteLine("Fatal error: environment not found!");
                Environment.Exit(-1);
            }

If we run the application now, we will get that fatal error. That’s because we haven’t actually set up the environment variable yet. See .NET Core Tools Telemetry for instructions on how to permanently set an environment variable on Windows or Linux. Avoid doing this via a terminal or command line window since that setting would only apply to that particular window. I’m doing this in the screenshot below only as a quick demonstration, since I don’t need to maintain this application.
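As a rough sketch (check the linked instructions for the full details), on Windows you can set a machine-wide environment variable from an elevated command prompt using setx with the /M switch:

setx ASPNETCORE_ENVIRONMENT "Staging" /M

On Linux, one common option is to append a line such as the following to /etc/environment, which takes effect on subsequent logins:

ASPNETCORE_ENVIRONMENT=Staging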

Deploying an ASP .NET Core Web Application to a Windows Server

When developing applications locally, we have a lot of tools that make our lives easy thanks to whichever IDE we use (e.g. Visual Studio or Visual Studio Code). Deploying to a server is different, because we need to set everything up ourselves.

The first thing to do is install .NET Core on the machine. Download the ASP .NET Core Hosting Bundle as shown in the screenshot below. This includes the Runtime (which allows you to run an .exe built with .NET Core) and the ASP .NET Core Module v2 for IIS (which enables you to host ASP .NET Core web applications in IIS). However, it does not include the SDK, so you will not be able to use any of the command-line dotnet tools; you cannot even run dotnet --version to check that everything is set up correctly.

Next, we can set up a couple of system environment variables:

The first is ASPNETCORE_ENVIRONMENT which has already been explained ad nauseam earlier in this article. The second is DOTNET_CLI_TELEMETRY_OPTOUT (see .NET Core Tools Telemetry), which can optionally be used to avoid sending usage data to Microsoft since this behaviour is turned on by default.

Another optional preparation step that applies to web applications is to add health checks. This simply means exposing an unprotected endpoint which returns something like “OK”. It is useful to check whether you can reach the web application at a basic level (while eliminating complexities such as authentication), and it can also be used by load balancers to monitor the health of applications. This can be implemented either directly in code, or using ASP .NET Core’s own health checks feature.
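As a minimal sketch of the latter approach, assuming the Startup class from the default ASP .NET Core 3.x template, the built-in health checks feature can expose a /health endpoint that responds with “Healthy”:

public void ConfigureServices(IServiceCollection services)
{
    // Register the health check services
    services.AddHealthChecks();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        // Unprotected endpoint for humans and load balancers to poll
        endpoints.MapHealthChecks("/health");
    });
}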

Finally, you really should set up logging to file, and log the environment as soon as the application starts. Since ASP .NET Core does not have a file logger out of the box, you can use third party libraries such as NLog or Serilog. Like this, if the application picks up the wrong environment, you can realise very quickly. The log files will also help you monitor the health of your application and troubleshoot issues. Use tools such as baretail to monitor logs locally on the server, or ship them to a central store where you can analyse them in more detail.
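For example, here is a rough sketch using Serilog (assuming the Serilog and Serilog.Sinks.File packages are installed) that writes to a rolling daily log file and records the environment on startup:

// using Serilog;

Log.Logger = new LoggerConfiguration()
    .WriteTo.File("logs/app-.log", rollingInterval: RollingInterval.Day)
    .CreateLogger();

Log.Information("Starting up. Environment: {Environment}",
    Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT"));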

With everything prepared, we can publish our web application:

dotnet publish -c Release -r win10-x64

All that is left is to copy the files over to the server (compressing and decompressing them in the process) and run the application.

The above screenshot shows the deployed ASP .NET Core web application running, serving requests, and picking up the correct configuration. All this works despite not having the .NET Core SDK installed, because it is not required simply to run applications.

Deploying an ASP .NET Core Web Application to IIS

In order to host an ASP .NET Core web application in IIS, the instructions in the previous section apply, but there are a few more things to do.

First, if the server does not already have IIS, then it needs to be installed. This can be done by going to:

  1. Server Manager
  2. Add roles and features
  3. Next
  4. Next
  5. Next
  6. Select Web Server (IIS) as shown in the screenshot below.
  7. Click Add Features in the modal that comes up.
  8. Next
  9. Next
  10. Next
  11. Next
  12. Install

In IIS, make sure you have the AspNetCoreModuleV2 module, by clicking on the machine node in the Connections panel (left) and then double-clicking Modules. If you installed IIS after having installed the ASP .NET Core Hosting Bundle, you will need to run the latter installer again (just hit Repair).

Next, go into IIS and set up a website, with the path pointing to the directory where you put the web application’s published files:

Start your website, and then visit the test endpoint. Since you don’t have a console window when running under IIS, the log files come in really handy. We can use them to check that we are loading configuration for the right environment just as before:

It’s working great, and it seems that from .NET Core 3 onwards, the hosting environment is even logged automatically, so you don’t need to do that yourself.

Troubleshooting

When running under IIS, an ASP .NET Core application needs a web.config file just like any other. While I’ve had to add this manually in the past, it seems like they are now being created automatically when you publish. If, for any reason, you’re missing a web.config file, you can grab the example in the docs.
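If you do need one, a minimal web.config for an IIS-hosted ASP .NET Core application looks something like this (MyWebApp.dll is a placeholder for your application’s main assembly):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\MyWebApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
  </system.webServer>
</configuration>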

I ran into a problem with an IIS-hosted application under .NET Core 2.2 where the environment variable defining the hosting environment wasn’t being picked up correctly by ASP .NET Core. As a workaround, it is actually possible to set environment variables directly in web.config, and they will be passed by IIS to the hosted application.
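This is done with an environmentVariables section inside the aspNetCore element, along these lines (the value shown is just an example):

<aspNetCore processPath="dotnet" arguments=".\MyWebApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess">
  <environmentVariables>
    <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Staging" />
  </environmentVariables>
</aspNetCore>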

On the other hand, when running .NET Core applications under Linux, keep in mind that files are case sensitive. Andrew Lock has written about a problem he ran into because of this.

Summary

In this article, we have seen that the old way of transforming config files is no longer necessary. By stacking configuration files, we can have a core appsettings.json file whose settings are overwritten by other environment-specific JSON files.

This setup is done automatically in ASP .NET Core applications, using the ASPNETCORE_ENVIRONMENT environment variable to determine the current environment. In other types of apps, we can read the same environment variable manually to achieve the same effect. Under Visual Studio, this environment variable can easily be changed in launchSettings.json to work under different environments, as long as the necessary level of care is taken.

Deployment of ASP .NET Core applications requires the .NET Core Runtime to be installed on the target server. The ASP .NET Core Hosting Bundle includes this as well as support for hosting ASP .NET Core applications under IIS. The SDK is not required unless the dotnet command-line tools need to be used on the server.

Before deploying, the server should also have the right environment variables, and the application should be fitted with mechanisms to easily check that it is working properly (such as an open endpoint and log files).

Windows Server Sessions Consume Resources

If you work in a Windows environment, you have most likely had to log onto a Windows Server machine. Windows Server is used to host web applications (via IIS), manage a corporate network, and so much more.

Every time you log onto Windows Server, your profile is actually using a portion of the resources (e.g. CPU, RAM, disk and network utilisation) on that machine. It should come as no surprise that while some resources are used just to keep you logged on, the more processes you are running, the more resources you are using. So keeping things like Firefox or SQL Server Management Studio open can consume a significant portion of the server’s memory.

While it is understandable to log onto a server and utilise system resources for maintenance, troubleshooting or deployment purposes, many people do not realise that these resources are not released once they disconnect from the server. In fact, when you try to close a Remote Desktop Connection from the blue bar at the top, you get a warning that tells you this:

We can confirm this by opening the Users tab in the Task Manager and seeing that logged-in users who have disconnected are still using up a lot of memory (and other resources):

It is interesting to note that Sherry and Smith each have just Firefox open, with 3 or 4 tabs. You can imagine what the impact would be if there were more users each with more applications running.

In order to free up resources when you’re done working on the server, simply Sign Out instead of just disconnecting:

Once users have signed out, you’ll see that those disconnected sessions have disappeared from the Users tab in Task Manager, and their respective resources have been freed:
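If you prefer not to log back in just to sign a session out, the same can be done from an elevated command prompt. The commands below are a quick sketch, where the session ID is an example taken from the output of the first command:

query session
logoff 2

The first command lists the sessions on the machine along with their IDs, and the second signs out the session with the given ID.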

So there you go: this little tip can help you make the most of your server simply by not being wasteful of system resources.

Scripting Backups with bash on Linux

Over the years, Linux has gone from being something exclusively for the hardcore tech-savvy to something accessible to all. Modern GUIs provide a friendly means for anybody to interact with the operating system, without needing to use the terminal.

However, the terminal remains one of the areas where Linux shines, and is a great way for power users to automate routine tasks. Taking backups is one such mundane and repetitive activity which can very easily be scripted using shell scripts. In this article, we’ll see how to do this using the bash shell, which is found in the more popular distributions such as Ubuntu.

Simple Folder Backup with Timestamp

Picture this: we have a couple of documents in our Documents folder, and we’d like to back up the entire folder and append a timestamp in the name. The output filename would look something like:

Documents-2020.01.21-17.25.tar.gz

The .tar.gz format is a common way to compress a collection of files on Linux. gzip (the .gz part) is a powerful compression algorithm, but it can only compress a single file, and not an entire folder. Therefore, a folder is typically packaged into a single .tar file (historically standing for “tape archive”). This tarball, as it’s sometimes called, is then compressed using gzip to produce the resulting .tar.gz file.

The timestamp format above is something arbitrary, but I’ve been using this myself for many years and it has a number of advantages:

  1. It loosely follows the ISO 8601 format, starting with the year, and thus avoiding confusion between British (DD/MM) and American (MM/DD) date notation.
  2. For the same reason, it allows easy time-based sorting of backup files within a directory.
  3. The use of dots and dashes comes in handy when the filename is too long and is wrapped onto multiple lines. Related parts (e.g. the year, month and day of the date) stick together, but separate parts (e.g. the date and time) can be broken onto separate lines. Also, this takes up less (screen) space than other methods (e.g. using underscores).

To generate such a timestamp, we can use the Linux date command:

$ date
Tue 21 Jan 2020 05:35:34 PM UTC

That gives us the date, but it’s not in the format we want. For that, we need to give it a parameter with the format string:

$ date +"%Y.%m.%d-%H.%M.%S"
2020.01.21-17.37.14

Much better! We’ll use this shortly.

Let’s create a backups folder in the home directory as a destination folder for our backups:

$ mkdir ~/backups

We can then use the tar command to bundle our Documents folder into a single file (-c creates a new archive, and -f specifies the archive filename):

$ tar -cf ~/backups/Documents.tar ~/Documents

However, if you inspect the resulting .tar file, you will see an unexpected structure:

Instead of putting the Documents folder at the top level of the .tar file, it seems to have replicated the folder structure leading up to it from the root folder. We can get around this problem by using the -C switch and specifying that it should run from the home folder:

$ tar -C ~ -cf ~/backups/Documents.tar Documents

This solves the problem, as you can see:

Now that we’ve packaged the Documents folder into a .tar file, let’s compress it. We could do this using the gzip command, but as it turns out, the tar command itself has a -z switch that produces a gzipped tarball. After adding that and a -v (verbose) switch, and changing the extension, we end up with:

$ tar -C ~ -zcvf ~/backups/Documents.tar.gz Documents
Documents/
Documents/some-document.odt
Documents/tasks.txt

All we have left is to add a timestamp to the filename, so that we can distinguish between different backup files and easily identify when those backups were taken.

We have already seen how we can produce a timestamp separately using the date command. Fortunately, bash supports something called command substitution, which allows you to embed the output of a command (in this case date) in another command (in this case tar). It’s done by enclosing the embedded command between $( and ):

$ tar -C ~ -zcvf ~/backups/Documents-$(date +"%Y.%m.%d-%H.%M.%S").tar.gz Documents

As you can see, this produces the intended effect:

Adding the Hostname

If you have multiple computers, you might want to back up the same folder (e.g. Desktop) on all of them, but still be able to distinguish between them. For this we can again use command substitution to insert the hostname in the output filename. This can be done either with the hostname command or using uname -n:

$ tar -C ~ -zcvf ~/backups/Desktop-$(uname -n)-$(date +"%Y.%m.%d-%H.%M.%S").tar.gz Desktop

Creating Shell Scripts

Although the backup commands we have seen are one-liners, chances are that you don’t want to type them in again every time you want to take a backup. For this, you can create a file with a .sh extension (e.g. backup-desktop.sh) and put the command in there.
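For example, backup-desktop.sh might contain the following (the first line is a shebang, telling the system to execute the script with bash):

#!/bin/bash

# Back up the Desktop folder into a timestamped, hostname-tagged archive
tar -C ~ -zcvf ~/backups/Desktop-$(uname -n)-$(date +"%Y.%m.%d-%H.%M.%S").tar.gz Desktop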

However, if you try to run it (always preceded by a ./), it doesn’t actually work:

The directory listing in the above screenshot shows why this is happening. The script is missing execution permissions, which we can add using the chmod command:

$ chmod 755 backup-desktop.sh

In short, that 755 value is a representation of permissions on a file: it grants the owner full access, and read and execute rights to everybody else (see Chmod Calculator or run man chmod for more information on how this works).

A directory listing now shows that the file has execution (x) permissions, and the terminal actually highlights the file in bold green to show that it is executable:

Running Multiple Scripts

As we start to back up more folders, we gradually end up with more of these tar commands. While we could put them one after another in the same file, this would mean that we always back up everything, and we thus lose the flexibility to back up individual folders independently (handy when the folders are large and backups take a while).

For this, it’s useful to keep separate scripts for each folder, but then have one script to run them all (see what I did there?).

There are many ways to run multiple commands in a bash script, including simply putting them on separate lines one after another, or using one of several operators designed to allow multiple commands to be run on the same line.

While the choice of which to use is mostly arbitrary, there are subtle differences in how they work. I tend to favour the && operator because it will not carry out further commands if one has failed, reducing the risk that a failure goes unnoticed:

$ ./backup-desktop.sh && ./backup-documents.sh

Putting this in a file called backup-all.sh and executing it has the effect of backing up both the Desktop and the Documents folders. If I only want to back up one of them, I can still do that using the corresponding shell script.

Conclusion

We have thus far seen how to:

  • Package a folder in a compressed file
  • Include a timestamp in the filename
  • Include the hostname in the filename
  • Create bash scripts for these commands
  • Combine multiple shell scripts into a single one

This helps greatly in automating the execution of backups, but you can always take it further. Here are some ideas:

  • Use cron to take periodic backups (e.g. daily or weekly), as sketched below
  • Transfer the backup files off your computer – this really depends what you are comfortable with, as it could be an external hard drive, or another Linux machine (e.g. using scp), or a cloud service like AWS S3.
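As a quick sketch of the cron option above, you could run crontab -e and add an entry like the following, which would run the combined backup script every day at 2:00 a.m. (the script path here is hypothetical):

0 2 * * * /home/yourname/backup-all.sh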

Removing the Server Header in ASP .NET Core

There are many aspects to web security, but in this article we’ll focus on one in particular. Attackers can use any available information about a target web application to their advantage. Therefore, if your web application is sending out headers revealing its underlying infrastructure, attackers can use those details to narrow down their attack and attempt to exploit vulnerabilities in that particular software.

Let’s create a new ASP .NET Core web application to see what is returned in the headers by default:

mkdir dotnet-server-header
cd dotnet-server-header
dotnet new web
dotnet run

This creates a “Hello world” ASP .NET Core application using the “ASP .NET Core Empty” template, and runs it. By default it runs on Kestrel on port 5000. If we access it in a browser and check the response headers (e.g. using the Network tab of the Chrome Developer Tools), we see that there’s a Server header with a value of Kestrel. If it were running under IIS, this value might have been Microsoft-IIS/10.0 instead.

Honestly, this could be worse. Older versions of ASP .NET running on the old .NET Framework used to add X-Powered-By, X-AspNet-Version and X-AspNetMvc-Version headers with very specific information about the underlying software. While this information can be really useful for statistical purposes (e.g. to identify the most popular web servers, or to identify how prevalent different versions of ASP .NET are), it is also very useful for malicious purposes (e.g. to look for known vulnerabilities in a specific ASP .NET version).

ASP .NET Core, on the other hand, only adds the Server header, which is quite broad. However, the less information we give a potential attacker, the better for us.

There is no harm in removing the Server header, and to do this in ASP .NET Core, we can take a tip from this Stack Overflow answer:

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>()
                              .UseKestrel(options => options.AddServerHeader = false);
                });

The UseKestrel line above, added to Program.cs, has the effect of getting rid of that Server header. In fact, if we dotnet run again now, we find that it is gone:

It is always a good idea to do a vulnerability assessment of your web application, and in doing so, remove any excess information that complete strangers do not need to know. What we have seen here is a very small change that can reduce the security risk at least by a little.

Getting Started with React

React is a modern JavaScript library for building UI components. In this article, we will go through the steps needed to set up and run a React project. While there is much to be said about React, we will not really delve into theory here as the intention is to get up and running quickly.

Creating a Project

Like other web frontend libraries and frameworks, React requires npm. Therefore, the first thing to do is make sure you have Node.js installed, and if that is the case, then you should already have npm. You can use the following commands to check the version of each (making sure they are installed):

node -v
npm -v

We can then use Create React App to create our React project. Simply run the following command in your terminal:

npx create-react-app my-first-react-app

This will download the latest version of create-react-app automatically if it’s not already available. It takes a couple of minutes to run, and at the end, you will have a directory called my-first-react-app with a basic React project template inside it.

The output at the end (shown above) tells you about the directory that was created, and gives you a few basic commands to get started. In fact, we’ll use the last of those to fire up our web application:

cd my-first-react-app/
npm start

This will open a browser window or tab at the endpoint where the web application is running, which is localhost:3000 by default, and you should see the example page generated by Create React App, with a spinning React logo in it:

Open the project folder using your favourite text editor (e.g. Visual Studio Code), and you can see the project structure, as well as the App.js file which represents the current page:

With the web application still running, replace part of the JSX in App.js (e.g. the line that says “Edit src/App.js and save to reload.”, or any other you prefer) with “Hello world”. When you save, the running web application will automatically reload to reflect your changes:
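For reference, after such a change, a trimmed-down App.js might look something like this (the exact template contents vary between Create React App versions):

import React from 'react';
import './App.css';

function App() {
  return (
    <div className="App">
      Hello world
    </div>
  );
}

export default App;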

And that’s all! As you can see, it’s quite easy (even if perhaps time-consuming) to create a React project. It’s also easy to run it, and since the project files are being watched, the running application is reloaded every time you save, making it very fast and efficient to make quick development iterations.

You are probably still wondering what React is, what you can do with it, and how/why there is markup within JavaScript! Those, my friend, are topics for another day. 🙂