Morse Code Converter Using Dictionaries

This elementary article was originally posted as “C# Basics: Morse Code Converter Using Dictionaries” at Programmer’s Ranch, and was based on Visual Studio 2010. This updated version uses Visual Studio 2017; all screenshots as well as the intro and conclusion have been changed. The source code is now available at the Gigi Labs BitBucket repository.

Today, we’re going to write a little program that converts regular English characters and words into Morse Code, so each character will be represented by a series of dots and/or dashes. This article is mainly targeted at beginners and the goal is to show how dictionaries work.

We’ll start off by creating a console application. After going File -> New Project… in Visual Studio, select the Console App (.NET Framework) project type, and select a name and location for it. In Visual Studio 2017, you’ll find other options for console applications, such as Console App (.NET Core). While this simple tutorial should still work, we’re going to stick to the more traditional and familiar project type to avoid confusion.

In C#, we can use a dictionary to map keys (e.g. 'L') to values (e.g. ".-.."). In other programming languages, dictionaries are sometimes called hash tables or maps or associative arrays. The following is an example of a dictionary mapping the first two letters of the alphabet to their Morse equivalents:

            Dictionary<char, string> morse = new Dictionary<char, string>();
            morse.Add('A', ".-");
            morse.Add('B', "-...");

            Console.WriteLine(morse['A']);
            Console.WriteLine(morse['B']);

            Console.WriteLine("Press any key...");
            Console.ReadKey(false);

First, we are declaring a dictionary. A dictionary is a generic type, so we need to specify, in the <> part, which data types we are storing. In this case, we have a char key and a string value. We can then add various items, supplying the key and value to the Add() method. Finally, we retrieve values just like we would access an array: using the [] syntax. The difference is that dictionaries aren’t restricted to using integers as keys; you can use any data type you like. Note: you’ll know from the earlier article, “The ASCII Table (C#)”, that a character can be directly converted to an integer. Dictionaries work just as well with other data types, such as strings.

Here is the output:

If you try to access a key that doesn’t exist, such as morse['C'], you’ll get a KeyNotFoundException. You can check whether a key exists using ContainsKey():

            if (morse.ContainsKey('C'))
                Console.WriteLine(morse['C']);
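
Another option, which checks for the key and retrieves its value in a single call, is TryGetValue():

            string code;
            if (morse.TryGetValue('C', out code))
                Console.WriteLine(code);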

OK. Before we build our Morse converter, you should know that there are several ways of populating a dictionary. One is the Add() method we have seen above. Another is to assign values directly:

            morse['A'] = ".-";
            morse['B'] = "-...";

You can also use collection initialiser syntax to set several values at once:

            Dictionary<char, String> morse = new Dictionary<char, String>()
            {
                {'A' , ".-"},
                {'B' , "-..."}
            };

Since we only need to set the Morse mapping once, I’m going to use this method. Don’t forget the semicolon at the end! Replace your current code with the following:

            Dictionary<char, String> morse = new Dictionary<char, String>()
            {
                {'A' , ".-"},
                {'B' , "-..."},
                {'C' , "-.-."},
                {'D' , "-.."},
                {'E' , "."},
                {'F' , "..-."},
                {'G' , "--."},
                {'H' , "...."},
                {'I' , ".."},
                {'J' , ".---"},
                {'K' , "-.-"},
                {'L' , ".-.."},
                {'M' , "--"},
                {'N' , "-."},
                {'O' , "---"},
                {'P' , ".--."},
                {'Q' , "--.-"},
                {'R' , ".-."},
                {'S' , "..."},
                {'T' , "-"},
                {'U' , "..-"},
                {'V' , "...-"},
                {'W' , ".--"},
                {'X' , "-..-"},
                {'Y' , "-.--"},
                {'Z' , "--.."},
                {'0' , "-----"},
                {'1' , ".----"},
                {'2' , "..---"},
                {'3' , "...--"},
                {'4' , "....-"},
                {'5' , "....."},
                {'6' , "-...."},
                {'7' , "--..."},
                {'8' , "---.."},
                {'9' , "----."},
            };

           

            Console.WriteLine("Press any key...");
            Console.ReadKey(false);

In the empty space between the dictionary and the Console.WriteLine(), we can now accept user input and convert it to Morse:

            Console.WriteLine("Write something:");
            String input = Console.ReadLine();
            input = input.ToUpper();

            for (int i = 0; i < input.Length; i++)
            {
                if (i > 0)
                    Console.Write('/');

                char c = input[i];
                if (morse.ContainsKey(c))
                    Console.Write(morse[c]);
            }

            Console.WriteLine();

Here, the user writes something and it is stored in the input variable. We then convert this to uppercase because the keys in our dictionary are uppercase. Then we loop over each character in the input string, and write its Morse equivalent if it exists. We separate different characters in the Morse output by a forward slash (/). Here’s the output:

Awesome! 🙂 In this article we used Visual Studio to create a program that converts alphanumeric text into the Morse-encoded equivalent, while learning to use dictionaries in the process.

Which .NET Standard Version To Target

When I migrated Dandago.Finance to .NET Core yesterday, there was something I overlooked. I realised this when I tried to install the resulting package, targeting .NET Standard 1.6, in a new project. It worked fine in a .NET Core console application, but not in one targeting the full .NET Framework:

In fact, even referencing Dandago.Finance directly results in weird stuff going on:

The problem is immediately evident if we take a look at the compatibility grid for .NET Standard, a relevant excerpt of which at the time of writing this article is the following:

Targeting each version of .NET Standard means supporting the corresponding versions of .NET Core and .NET Framework upwards. For instance, if we target .NET Standard 1.4, then we support .NET Framework 4.6.1 and up, and .NET Core 1.0 and up.

But since Dandago.Finance was built to target .NET Standard 1.6, then .NET Framework 4.6.2 and earlier could not use it (since the first version it supports is “vNext”, whatever that means in this context).

So in practice, in order to maximise a library’s compatibility, you will want to target the lowest possible version of .NET Standard. You can do this by changing the target framework from the project settings:

In the case of Dandago.Finance, .NET Standard 1.1 provided insufficient API coverage to make it work:

Targeting .NET Standard 1.2 made Dandago.Finance compile just fine, and I verified that the resulting package installs fine for console applications targeting .NET Framework 4.5.1 and up (as per compatibility chart), and .NET Core 1.0 and up.

However, this means we have had to sacrifice support for .NET Framework 4.5. This is no big deal since .NET Framework versions 4, 4.5 and 4.5.1 have been dead for over a year now. So technically we could have targeted .NET Standard 1.3 (.NET Framework 4.6 and upwards), but it’s good to give extra backwards compatibility for legacy code where we can.

Migrating Dandago.Finance to .NET Core

Microsoft has recently been heavily investing in .NET Core, which you can think of as the next generation of the .NET Framework. There are various benefits to .NET Core, the biggest one being that it is cross-platform; thus compliant code can run on Windows, Linux and Mac (and probably others in future).

In this article, we’re going to take one of my smaller projects – Dandago.Finance – and port it to .NET Core. Dandago.Finance is ideal to demonstrate a first migration because it is very small, consisting of a main project (3 classes) and a unit test project (2 classes) – both class libraries.

Before we start, make sure you are using the latest tools (such as the recently released Visual Studio 2017). .NET Core tools have undergone a lot of radical changes (e.g. project.json is dead) so you don’t want to be learning based on something that’s already obsolete. If you’re using VS2017, make sure you have the .NET Core cross-platform development workload installed.

Migrating the main library

We’re going to start a fresh new class library targeting .NET Core and move our code there. Actually, that statement is not entirely correct: if you open Visual Studio 2017, you’ll see that there are at least 3 different kinds of class library you can create (or more depending on additional tooling you may have installed):

  • Class Library (.NET Framework)
  • Class Library (.NET Core)
  • Class Library (.NET Standard)

This is very confusing, and I asked a question about this on Stack Overflow yesterday that attracted some pretty detailed answers. In short, if you want your class libraries to be as portable as possible, you need to target .NET Standard. .NET Standard is a specification detailing APIs that need to be available in compatible frameworks. .NET Core, and certain versions of the full .NET Framework, implement .NET Standard. However, they each also incorporate a lot of other runtime-related stuff, so targeting .NET Core specifically means you can’t use your code under the full .NET Framework.

So let’s create a project of type Class Library (.NET Standard). As always, this will create a solution with the same name as the project.

Next, we’ll delete the automatically created Class1 class, and copy the class files from the old Dandago.Finance library to the new project folder. You’ll notice that Visual Studio automatically notices the new files and includes them in the project, without you needing to explicitly add them:


Migrating the test project

Let’s add a new class library for the unit tests, but this time it needs to be a Class Library (.NET Core). If you get this wrong and choose Class Library (.NET Standard) instead, Visual Studio won’t find your tests and the dotnet test command will refuse to run it (as per this Stack Overflow question). The reason why .NET Standard won’t work for unit tests is detailed in the corresponding answer: in short, we need to specify a target framework that will be responsible for running the tests; .NET Standard on its own is not enough.

Next, we need to add a reference to the Dandago.Finance project.

Now, we can repeat the procedure we did for the main library, and delete Class1.cs and copy over the test classes.

However, this isn’t going to be as smooth as with the main library. The original test project uses NUnit, and at the time of writing, that isn’t fully supported by .NET Core. Fortunately, however, it’s easy to change to xUnit, which does already boast .NET Core support.

First, we need to install the following packages:

Install-Package Microsoft.NET.Test.Sdk
Install-Package xunit
Install-Package xunit.runner.visualstudio

Then, we need to make the following substitutions (a small converted example follows the list):

  1. using NUnit.Framework; becomes using Xunit;
  2. [TestFixture] goes away
  3. [Test] becomes [Fact]
  4. Assert.IsTrue(...) becomes Assert.True(...)
  5. Assert.IsFalse(...) becomes Assert.False(...)
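
For illustration, here’s what a converted test might look like (the class and assertion here are made up for this example, not taken from Dandago.Finance’s actual tests):

using Xunit; // was: using NUnit.Framework;

public class CalculatorTests // the [TestFixture] attribute is no longer needed
{
    [Fact] // was: [Test]
    public void Addition_GivesExpectedResult()
    {
        int result = 2 + 3;

        Assert.True(result == 5); // was: Assert.IsTrue(result == 5)
    }
}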

The solution should now build, and the unit tests should run successfully:

Summary

Migrating Dandago.Finance to .NET Core has taught us a few things:

  1. Visual Studio can automatically detect new files for .NET Core / .NET Standard projects.
  2. Portable class libraries should target .NET Standard.
  3. Unit test projects should target .NET Core.
  4. Use xUnit for .NET Core unit tests.

Lost in Cyberspace in February 2017

This article continues the series started with “The Sorry State of the Web in 2016“, showing various careless and irresponsible blunders on live websites.

Virtu Ferries

A friend reported that the website for Virtu Ferries accepts credit card details over a non-HTTPS connection, specifically when you create a new booking. When I went in and checked, I confirmed this, but also found a number of other issues.

We can start off with a validation error that appears in an orange box in Italian, even though we are using the English version of the website:

Then, we can see how this website really does accept credit card details over an HTTP (as opposed to HTTPS) connection:

This is similar to Lifelong Learning (refer to “The Sorry State of the Web in 2016” for details on that case and why it is bad) in that it uses an HTTPS iframe within a website served over plain and unencrypted HTTP. I have since confirmed that this practice is actually illegal in Malta, as it violates the requirements of the Data Protection Act in terms of secure transmission of data.

Given that the website accepts credit card details over an insecure connection, you obviously wouldn’t expect it to do any better with login forms and passwords:

If you take long to complete the booking, your transaction times out, and you are asked to “Press Advance to Retry”:


But when you do actually press the Advance button, you get a nice big ASP .NET error:

This is really bad because not only is the website broken, but any errors are actually visible from outside the server, as you can see above. This exposes details about what the code is doing (from the stack trace), third party libraries in use (Transactium in this case), and .NET Framework and ASP .NET versions. This is a serious security problem because it gives potential attackers a lot of information that they can use to look for flaws in the web application or the underlying infrastructure.

Lost in Cyberspace

At the bottom of the Virtu Ferries website, you’ll find that it was developed by Cyberspace Solutions Ltd. By doing a quick Google search, we can find a lot of other websites they made that have serious problems, mainly related to insecure transmission of credentials over the internet.

For example, BHS, with its insecure login form:

Same thing for C. Camilleri & Sons Ltd.:

And for Sound Machine:

The Better Regulation Unit displays a big fancy padlock next to the link where you access a supposed “Protected Area”:

…but in reality, the WordPress login form that it leads you to is no more secure than the rest of the site (so much for better regulation):

Malta Dockers Union: same problem with an insecure login form:

Malta Yachting (the one with the .mt at the end) has a less serious and more embarrassing problem. If you actually click on the link that is supposed to take you back to the Cyberspace Solutions website, you find that they can’t even spell their company name right, AND they forgot the http:// part in their link, making it relative:

Another of Cyberspace Solutions’ websites is Research Trust Malta. From the Google search results of websites developed by Cyberspace, you could already see that it had, in fact, been hacked:


Investing in research indeed. This has since been fixed, so perhaps they are investing in better web developers instead.

This is quite impressive: all this mess has come from a single web development company. It really is true that you can make a lot of money from low quality work, so I kind of understand now why most software companies I know about just love to cut corners.

ooii

ooii.com.mt, a website that sells tickets for local events, has the same problem of accepting login information over an insecure connection.

I haven’t been able to check whether they accept credit card information in the same way, since they’ve had no upcoming events for months.

Tallinja

Similar to many airlines, Malta Public Transport doesn’t like apostrophes in surnames when you apply for a tallinja card:

In fact, they are contesting the validity of the name I was born with, that is on all my official identification documents:

Summary

This article was focused mainly on websites by Cyberspace Solutions Ltd, not because I have anything against them but because they alone have created so many websites with serious security problems, some of which verge on being illegal.

You might make a lot of money by creating quick and dirty websites, but that will soon catch up with you in terms of:

  • Damage to your reputation, threatening the continuity of your business.
  • The cost of having to deal with support (e.g. when the blog you set up gets hacked).
  • Getting sued by customers when something serious happens to the website, or by their clients when someone leaks out their personal data.
  • Legal action from authorities due to non-compliance with data protection legislation.

How To Be An Asshole, By Example

Denis Leary came up with some really creative ways to be an asshole back in 1993. However, nowadays we have more modern ways to piss people off, as I discovered from some recent encounters. I bet Denis wasn’t expecting any of these when he wrote that song.

LA Metro

It’s not enough for the Los Angeles Metro system to be completely unreliable in terms of punctuality or operation. They even have to confuse people by having trains appear on the wrong track. In the photo above, the train to Union Station should be on the track to the left, but it just arrived on the track on the right, which is supposed to be destined for North Hollywood.

“Microsoft Edge is faster than Chrome”

Long after Microsoft was forced to give Windows users a decent choice of browsers (because shipping Internet Explorer with Windows is the only thing that gave such a hopeless browser a leading position in the market for so many years), it is still pulling dirty tricks to try and promote adoption of its web browsers. In this screenshot sent in by a friend, we can see how Windows 10 pathetically tries to win Chrome users over to Microsoft’s more recent Edge browser, saying that “Microsoft Edge is faster than Chrome”.

Similar popups include “Microsoft Edge is safer than Firefox” and “Chrome is draining your battery faster”.

I’ve seen these kinds of filthy tactics carried out by politicians for years, but never thought they would be used between web browsers.

Universal Studios Hollywood WiFi

At the time of writing this article, it costs at least $105 to get into the Universal Studios Hollywood theme park. So it is really shameless to put a condition like “Your information will be shared with Comcast XFINITY and Universal Theme Parks for promotional purposes” in order to use free WiFi. Just give them a fake email address, and you can use WiFi without being spammed.

Feedback Touchscreen in Restroom

In recent years, a lot of our digital interactions have been revolutionised by simple touch gestures. However, having a touchscreen for feedback at the Malta International Airport’s restrooms is probably taking this too far. I mean it’s ok if you assume everybody washes their hands. But can you really assume that?

The way they ask is also very awkward at best:

“How was your experience at this washroom today?”

Uhhh, do you really want the details?

Stone from the Azure Window

Just a day after the collapse of the Azure Window in Gozo (Malta), with many people mourning the loss of a national icon, an opportunist is selling what he claims to be “original stone from the collapsed Azure Window Gozo (Malta)”:

This person gives a bit more detail in the item description:

“Item specifications : Piece of Azure Window rock approx 100g

“Many are asking how can they be sure that the rock is from the mentioned area? Well all I can say is that I am a local and have access to location in less than 10 min drive. I plan to dive in the area and maybe even collect pieces from the sea bed 😉

“Thanks”

I guess this one needs no further comment.

The Weeping Web of January 2017 (Part 2)

This is a continuation of my previous article, “The Weeping Web of January 2017 (Part 1)“.  It describes more frustrating experiences with websites in 2017, a time when websites and web developers should have supposedly reached a certain level of maturity. Some of the entries here were contributed by other people, and others are from my own experiences.

EA Origin Store

When resetting your password on the EA Origin Store, the new password you choose has a maximum length validation. In this particular case, your password cannot be longer than 16 characters.

This is an incredibly stupid practice, for two reasons. First, we should be encouraging people to use longer passwords, because longer passwords are harder to brute force. Second, any system that is properly hashing its passwords (or, even better, using a hashing algorithm with a work factor) knows that the hash of a password is a fixed-length string regardless of the original input length, so the password is not subject to any maximum column length in a database.
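
To illustrate the fixed-length point, here is a small sketch (SHA-256 is used purely for brevity; a real system should use a dedicated password hashing algorithm with a work factor, such as bcrypt or PBKDF2):

using System;
using System.Security.Cryptography;
using System.Text;

class HashLengthDemo
{
    static void Main()
    {
        using (var sha256 = SHA256.Create())
        {
            // A short password and a 500-character password both hash to 32 bytes,
            // so the stored value never grows with the length of the input.
            foreach (var password in new[] { "short", new string('x', 500) })
            {
                byte[] digest = sha256.ComputeHash(Encoding.UTF8.GetBytes(password));
                Console.WriteLine(password.Length + " chars in -> " + digest.Length + " bytes out");
            }
        }
    }
}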

Untangled Media

If you scroll through the pictures of the team at Untangled Media, you’ll see that the last one is broken. Ironically, it seems that that person is responsible for content.

Needless to say, broken images give a feeling of neglect that is reminiscent of the mythical broken window from The Pragmatic Programmer.

Outlyer on Eventbrite

Another thing that makes sites (and any written content, for that matter) look unprofessional is typos. If you’re sending an SMS to a friend, a typo might be acceptable. If you’re organising an event to launch a product, three typos in the same sentence don’t give a very good impression.

BRND WGN

The first thing you see on the BRND WGN website is an animation taking up the whole screen, switching around frantically like it’s on drugs:

There are only three things you can do to learn more about what the site has to offer: play a video, click on (literally) a hamburger menu, or scroll down.

While I’m not sure this can be reasonably classified as mystery meat navigation, it does no favours to the visitor, who has to take additional actions to navigate the site. The hamburger icon might be intended as a cutesy joke, but it looks silly on what is supposed to be a professional branding website, and it hides the site’s navigation behind an additional layer of indirection.

This is a real pity, because if you scroll to the bottom, the site actually does have well laid out navigation links you can use to get around the site! These should really be the first thing a visitor sees; it makes no sense that they are hidden at the bottom of the page.

I also noticed that if you click on that hand in the bottom-right, you get this creepy overlay:

The only reasonable reaction to this is:

Image credit: taken from here.

Daphne Caruana Galizia

The controversial journalist and blogger who frequently clashes with public figures would probably have a bone to pick with her webmaster if she knew that, while she was logged in last week, the dashboard header of her WordPress site was visible even to visitors who were not logged in:

While this won’t let anyone into the actual administrative facilities (because a login is still requested), there’s no denying that something went horribly wrong to make all this visible, including Daphne’s own username (not shown here for security reasons).

Identity Malta

The Identity Malta website has some real problems with its HTTPS configuration. In fact, Firefox is quick to complain:

This analysis from Chrome, sent in by a friend, shows why:

Ouch. It defeats the whole point of using SSL certificates if they are not trusted. But that’s not all. Running a security scan against the site reveals the following:

Not only is the certificate chain incomplete, but the scan identified a more serious vulnerability (obfuscated here). An institution dealing with identity should be a little more up to speed with modern security requirements than this.

Another (less important) issue is with the site’s rendering. As you load the page the first time or navigate from one page to another, you’ll notice something happening during the refresh, which is pretty much this:

There’s a list of items that gets rendered into a horizontally scrolling marquee-like section:

Unfortunately, this transformation is so slow that it is noticeable, making the page load look jerky at best.

Battle.net

I personally hate ‘security’ questions, because they’re insecure (see OWASP page, engadget summary of Google study, and Wired article). Nowadays, there’s the additional trend of making them mandatory for a password reset, so if you forget the answer (or intentionally provide a bogus one), you’re screwed and have to contact support.

If you don’t know the answer to the silly question, you can use a game’s activation code (haven’t tried that, might work) or contact support. Let’s see what happens when we choose the latter route.

Eventually you end up in a form where you have to fill in the details of your problem, and have to provide a government-issued photo ID (!). If you don’t do that, your ticket gets logged anyway, but ends up in a status of “Need Info”:

The idea is that you need to attach your photo ID to the ticket. However, when you click on the link, you are asked to login:

…and that doesn’t help when the problem was to login in the first place.

It’s really a pain to have to go through all this crap when it’s usually enough to just hit a “Reset Password” button that sends you an email with a time-limited reset link. Your email is something that only you (supposedly) have access to, so it identifies you. If someone else tried to reset your password, you just ignore the email, and your account is still fine. In case your email gets compromised, you typically can use a backup email address or two-factor authentication involving a mobile device to prove it’s really you.
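
To give an idea of how little is involved on the website’s side (a hypothetical sketch, not code from any of the sites mentioned), issuing and checking a time-limited reset token boils down to something like this:

using System;
using System.Security.Cryptography;

class PasswordResetSketch
{
    // Generate a random token to embed in the reset link emailed to the user.
    // The server stores the token (ideally a hash of it) together with an expiry time.
    static string GenerateResetToken()
    {
        byte[] bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);

        return Convert.ToBase64String(bytes);
    }

    // The reset is honoured only if the token matches and has not yet expired.
    static bool IsResetAllowed(string presentedToken, string storedToken, DateTime expiresUtc)
    {
        return DateTime.UtcNow < expiresUtc && presentedToken == storedToken;
    }
}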

Security questions are bullshit; they provide a weak link in the security chain and screw up user experience. Let’s get rid of them sooner rather than later.

Malta Health Department

It is a real pity when a government department’s website loses the trust supposedly provided by HTTPS just because it uses a few silly images that are delivered over HTTP.

The Economist

Remember how you could read any premium article on The Times of Malta by just going incognito in your browser (see “The Sorry State of the Web in 2016“)? Seems The Economist has the same problem.

Article limit…

…no article limit…

Remember, client-side validation is not enough!

On a Positive Note, Mystery Meat Navigation

I’m quite happy to see that mystery meat navigation (MMN) seems to be on its way out, no doubt due to the relatively recent trend of modern websites with simple and clear navigation. I haven’t been able to find any current examples of MMN in the first five pages of Google results when searching for local web design companies, so it’s clear that the local web design industry has made great strides compared to when I wrote the original MMN article.

Summary

This is the third article in which I’ve been pointing out problems in various websites, both local and international. After so many years of web development, designs might have become prettier but lots of websites are still struggling with fundamental issues that make them look amateurish, dysfunctional or even illegal.

Here are some tips to do things properly:

  • If you’re accepting sensitive data such as credit cards or passwords as input, you have to have fully-functional HTTPS.
  • Protect yourself against SQL injection by using parameterised queries or a proper ORM.
  • Test your website. Check various kinds of inputs, links, and images. Don’t waste people’s time or piss them off.
  • Use server-side validation as well as client-side validation.
  • Ensure you have proper backup mechanisms. Shit happens.

The Weeping Web of January 2017 (Part 1)

Not even a month has passed since I wrote “The Sorry State of the Web in 2016“, yet I already find myself having to follow up with new material detailing things that should be things of the past. Because in 2017, we really should know better. Some of the entries here were contributed by other people, and others are from my own experiences.

[Credit: image taken from here]

GitLab

You might have heard a few times how a company did something really stupid that messed up its business and reputation, like the Patreon Security Breach. Well, just today, GitLab went down with a bang:

How did that happen?

Ouch. But everyone makes mistakes, right? Let’s see the incident report (emphasis mine):

  1. “LVM snapshots are by default only taken once every 24 hours. YP happened to run one manually about 6 hours prior to the outage
  2. Regular backups seem to also only be taken once per 24 hours, though YP has not yet been able to figure out where they are stored. According to JN these don’t appear to be working, producing files only a few bytes in size.
  3. Disk snapshots in Azure are enabled for the NFS server, but not for the DB servers.
  4. The synchronisation process removes webhooks once it has synchronised data to staging. Unless we can pull these from a regular backup from the past 24 hours they will be lost
  5. The replication procedure is super fragile, prone to error, relies on a handful of random shell scripts, and is badly documented […]
  6. Our backups to S3 apparently don’t work either: the bucket is empty
  7. We don’t have solid alerting/paging for when backups fails, we are seeing this in the dev host too now.

“So in other words, out of 5 backup/replication techniques deployed none are working reliably or set up in the first place. => we’re now restoring a backup from 6 hours ago that worked”

This explains where the name “GitLab” came from: it is a lab run by gits. Honestly, what is the point of having backup procedures if they don’t work, and were never even tested? You might as well save the time spent on setting them up and instead use it for something more useful… like slapping yourself in the face.

Booking.com

Like its airline cousins, booking.com is a bit touchy when it comes to input data. In fact, if you’ve got something like a forward slash or quotes in your address, it will regurgitate some nice HTML entities in the relevant field:

Smart Destinations

The problems I’ve had with my European credit card not being accepted by American websites (usually due to some validation in the billing address) apparently aren’t limited to US airlines. Just yesterday, while trying to pay for a Go Los Angeles card, I got this:

Hoping to sort out the issue, I went to their contact form to get in touch. After taking the time to fill in the required fields:

…I found to my dismay that it doesn’t actually go anywhere:

So much for the response within 24 hours. The destinations may be smart, but the developers not so much.

Ryanair

I’ve been using Ryanair for a while, so I recently thought: why not register an account, to be able to check in faster? So I did that.

Last week, I opted to do my online check-in as a Logged In User™. When I logged in, I got this:

I found out from experience that you’re better off checking in the usual way (e.g. with email address and reservation number). At least it works.

Super Shuttle

Booking with Super Shuttle involves a number of steps, and between each one, you get a brief “loading”-style image:

As you would expect, it sits on top of an overlay that blurs the rest of the page and prevents interaction with it. Unfortunately, this has a bad habit of randomly getting stuck in this situation, forcing you to restart the whole process.

Another thing about Super Shuttle is that you can actually include a tip while you’re booking:

Wait. Why would anyone in their right mind want to tip the driver before they have even received the service? What if the service actually sucks?

Malta VAT Department

If you go to VAT Online Services, and try to login at the “Assigned or Delegated Services” section…

…you see an error page that seems like it survived both World Wars.

Well, at least it’s secure!

To Be Continued…

Adding all the entries for January 2017 into this article would make it too long, so stay tuned for Part 2!

If you have any similar bad experiences with websites, send them in!

Announcing Ultima 1 Revenge

I am currently working on an engine port of Ultima 1: The First Age of Darkness, called Ultima 1 Revenge. This means I am reverse engineering the game files and building a new game engine for it, using C++ and SDL2.

Ultima 1: The First Age of Darkness was one of the first open-world Computer Role Playing Games (CRPGs). Originally released in 1981 and remade for the PC in 1986, Ultima 1 was followed by a series of games that lasted almost 30 years, generated a cult following, inspired countless other RPGs, and pushed the boundaries of technology.

Ultima 1 is a fairly weird game, featuring an unusual combination of medieval fantasy and space travel. The world of Sosaria is being ravaged by the monsters of the evil wizard Mondain. Before you can face him in battle, you have to complete dungeoneering quests in the service of the lords of the land, become a space ace, free a princess, and travel back in time using a time machine.

The 1986 PC remake, on which the Ultima 1 Revenge project is based, is very old technology, by today’s standards. Still, it provides a vast array of learning areas. The game’s graphics are made up of three tilesets (CGA, EGA, and Tandy 1000), giving a choice for the differently powered machines of the time. The game world is stored in a small map file, where each four bits is an index into the tileset you’re using. Space travel is a combination of top-down 2D and first-person views. The dungeons are simple 3D-like line drawings, randomly generated based on a seed stored in the savegame file (so they remain consistent for each playthrough, but change if you start a new game). The different parts of the game run in different executables, and a special savegame file is used to pipe the player state from one to the other. Savegames mostly use 16-bit numbers, with the least significant byte stored first. Decoding the game files is an ongoing effort that powers tools such as the online map viewer I built in 2015, and the engine itself.
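
To give a rough idea of what that decoding involves, here is a small sketch in C# (chosen for readability; the actual engine is written in C++). The file names, offsets and nibble order are assumptions made purely for illustration:

using System;
using System.IO;

class Ultima1FormatSketch
{
    static void Main()
    {
        // Each byte of the world map packs two 4-bit tile indices.
        byte[] map = File.ReadAllBytes("MAP.BIN"); // hypothetical file name
        byte packed = map[0];
        int firstTile = (packed >> 4) & 0x0F;  // assuming the high nibble comes first
        int secondTile = packed & 0x0F;

        // Savegame values are mostly 16-bit, least significant byte first (little-endian).
        byte[] save = File.ReadAllBytes("PLAYER.U1"); // hypothetical file name
        ushort value = (ushort)(save[0] | (save[1] << 8)); // hypothetical offset

        Console.WriteLine("Tiles: " + firstTile + ", " + secondTile + "; value: " + value);
    }
}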

Today, I have released a demo of the engine. So if you own a copy of Ultima 1 (if you don’t, you can grab a copy from GOG), grab it from the downloads page, set the path to your original Ultima 1 folder in the settings file, and take a tour of Sosaria!

Making Webpages Printer Friendly

The sample from this article is available at the Gigi Labs BitBucket repository.

Introduction

I am a bit of a living paradox. Despite being a millennial and a techie, I often identify myself better with earlier generations: I like 20th century entertainment (video games, movies and music) a lot better than the more contemporary stuff, I stay away from overhyped tech I don’t need, and… I like to read text printed on good old paper a lot better than reading from a computer monitor or a tiny mobile phone screen.

Back when I launched my first website in 2002, it was common for content-based websites that wanted to be printable to provide a separate “printer-friendly version” of the page. This practice is still common today. It seemed like a lot of hassle to me at the time, though, to have to maintain two separate versions of the same content.

Fortunately, it was not long before I discovered @media print. Indeed, the same media queries we use nowadays to allow our websites to dynamically adapt to different screen sizes (also known as responsive web design) already existed in limited form back then in CSS2. I might have used @media print as early as 2003. If nowadays we give so much importance to different output devices such as mobile phones and tablets, there is really no reason why we should disregard printers.

Showcase

Microsoft’s article on How to auto scale a cloud service is a fine example of a great printer-friendly page. Look, the page itself has lots of navigation and stuff on the side that you don’t care about when reading a printed page:

But when you go and print, poof, all that extra stuff is gone, and you are left with a perfect, content-only page:

Not all websites have this consideration for printers. A few months ago, I opened a bug report about the Microsoft Orleans documentation not being printer-friendly. As you can see in the bug report, while the navigation was indeed being hidden, it was still taking up space, resulting in lots of wasted whitespace on the side. There was also a problem with some text overlaying the content. This was identified as a bug in DocFX, the software used to generate the Orleans documentation, and has since been fixed.

Forbes is a much worse offender. Look at how much ink and paper you have to waste on empty space, ads, videos, and stock photos, when all you want to do is read the article:

Well, at least there is some way you can read the content. Let’s now take a look at the Akka .NET documentation, for instance, the Akka.Cluster Overview:

This looks like something I should easily be able to print and read on a plane, right? Let’s try that.

The Akka .NET team take the prize here, because the printed version of their documentation is a lot more interesting than the version you read online.

Using @media print

Making a page printer-friendly is not rocket science. Essentially, all you need to do is hide the stuff you don’t need (e.g. navigation) and resize your content to make full use of the available space. You can do this easily with @media print; the rules you specify inside its context will apply only to printing devices:

@media print
{
    /* Hide navigation etc */
}

Let’s take a really simple example of how to do this. This is a website layout that you can create in a few minutes. It consists of a main heading, a left navigation, and a main content section:

<!doctype html>
<html>
    <head>
        <title>My Website</title>
        <link rel="stylesheet" type="text/css" href="style.css">
    </head>
    <body>
        <nav id="leftnav">
            <ul>
                <li>Home</li>
                <li>About</li>
                <li>etc...</li>
            </ul>
        </nav>
        <header>My Website</header>
        
        <section id="main">
            <header>Content!</header>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras tincidunt mollis tellus, eget maximus enim mollis lacinia. Phasellus suscipit bibendum tristique. Etiam laoreet justo ac erat tempus volutpat. Nam auctor viverra commodo. Duis magna arcu, tristique eget felis sit amet, placerat facilisis ipsum. Sed nibh dolor, congue quis ultricies sit amet, bibendum non eros. Maecenas lectus dolor, elementum interdum hendrerit vitae, placerat a justo. Sed tempus dignissim consectetur. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Praesent tincidunt hendrerit metus, eget egestas odio volutpat scelerisque. Suspendisse sit amet bibendum eros.</p>
            <p>Nullam vestibulum blandit velit, ornare porttitor quam vehicula ac. Nam egestas orci quis orci porttitor, at lacinia risus faucibus. Vestibulum ut purus nibh. Ut ac metus magna. Nunc dictum magna non molestie luctus. Cras ornare dolor nec leo posuere, at ullamcorper lectus cursus. Curabitur pellentesque sem et nibh pellentesque pulvinar. Vestibulum non libero fermentum, luctus libero et, molestie nibh. Etiam ligula enim, mollis et ullamcorper vitae, dapibus eu tortor. Praesent pharetra volutpat orci, non lacinia leo consectetur in. Nulla consequat arcu dignissim eros ultrices, sit amet luctus ipsum ultricies.</p>
            <p>Cras non tellus leo. Cras malesuada sollicitudin mi quis tincidunt. Morbi facilisis fermentum aliquam. Donec tempor orci est, id porta massa varius a. Phasellus pharetra arcu nisl, at eleifend magna rhoncus et. Mauris fermentum diam eget accumsan dignissim. Vivamus pharetra condimentum ante, eget ultricies nisi ornare quis. Aliquam erat volutpat.</p>
            <p>Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Suspendisse placerat nibh sit amet erat dignissim rhoncus. Suspendisse ac ornare augue. Proin metus diam, convallis a dolor eget, gravida auctor orci. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nullam at risus non neque tincidunt aliquam. Nulla convallis interdum imperdiet. Vivamus condimentum erat eget lacus tempor, id vulputate tortor condimentum. Cras ipsum velit, cursus eu sodales vitae, ornare a magna. Mauris ullamcorper gravida vestibulum. Phasellus rhoncus metus nec nulla finibus sagittis. Ut sit amet enim in purus viverra tempor vel convallis tellus.</p>
            <p>Nulla maximus urna non leo eleifend, sed efficitur libero gravida. Curabitur quis velit quam. Vivamus ac sem dui. Suspendisse quis sagittis enim. Cras feugiat nibh in lorem faucibus lobortis. Quisque sit amet nisl massa. Fusce finibus facilisis erat sed dictum. Donec ac enim ut est mattis bibendum pharetra non arcu. In hac habitasse platea dictumst. In mattis, justo non tempor convallis, ex enim luctus velit, a facilisis quam erat nec augue. Nam ac metus vel velit commodo iaculis quis a magna. Vestibulum volutpat sapien lorem, et convallis lectus lacinia non. Vestibulum fermentum varius rutrum. Etiam dignissim leo at pulvinar posuere. Sed metus nibh, commodo vitae tincidunt sit amet, pretium ut nibh. In pellentesque dui egestas, rhoncus velit nec, lobortis magna.</p>
        </section>
    </body>
</html>

We’ll throw in some CSS to put each part in its place and make the whole thing look at least semi-decent:

body
{
    margin-left: 200px;
    font-family: Arial;
    background-color: #EEE;
}

nav#leftnav
{
    position: absolute;
    margin-left: -180px;
    margin-top: 78px;
    width: 150px;
    padding-left: 10px;
    background-color: #E3E3E3;
    border-radius: 8px;
    border: 2px solid #E7E7E7;
}

nav#leftnav ul
{
    padding: 0px;
}

nav#leftnav li
{
    list-style-type: none;
}

body > header
{
    text-align: center;
    font-size: 24pt;
    font-weight: bold;
    padding-bottom: 42px;
    margin-top: 26px;
}

section#main
{
    background-color: #E3E3E3;
    margin-right: 20px;
    padding: 8px;
    border-radius: 8px;
    border: 2px solid #E7E7E7;
}

section#main > header
{
    font-size: 16pt;
    font-weight: bold;
    margin-left: 8px;
    border-bottom: dashed 2px #FFF;
}

So when you fire it up in a browser, it looks like this:

Now if we print this, we get:

Not good! Let’s fix this with a little printer-specific CSS:

@media print
{
    body
    {
        margin-left: 0px; /* remove left indentation */
    }
    
    nav#leftnav
    {
        display: none; /* hide navigation */
    }
    
    section#main
    {
        border: 0px; /* remove border */
    }
}

It looks much better now:

You can see how easy it is to make content fit neatly in a printed page without waste. In fact, in this particular example, we saved having to print a second page for content that is perfectly capable of fitting in a single page.

Not all websites need to be printer-friendly. But if your website is full of content that is meant to be read, then making it printer-friendly is probably a good idea. Given how easy it is, there is no reason why you shouldn’t make a handful of people like me happy. 🙂

The Sorry State of the Web in 2016

When I republished my article “Bypassing a Login Form using SQL Injection“, it received a mixed reception. While some applauded the effort to raise awareness on bad coding practices leading to serious security vulnerabilities (which was the intent), others were shocked. Comments on the articles and on Reddit were basically variants of “That code sucks” (of course it sucks, that’s exactly what the article is trying to show) and “No one does these things any more”.

If you’ve had the luxury of believing that everybody writes proper code, then here are a few things (limited to my own personal experience) that I ran into during 2016, and in these first few days of 2017.

SQL Injection

I was filling in a form on the website of a local financial institution a few days ago, when I ran into this:

It was caused by the apostrophe in my surname which caused a syntax error in the SQL INSERT statement. The amateur who developed this website didn’t even bother to do basic validation, let alone parameterised queries which would also have safeguarded against SQL injection.
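
For reference, a parameterised query in C# looks something like this (the table, column and connection string are made up for the example):

using System.Data.SqlClient;

class CustomerRepository
{
    public static void SaveCustomer(string connectionString, string surname)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO Customers (Surname) VALUES (@surname)", connection))
        {
            // The surname is passed as a parameter, so an apostrophe in a name
            // like O'Brien can neither break the statement nor inject SQL.
            command.Parameters.AddWithValue("@surname", surname);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}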

Airlines and Apostrophes

My experience with airlines is that they tend to go to the other extreme. In order to keep their websites safe, they simply ban apostrophes altogether. This is a pain in the ass when your surname actually has an apostrophe in it, and airlines stress the importance of entering your name and surname exactly as they show on your passport.

United Airlines, for instance, believe that the surname I was born with isn’t valid:

Virgin America, similarly, takes issue with my whole full name:

We’re in 2017. Even shitty Air Malta accepts apostrophes. All you need to do is use parameterised queries or a proper ORM. Using silly and generic error messages doesn’t help avoid customer frustration.

Plagiarism

Speaking of Air Malta, here’s a classic which they ripped off from some other US airline:

US Federal law? In Malta? Go home, Air Malta. You’re drunk.

Don’t Piss People Off

I’ve had a really terrible experience with booking domestic flights with US airlines. There is always some problem when it comes to paying online with a VISA.

United Airlines, for instance, only accepts payments from a specific set of countries. Malta is not on that list, and there is no “Other” option:

Delta gives a variety of billing-address-related errors depending on what you enter.

Southwest provides fields to cater for payments coming from outside the US:

And yet, you need to provide a US State, Zip Code and Billing Phone Number.

The worst offender, though, is Virgin America. While the overall experience of their AngularJS website is quite pleasant, paying online is a hair-ripping experience. If you choose a country where the State field does not apply (such as Malta, or the UK), a validation error fires in JavaScript (it doesn’t appear in the UI) and does not let you proceed:

It’s almost like the developers of this website didn’t quite test their form. Because developers normally do test their code, right? Right?

Well, when I reported the error to Virgin, and offered to provide a screenshot and steps to reproduce, the support representative gave me this canned bullshit:

“So sorry for the web error. Can recommend using one of our compatible browsers chrome or safari. Clearing your cookies and cache.  If no resolve please give reservations a ring [redacted] or international [redacted] you’ll hear a beep then silence while it transfers you to an available agent.  Thanks for reaching out.~”

I had to escalate the issue just so that I could send in the screenshot to forward to their IT department. Similarly, I was advised to complete the booking over the phone.

Over a month later, the issue is still there. It’s no wonder they want people to book via telephone. Aside from the international call rate, they charge a whopping $20 for a sales rep to book you over the phone.

Use SSL for Credit Card And Personal Details

In July 2016, I wanted to book a course from the local Lifelong Learning unit. I found that they were accepting credit card details via insecure HTTP. Ironically, free courses (not needing a credit card) could be booked over an HTTPS channel. When I told them about this, the excuse I got in response was:

“This is the system any Maltese Government Department have been using for the past years.”

It is NOT okay (and it’s probably illegal) to transmit personal information, let alone credit card details, over an insecure channel. That information can be intercepted by unauthorised parties and leaked for the world to see, as has happened many times before thanks to large companies that didn’t take this stuff seriously.

To make matters worse, Lifelong Learning don’t accept cheques by post, so if you’re not comfortable booking online, you have to go medieval and bring yourself to their department to give them a cheque in person.

I couldn’t verify if this problem persists today, as the booking form was completely broken when I tried filling it a few days ago – I couldn’t even get to the payment screen.

Update 8th January 2017: I have now been able to reproduce this issue. The following screenshots are proof, using the Photo Editing course as an example. I nudged the form a little to the right so that it doesn’t get covered by the security popup.

Update 9th January 2017: Someone pointed out that the credit card form is actually an iframe served over HTTPS. That’s a little better, but:

  • From a security standpoint, it’s still not secure.
  • From a user experience perspective, a user has no way of knowing whether the page is secure, because the iframe’s URL is hidden and the browser does not show a padlock.
  • The other personal details (e.g. address, telephone, etc) are still transmitted unencrypted.

Do Server Side Validation

When Times of Malta launched their fancy new CMS-powered website a few years ago, they were the object of much derision. Many “premium” articles which were behind a paywall could be accessed simply by turning off JavaScript.

Nowadays, you can still access premium articles simply by opening an incognito window in your browser.

Let’s take a simple example. Here’s a letter I wrote to The Times a few years ago, which is protected by the paywall:


Back in 2014, I used to be able to access this article simply by opening it in an Incognito window. Let’s see if that still works in 2017:

Whoops, that’s the full text of the article, without paying anything!

As with the critics of my SQL injection article, you’d think that people today know that client-side validation is not enough, that it is easy to bypass, and that its role is merely to provide a better user experience and reduce unnecessary round trips to the server. The real validation still needs to be server-side.

Conclusion

Many people think we’re living in a golden age of technology. Web technology in particular is progressing at a breathtaking pace, and we have many more tools nowadays than we could possibly cope with.

And yet, we’re actually living in an age of terrible coding practices and ghastly user experience. With all that we’ve learned in over two decades of web development, we keep making the same silly mistakes over and over again.

I hope that those who bashed my SQL injection article will excuse me if I keep on writing beginner-level articles to raise awareness.

"You don't learn to walk by following rules. You learn by doing, and by falling over." — Richard Branson