Getting Started with Unity3D on Linux

If you have any sort of interest in game development, you’ve probably heard of Unity3D. And if you’ve used it before, you probably know that development was for a long time restricted to Windows and Mac. That changed recently, when Unity added Linux support. In this article, I’ll show you how I set up Unity3D on my Kubuntu 20.04 installation; if the distribution you’re using is close enough, the same steps will likely work for you as well.

First, go to the Unity3D Download page and grab the Unity Hub.

Download the Unity Hub, then open it.

After Unity Hub has finished downloading, run it. It’s distributed as an AppImage, so once you make it executable, you can either double-click it or run it from the terminal.
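
For reference, running it from the terminal looks roughly like this; the exact filename depends on the version you downloaded:

```shell
# Make the AppImage executable (needed once), then run it.
# Replace the filename with whatever your download is actually called.
chmod +x UnityHub.AppImage
./UnityHub.AppImage
```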

You have no valid licence… you filthy peasant!

Register an account on the Unity3D website if you don’t have one already. Once Unity Hub loads, it immediately complains about not having a licence. If you click “Manage License”, it will ask you to log in. You can click the resulting “Login” link, or else click the top-right icon and then “Sign in”, to log into Unity3D from Unity Hub.

This is where you sign in.
Reject cookies and log in. Social providers are under the cookie banner.

Click “Reject All” to opt out of cookies. Then, sign in using your email address and password. Alternatively, if you log into your account using a social identity provider, you’ll find different providers’ icons under the cookie banner.

Now you’re back in the Licence page of Unity Hub. Wait a few seconds for it to refresh, then click the “Activate New License” button:

After logging in, you can activate a new licence.

In the next window, select whichever options apply to you. If you’re just a hobbyist, Unity3D is free, so you can select the radio buttons as shown below. Click “Done” when you’re ready.

Choose the relevant options. Unity3D is free unless you’re a company making $100k or more.

You now have a licence! Click the arrow at the top-left to go to the Projects section.

Armed with a licence, go out of Preferences and back to the main sections.

If you try to add a new project, you’ll realise that you need to install a version of the Unity3D editor first. Head over to the Installs section to do this.

You can’t create a new project before you install a version of the Unity3D editor.

In the Installs section, click the “Add” button:

Add a version of the Unity3D editor from here.

Choose whichever version you prefer. The recommended LTS version is best if you need stability; if you’re just starting out, you don’t really need that and can go for the latest and greatest version with the newest features instead.

Choose which version of the Unity3D editor you want to install.

Click “Next”, and you can now choose which platforms you want your builds to target and what documentation you want. If you’re just starting out, keep it simple and just leave the default “Linux Build Support” enabled. You can always add more stuff later if/when you need it.

Choose which platforms you want to target, and which documentation you want to include. If you’re just starting out, you don’t really care.

Click “Done”, and wait for it to install…

Grab some coffee.

When it’s done, head back to the Projects section. Click the “New” button to create a new project.

In the next window, select the type of project (3D by default), give it a name, and select a folder where your Unity3D projects will go (the new project will be created as a subfolder of this). Then click the “Create” button:

Choose project type, name and location.

Wait for it…

Nice loading screen…

And… that’s it! The editor then comes up, and you can begin creating your game.

The Unity3D editor, finally running on Linux.

If you need a quick place to start, check out my “Unity3D: Moving an Object with Keyboard Input” tutorial here at Gigi Labs, as well as my early Unity3D articles at Programmer’s Ranch.

Highlighting Bitmasks with React

The trend of resources (such as memory and disk space) becoming more abundant is still ongoing, and as a result, software development has become quite wasteful. While it’s quite common for an average application to guzzle several gigabytes of RAM nowadays, many people are not even aware that it’s possible to pack several pieces of information into the same byte. In fact, old games managed to pack several pixels’ worth of data into a single byte, and this technique is still quite common for bit flags.

Bitmasks in Practice

Let’s consider a simple example: I have an RPG, where a character’s status can be one of the following values:

  • 1 = Paralysed
  • 2 = Poisoned
  • 4 = Diseased
  • 8 = Blind
  • 16 = Hungry
  • 32 = Fatigued

Using a power of two for each value means that I can use a single number to represent a combination of these values. For example:

  • 12 (decimal) = 001100 (binary) = Diseased and Blind
  • 2 (decimal) = 000010 (binary) = Poisoned
  • 63 (decimal) = 111111 (binary) = all six statuses apply
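
You can verify these combinations directly with JavaScript’s bitwise OR operator, which sets the bit for each flag in the combined value:

```javascript
const DISEASED = 4;
const BLIND = 8;

// Bitwise OR combines the flags into a single bitmask.
const status = DISEASED | BLIND;
console.log(status);              // 12
console.log(status.toString(2));  // "1100"

// All six statuses combined.
console.log(1 | 2 | 4 | 8 | 16 | 32); // 63
```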

Each of these individual statuses is a flag with a boolean value. They are independent of each other and can be set simultaneously. By combining them into a single variable (called a bitmask) as shown above, we can store them more efficiently, both in memory and on disk.

The downside is that it becomes less readable. In order to know that a value of 21 means Hungry, Diseased and Paralysed, you need to break it down into individual bits and look up what each one means. That’s tedious to do by hand, so we’ll build a little utility with React to help with this.

Listing the Bit Flags

We’re going to create a variation of the Filter List As You Type example that will list the individual flags and then highlight them based on user input. Start by creating a new React app. Once that’s done, open src/App.js and remove the logo import as well as everything inside the <header> tag so that you’re left with just this:

import './App.css';

function App() {
  return (
    <div className="App">
      <header className="App-header">

      </header>
    </div>
  );
}

export default App;

Next, create an object that maps the value of a flag to its description. This is just for display purposes. Inside the <header> element, use a simple map() function to display the flags (value and description) in a table:

import './App.css';

function App() {
  const flags = {
    1: 'Paralysed',
    2: 'Poisoned',
    4: 'Diseased',
    8: 'Blind',
    16: 'Hungry',
    32: 'Fatigued'
  };

  return (
    <div className="App">
      <header className="App-header">
        <table>
          <tbody>
          {
            Object.keys(flags).map(x => (
              <tr key={x}>
                <td>{x}</td>
                <td>{flags[x]}</td>
              </tr>))
          }
          </tbody>
        </table>
      </header>
    </div>
  );
}

export default App;

If you run npm start, you should be able to see the list of flags:

Listing the bit flags

Highlighting Bit Flags

Next, we’ll accept a bitmask (as a decimal number) as user input, and use it to highlight the relevant flags. This is very similar to what we did in Filter List As You Type with React, so start off by adding the relevant imports at the top of the file:

import React, { useState } from 'react';

Next, add the following to capture the state of the input field:

const [input, setInput] = useState('');

Add a text field right above the table:

        <input id="input"
          name="input"
          type="text"
          placeholder="Enter a bitmask in decimal"
          value={input}
          onChange={event => setInput(event.target.value)}
        />

Finally, we need to do the highlighting part. For this, we’ll add a getHighlightStyle() helper function, and use it on each row. The following is the full code for this article:

import React, { useState } from 'react';
import './App.css';

function App() {
  const flags = {
    1: 'Paralysed',
    2: 'Poisoned',
    4: 'Diseased',
    8: 'Blind',
    16: 'Hungry',
    32: 'Fatigued'
  };

  const [input, setInput] = useState('');

  const getHighlightStyle = flagValue =>
    (input & flagValue) > 0 ? { backgroundColor: 'red' } : null;

  return (
    <div className="App">
      <header className="App-header">
        <input id="input"
          name="input"
          type="text"
          placeholder="Enter a bitmask in decimal"
          value={input}
          onChange={event => setInput(event.target.value)}
        />
        <table>
          <tbody>
          {
            Object.keys(flags).map(value => (
              <tr key={value} style={getHighlightStyle(value)}>
                <td>{value}</td>
                <td>{flags[value]}</td>
              </tr>))
          }
          </tbody>
        </table>
      </header>
    </div>
  );
}

export default App;

We’re using the bitwise AND operator (&) to do a binary AND between the input and each flag. Let’s say the user enters 3 as the input. That’s 000011 in binary; so:

  • 000011 AND 000001 (Paralysed) results in 000001 (greater than zero);
  • similarly, 000011 AND 000010 (Poisoned) results in 000010 (also greater than zero);
  • however, 000011 AND 000100 (Diseased) results in 000000 (not greater than zero);
  • and so on.
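
You can check these results directly in JavaScript:

```javascript
const input = 3; // 000011 in binary

// A non-zero result means the flag's bit is set in the bitmask.
console.log((input & 1) > 0);  // true  - Paralysed bit is set
console.log((input & 2) > 0);  // true  - Poisoned bit is set
console.log((input & 4) > 0);  // false - Diseased bit is not set
```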

This is a common way of determining whether individual bits are set, and it works quite nicely:

Flags are highlighted based on the bitmask in the text field

So that’s it: we’ve made a simple tool with React to help make sense of bitmasks, and hopefully learned a bit about bitmasks and bitwise operators along the way.

Pitfalls of AutoMapper

AutoMapper is a very popular .NET library for automatically mapping objects (e.g. database model to DTO). The mapping is done without having to write code to explicitly copy values over (except in cases where the mapping isn’t obvious, such as where property names don’t match or where transformations are necessary).

Surely, a library with over 144 million downloads on NuGet that lets you write less code is nothing but awesome, right? And yet, AutoMapper is one of four libraries that caused me nothing but grief, along with MediatR (by the same author), Entity Framework, and FluentValidation.

When writing code, there are four main things I value:

  • Validity: the code does what it’s supposed to do without defects.
  • Performance: the code isn’t inefficient in ways that impact the application.
  • Maintainability: the code is easy to understand, reason about and debug. This point is entirely pragmatic, as opposed to the “clean code” camp that idolises code for its own sake.
  • Timeliness: the chosen approach should enable the solution to be delivered in a timely manner.

Thus, as I dissect AutoMapper in this article, I will do so in terms of these points.

Compile-Time vs Runtime

In my 2017 article “The Adapter Design Patterns for DTOs in C#”, I briefly mentioned AutoMapper and listed three reasons why I wasn’t a fan. The first of these was:

“The mapping occurs dynamically at runtime. Thus, as the DTOs evolve, you will potentially have runtime errors in production. I prefer to catch such issues at compile-time.”

The Adapter Design Patterns for DTOs in C#

For me, this is still one of the most important reasons to avoid AutoMapper. The whole point of using a statically typed language is that you can avoid costly runtime problems by catching errors at compile-time. Yet, using AutoMapper bypasses the compiler entirely, and maps objects dynamically at runtime.

This inevitably leads to bugs as the software evolves. Software Development Sucks’ 2015 article “Why Automapping is bad for you” (hereinafter referred to as just “the SDS article”) has a very good example illustrating how simply renaming a property will cause the mapping to silently break (i.e. a field will go missing) without the compiler ever complaining.

Mapping as Logic

In recent years, declarative forms of programming have become a lot more popular, perhaps due to the growing influence of functional programming. In .NET libraries, this approach tends to manifest itself in the form of fluent syntax, which you can find in ASP .NET Core pipeline configuration, AutoMapper mapping configuration, and FluentValidation, among many other places.

However, mapping is actually logic, and you can’t treat it like declarative configuration. How do you debug it? How do you understand what it’s doing when complex object graphs are involved? That’s right, you can’t. Mapping needs to happen in code that you can reason about and debug, just like any other logic.

Also, teams that use AutoMapper normally end up with heaps of these declarative mapping rules, which defeats the purpose of not writing mapping code in the first place. While I can understand that there will be fewer properties to map manually, it’s not like it takes hours to write a few extra lines of code to map a few properties.

External Dependencies

The biggest problems I’ve seen with the use of AutoMapper relate to when certain properties are computed based on external dependencies, such as getting something from an API or database, or some expensive computation. To be fair, these problems are not intrinsic to AutoMapper at all, but they are still a valid part of this discussion.

Let’s say we have a UserId property that needs to be mapped to a full User (object) property in a destination object. This would require asynchronously requesting the User object from a database (or cache, or API, etc). However, the first obstacle we face is that AutoMapper doesn’t support async resolvers, and never will.

So how do we map something that requires an async call? Well, the first way I saw people do this is by running the async code synchronously, blocking on the .Result property. As I hope you’re aware, this is a terrible idea that can (and did) result in deadlocks (see “Common Mistakes in Asynchronous Programming with .NET”).

But there’s more to it than that. If you have any expensive call, async or otherwise, you should probably be doing it in batches rather than for each object you map (see “Avoid await in Foreach“), because it makes a huge difference in performance.
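
To illustrate the batching idea (sketched in JavaScript rather than C#, for brevity), here’s the difference between one lookup per object and a single batched lookup. The fetchUsersByIds function and its in-memory data are hypothetical stand-ins for a real database or API call:

```javascript
// Hypothetical batched lookup: one round-trip resolves all requested users.
// Simulated with an in-memory map; a real version would hit a DB or API.
const db = new Map([[1, 'Alice'], [2, 'Bob'], [3, 'Carol']]);

async function fetchUsersByIds(ids) {
  return ids.map(id => ({ id, name: db.get(id) }));
}

async function mapDtos(dtos) {
  // One batched call for all user IDs, instead of one await per DTO.
  const users = await fetchUsersByIds(dtos.map(dto => dto.userId));
  const usersById = new Map(users.map(u => [u.id, u]));
  return dtos.map(dto => ({ ...dto, user: usersById.get(dto.userId) }));
}

mapDtos([{ userId: 1 }, { userId: 3 }]).then(result => console.log(result));
```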

So, it’s actually a good thing that AutoMapper doesn’t support async mapping operations, because they shouldn’t be done as part of the DTO mapping at all, whether you’re using AutoMapper or doing it manually. It violates both the Single Responsibility Principle (mapping code doing more than just mapping) and Dependency Inversion (you’re hiding dependencies within your mapping code, possibly by using a Service Locator… how do you write unit tests for that?), and it also has very serious stability and performance consequences as mentioned above.

Reflection

Libraries that do runtime magic like AutoMapper or inversion of control (IoC) containers use Reflection, which by its very nature is slow. While this is not likely to have an impact in most cases, and therefore we should not dismiss such tools before ascertaining that they have a measurable impact, it is still useful to be aware of this.

AutoMapper may result in performance costs both at startup (when mapping configurations are set up) and during processing (when actually mapping happens), and these costs might not be negligible for larger projects. As a result, it is good to measure these costs to ensure they don’t get out of hand.

So What Should We Use Instead?

As it turns out, the C# language is perfectly capable of doing DTO mapping, and no library is actually necessary (the same argument applies to FluentValidation, by the way).

The SDS article suggests simply using methods for mapping, and goes on to list the benefits in terms of refactoring, testing, performance and code quality. My own “The Adapter Design Patterns for DTOs in C#” takes this a step further and suggests using extension methods as an elegant, decoupled solution.

Productivity

You might have noticed that I’ve talked a lot about Validity, Performance and Maintainability, but I haven’t really mentioned Timeliness at all. Surely AutoMapper is a time-saver, as we have to write a lot less code? Well, no, actually.

The SDS article compares manual mapping to AutoMapper, and concludes that the latter actually requires a lot more effort (it’s not all about lines of code). I also argued in “The Adapter Design Patterns for DTOs in C#” that “writing AutoMapper configuration can get very tedious and unreadable, often more so than it would take to map DTOs’ properties manually”.

But beyond that, as I mentioned earlier, it actually doesn’t take a lot of time to write a few lines of code to map properties from one object to another. In fact, in practice, I’ve spent way more time fixing problems resulting from AutoMapper, or in discussions about all the points I’ve written in this article.

The only time when automatic mapping can really save time is when you have hundreds of DTOs and you don’t want to manually map them one by one. This is a one-time job, and should be done by code generation, not runtime mapping. Code generation has been around for a long time, and is given particular prominence in “The Pragmatic Programmer” by Andrew Hunt and David Thomas.

In the .NET world, code generation was most famously used by the Entity Framework, when the old Database First approach used T4 templates to generate model classes based on database tables. While T4 templates never really became popular, and they are no longer supported by .NET Core, it is not a big deal to write some code to read classes from a DLL via Reflection and spit out mapping code based on their properties. In fact, I wouldn’t be surprised if something that does this already exists.

Update 22nd April 2021: It does! Cezary Piątek developed MappingGenerator, a Visual Studio plugin that quickly generates mapping logic from one DTO to another.

Code generation is preferable over runtime automapping because it’s done as a one-time, offline process. It does not affect build time or startup time, does not cause problems at runtime (the generated code still has to be compiled), and does not need additional libraries or dependencies within the project. The generated code can easily be read, debugged and tested. And if there are any properties that require custom logic that can’t be generated as part of this bulk process, you just take the generated code and manually add to it (which you would have done anyway with AutoMapper).

Conclusion

Laziness is often promoted as a good thing in software developers, because it makes them automate things. However, it’s not okay to automate things in a way that sacrifices validity, performance, maintainability or timeliness.

Further reading

This section was added on 22nd April 2021.

Gathering Net Salary Data with Puppeteer

Tax is one of those things that makes moving to a different country difficult, because it varies wildly between countries. How much do you need to earn in that country to maintain the same standard of living?

You can, of course, use an online salary calculator to understand how much net salary you’re left with after deducting tax and social security contributions, but this only lets you sample specific salaries and doesn’t really give you enough information to assess how the tax burden changes as you earn more. Importantly, you can’t use these tools to draw a graph for each country and compare.

Malta Salary Calculator by Darren Scerri

Fortunately, however, these tools have already done the heavy lifting by taking care of the complex calculations. To build a graph, all we really need to do is to take samples at regular intervals, say, every 1,000 Euros. Since that is very tedious to do by hand, we’ll use a browser automation tool to do this for us.

Enter Puppeteer

Puppeteer, as the homepage says, “is a Node library which provides a high-level API to control Chrome or Chromium”, which is pretty much what we need for this job. The homepage also provides the starter code we need to get going. In a new folder, run the following to install the puppeteer dependency:

npm i puppeteer

Then, create a new file (e.g. netsalary.js) and add the starter code from the Puppeteer homepage. We’ll use this as a starting point:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });

  await browser.close();
})();

Getting Salary Data for Malta

In this particular exercise, we’ll get the salary data for Malta using Darren Scerri’s Malta Salary Calculator, which is relatively easy to work with.

Before we write any code, we need to understand the dynamics of the calculator. We do this via the browser’s developer tools.

Whenever you change the value of the gross salary input field (which has the “salary” id in the HTML), a bunch of numbers get updated, including the yearly net salary (which has the “net-yearly-result” class), which is what we’re interested in.

Just by knowing how we can reach the relevant elements, we can write our first code to retrieve the input (gross salary) and output (yearly net salary) values to make sure we know what we’re doing:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('http://maltasalary.com/');
  
  // Gross salary
  const grossSalaryInput = await page.$("#salary");
  const grossSalary = await page.evaluate(element => element.value, grossSalaryInput);
  console.log('Gross salary: ', grossSalary);
  
  // Net salary
  const netSalaryElement = await page.$('.net-yearly-result');
  const netSalary = await page.evaluate(element => element.textContent, netSalaryElement);
  console.log('Net salary: ', netSalary);

  await browser.close();
})(); 

Here, we’re using the page.$() function to locate an element the same way we would using jQuery. Then we use the page.evaluate() function to get something from that element (in this case, the value of the input field). We do the same for the net salary, with the notable difference that in the page.evaluate() function, we get the textContent property of the element instead.

If we run this (node netsalary.js), we should get the same default values we see in the online salary calculator:

We managed to retrieve the gross and net salaries from the online calculator.

Text Entry

That was easy enough, but it used the default values that are present when the page is loaded. How do we manipulate the input field so that we can enter arbitrary gross salary values and later pick up the computed net salary?

The simplest way to do this is by simulating keyboard input as follows:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('http://maltasalary.com/');
  
  const grossSalary = 30000;
  
  // Gross salary - keyboard input
  await page.focus("#salary");
  
  for (var i = 0; i < 6; i++)
    await page.keyboard.press('Backspace');
  
  await page.keyboard.type(grossSalary.toString());
  
  // Net salary
  const netSalaryElement = await page.$('.net-yearly-result');
  const netSalary = await page.evaluate(element => element.textContent, netSalaryElement);
  console.log('Net salary: ', netSalary);

  await browser.close();
})(); 

Here, we:

  1. Focus the input field, so that whatever we type goes in there.
  2. Press backspace six times to erase any existing gross salary in the field (if you check the online calculator, you’ll see it can take up to six digits).
  3. Type in the string version of our gross salary, which is a hardcoded constant with a value of 30,000.

The result I get when I run this matches what the online calculator gives me. I guess I must be doing something right for once in my life.

Net salary:  22,805.44

Pulling Net Salary Data in a Range

So now we know how to enter a gross salary and read out the corresponding net salary. How do we do this at regular intervals within a range (e.g. every 1,000 Euros between 15,000 and 140,000)? Easy. We write a loop.

In practice, there’s a little timing issue between iterations, so I also needed to nick a very handy sleep function off Stack Overflow and put a very short delay after doing the keyboard input, to give it time to update the output values.

const puppeteer = require('puppeteer');

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('http://maltasalary.com/');
  
  console.log('Gross Net');
  
  for (var grossSalary = 15000; grossSalary <= 140000; grossSalary += 1000) {
    // Gross salary - keyboard input
    await page.focus("#salary");
  
    for (var i = 0; i < 6; i++)
      await page.keyboard.press('Backspace');
  
    await page.keyboard.type(grossSalary.toString());
    await sleep(10);
  
    // Net salary
    const netSalaryElement = await page.$('.net-yearly-result');
    const netSalary = await page.evaluate(element => element.textContent, netSalaryElement);

    console.log(grossSalary, netSalary);
  }

  await browser.close();
})(); 

This has the effect of outputting a pair of headings (“Gross Net”) followed by gross and net salary pairs:

Outputting the gross and net salaries in steps of 1,000 Euros (gross) at a time.

Making a Graph

Now that we have a program that spits out pairs of gross and net salaries, we can make a graph out of this data. First, we dump all this into a file.

node netsalary.js > malta.csv

Although this is technically not really CSV data, it’s still very easy to open in spreadsheet software. For instance, when you open this file using LibreOffice Calc, you get the Text Import screen where you can choose to use space as the separator. This makes things easier given that the net salaries contain commas.
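
Alternatively, if you’d rather skip the import dialog entirely, the space-separated output is easy to turn into a proper CSV with a few lines of Node. The figures below are illustrative placeholders, not real calculator output:

```javascript
// Sample of the program's output: a heading line, then space-separated pairs.
const output = `Gross Net
15000 13,772.00
16000 14,544.00`;

// Convert to comma-separated CSV, stripping the thousands separators
// from the net value so spreadsheet software parses it as a number.
const csv = output
  .trim()
  .split('\n')
  .map(line => {
    const [gross, net] = line.split(' ');
    return `${gross},${net.replace(/,/g, '')}`;
  })
  .join('\n');

console.log(csv);
```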

Choose Space as the separator to load the data correctly.

Once the data is in a spreadsheet, producing a chart is a relatively simple matter:

Graph showing how net salary changes with gross salary in Malta.

Now, this graph might look a little lonely, but you can already gather interesting insight by noticing its gradient and the fact that it isn’t entirely straight.

After doing this exercise for multiple countries, it’s fascinating to see how their lines compare when plotted on the same chart.

Aside from the allure of data analysis, I hope this article served to show how easy it is to use Puppeteer to perform simple browser automation, beyond the obvious UI automation testing.

Jack of All Trades, Master of None

“Jack of all trades, master of none,” I’ve heard IT industry professionals scoff arrogantly several times in the past. That was their judgement of polyglot programmers, full stack developers, or any other people who had dabbled in more than one area of the kind of work we do.

Full stack. Heh heh. I took this picture at an eatery in San Francisco in February 2017.

I get where they’re coming from. Our industry is a very complicated one, and it’s really hard to learn much of anything if you don’t focus. Whether we’re talking about backend, frontend, databases, NoSQL and so on, there is an overwhelming number of technologies to discover and learn, and the information overload on the internet doesn’t make them any easier to digest.

The conventional wisdom is to be a “jack of all trades, master of one”, meaning you learn a number of different things but try to excel at at least one of them. This is great advice, but I’ve very rarely seen it happen in practice. People tend to either specialise in one thing in depth, or have a superficial knowledge of different things. Which of these would you choose?

Being just a master of one is a real problem I’ve seen for a long time. People who have only focused on one programming paradigm throughout their training and career tend to have trouble thinking outside the box and finding different ways to solve problems. Backend and frontend skills don’t easily transfer across, and most full stack developers are typically much stronger at one of these than the other. Most developers don’t understand enough about security, infrastructure and architecture.

The way I see it, that’s pretty bad. A developer who knows nothing about servers is a bit like a centre-forward who is so good that he can dribble past the opposing team’s defence, but fails to score every time.

That’s why even a semi-decent University education in this field doesn’t just teach programming. Learning CPU architecture, systems programming, Assembly language or C gives an idea of what happens under the hood and teaches a certain appreciation of resources and efficiency, a perspective that people with experience only in high-level languages often dismiss. Having a basic grasp of business and management prevents developers from idolising their code (“clean code”, anyone?) and helps them focus on solving real problems. Understanding a little about infrastructure helps them better understand architectural and security concerns that code-focused developers will often ignore.

Universities, in fact, produce instances of “jack of all trades, master of none”. Let’s face it: when you graduate, although your scores might give you confidence, you don’t really know much of anything. But thanks to your holistic training, you’re able to understand a variety of different problems, and gradually get a deeper understanding of what you need to solve them.

I also think there is a place for the “master of one” type of people who specialise very strongly in one area and mostly ignore the rest. I imagine these would be academics, or R&D specialists in big companies like Google. But, as much as we like drawing inspiration and borrowing ideas from successful companies, we always have to keep the context in mind. Who are we kidding? Most of us mere mortals don’t work at companies anything like Google.

So what I’m saying here is: it’s not bad to be a “jack of all trades, master of none” in our field. It is obviously better if you’re also “master of one”, but if I had to choose, I’d be (or hire) the one with the broad but superficial knowledge. Because when you are aware that something exists, then it’s not a huge leap to research it in more detail and get to the depth you need. It’s actually a good strategy to learn depth on an as-needed basis, especially given that you’ll never have enough time to learn everything in depth.

As in life, I’d rather know a little bit about many things that interest me, than bury myself in one thing and otherwise give the impression that I’ve been living under a rock.

"You don't learn to walk by following rules. You learn by doing, and by falling over." — Richard Branson