Scripting Backups with bash on Linux

Over the years, Linux has gone from being something exclusively for the hardcore tech-savvy to something accessible to all. Modern GUIs provide a friendly means for anybody to interact with the operating system, without needing to use the terminal.

However, the terminal remains one of the areas where Linux shines, and it’s a great way for power users to automate routine tasks. Taking backups is one such mundane, repetitive activity that can easily be automated with a shell script. In this article, we’ll see how to do this using the bash shell, which is the default in popular distributions such as Ubuntu.

Simple Folder Backup with Timestamp

Picture this: we have a couple of documents in our Documents folder, and we’d like to back up the entire folder and append a timestamp in the name. The output filename would look something like:

Documents-2020.01.21-17.25.tar.gz

The .tar.gz format is a common way to compress a collection of files on Linux. gzip (the .gz part) is a powerful compression algorithm, but it can only compress a single file, and not an entire folder. Therefore, a folder is typically packaged into a single .tar file (historically standing for “tape archive”). This tarball, as it’s sometimes called, is then compressed using gzip to produce the resulting .tar.gz file.
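To make the two-step process concrete, this is what it looks like with separate commands (a sketch, run from the home directory; we’ll shortly see that tar can do both steps in one go):

$ tar -cf Documents.tar Documents
$ gzip Documents.tar

The second command compresses the tarball in place, replacing Documents.tar with Documents.tar.gz.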

The timestamp format above is something arbitrary, but I’ve been using this myself for many years and it has a number of advantages:

  1. It loosely follows the ISO 8601 format, starting with the year, and thus avoiding confusion between British (DD/MM) and American (MM/DD) date notation.
  2. For the same reason, it allows easy time-based sorting of backup files within a directory.
  3. The use of dots and dashes comes in handy when the filename is too long and is wrapped onto multiple lines. Related parts (e.g. the year, month and day of the date) stick together, but separate parts (e.g. the date and time) can be broken onto separate lines. Also, this takes up less (screen) space than other methods (e.g. using underscores).

To generate such a timestamp, we can use the Linux date command:

$ date
Tue 21 Jan 2020 05:35:34 PM UTC

That gives us the date, but it’s not in the format we want. For that, we need to give it a parameter with the format string:

$ date +"%Y.%m.%d-%H.%M.%S"
2020.01.21-17.37.14

Much better! We’ll use this shortly.

Let’s create a backups folder in the home directory as a destination folder for our backups:

$ mkdir ~/backups

We can then use the tar command to bundle our Documents folder into a single file (-c creates an archive, and -f specifies the archive filename):

$ tar -cf ~/backups/Documents.tar ~/Documents

This works well as long as you run it from the home directory. But if you run it from somewhere else, you will see an unexpected structure within the resulting .tar file: instead of putting the Documents folder at the top level, tar replicates the entire folder structure leading up to it from the root folder. We can get around this problem by using the -C switch and specifying that it should run from the home folder:

$ tar -C ~ -cf ~/backups/Documents.tar Documents

This solves the problem: the Documents folder now sits at the top level of the archive.

Now that we’ve packaged the Documents folder into a .tar file, let’s compress it. We could do this using the gzip command, but as it turns out, the tar command itself has a -z switch that produces a gzipped tarball. After adding that and a -v (verbose) switch, and changing the extension, we end up with:

$ tar -C ~ -zcvf ~/backups/Documents.tar.gz Documents
Documents/
Documents/some-document.odt
Documents/tasks.txt

All we have left is to add a timestamp to the filename, so that we can distinguish between different backup files and easily identify when those backups were taken.

We have already seen how we can produce a timestamp separately using the date command. Fortunately, bash supports something called command substitution, which allows you to embed the output of a command (in this case date) in another command (in this case tar). It’s done by enclosing the first command between $( and ):

$ tar -C ~ -zcvf ~/backups/Documents-$(date +"%Y.%m.%d-%H.%M.%S").tar.gz Documents

This produces the intended effect: a .tar.gz file with the timestamp embedded in its name, e.g. Documents-2020.01.21-17.25.tar.gz.

Adding the Hostname

If you have multiple computers, you might want to back up the same folder (e.g. Desktop) on all of them, but still be able to distinguish between them. For this we can again use command substitution to insert the hostname in the output filename. This can be done either with the hostname command or using uname -n:

$ tar -C ~ -zcvf ~/backups/Desktop-$(uname -n)-$(date +"%Y.%m.%d-%H.%M.%S").tar.gz Desktop

Creating Shell Scripts

Although the backup commands we have seen are one-liners, chances are that you don’t want to type them in again every time you want to take a backup. For this, you can create a file with a .sh extension (e.g. backup-desktop.sh) and put the command in there.
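For example, backup-desktop.sh could contain the Desktop backup command from the previous section, preceded by a shebang line that tells the system to run the script with bash:

#!/bin/bash
tar -C ~ -zcvf ~/backups/Desktop-$(uname -n)-$(date +"%Y.%m.%d-%H.%M.%S").tar.gz Desktop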

However, if you try to run it (always preceded by a ./), it doesn’t actually work: the shell refuses with a “Permission denied” error.

A directory listing (ls -l) shows why this is happening: the script is missing execute permissions, which we can add using the chmod command:

$ chmod 755 backup-desktop.sh

In short, that 755 value is an octal representation of the file’s permissions, equivalent to rwxr-xr-x: read, write and execute for the owner, and read and execute for everyone else (see Chmod Calculator or run man chmod for more information on how this works).

A directory listing now shows that the file has execution (x) permissions, and the terminal actually highlights the file in bold green to show that it is executable:
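(The owner, size and date below are illustrative; yours will differ.)

$ ls -l backup-desktop.sh
-rwxr-xr-x 1 daniel daniel 120 Jan 21 17:40 backup-desktop.sh

Note how rwxr-xr-x corresponds to the 755 value we set earlier.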

Running Multiple Scripts

As we start to back up more folders, we gradually end up with more of these tar commands. While we could put them one after another in the same file, this would mean that we always back up everything, and we thus lose the flexibility to back up individual folders independently (handy when the folders are large and backups take a while).

For this, it’s useful to keep separate scripts for each folder, but then have one script to run them all (see what I did there?).

There are many ways to run multiple commands in a bash script, including simply putting them on separate lines one after another, or using one of several operators designed to allow multiple commands to be run on the same line.

While the choice of which to use is mostly arbitrary, there are subtle differences in how they work. I tend to favour the && operator because it will not carry out further commands if one has failed, reducing the risk that a failure goes unnoticed:

$ ./backup-desktop.sh && ./backup-documents.sh

Putting this in a file called backup-all.sh and executing it has the effect of backing up both the Desktop and the Documents folders. If I only want to back up one of them, I can still do that using the corresponding shell script.
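For reference, the complete backup-all.sh is then just:

#!/bin/bash
./backup-desktop.sh && ./backup-documents.sh

(Remember to give this script execute permissions with chmod, just like the others.)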

Conclusion

We have thus far seen how to:

  • Package a folder in a compressed file
  • Include a timestamp in the filename
  • Include the hostname in the filename
  • Create bash scripts for these commands
  • Combine multiple shell scripts into a single one

This helps greatly in automating the execution of backups, but you can always take it further. Here are some ideas:

  • Use cron to take periodic backups (e.g. daily or weekly) – see the example crontab entry after this list
  • Transfer the backup files off your computer – this really depends on what you are comfortable with, as it could be an external hard drive, another Linux machine (e.g. using scp), or a cloud service like AWS S3.
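For example, the following crontab entry (added via crontab -e) would run the combined backup script every night at 2 a.m. The script path here is hypothetical – point it at wherever you keep your scripts:

0 2 * * * /home/daniel/scripts/backup-all.sh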

How to Communicate with Windows Machines from Linux

Transitioning from Windows to Linux is a pleasant experience, but not one for the faint-hearted. There are a lot of things that can take a while to learn: the different filesystem structure, new applications, and the terminal.

If you’ve been stuck with Windows for a long time, chances are that you are not going to switch to Linux entirely in one day and forget about Windows. You probably still want to access resources on that Windows machine, and that, for me, was one of the biggest hassles. Not because it is difficult, but because there are several steps along the way (on both the Windows and Linux sides), and it is really easy to miss one.

The commands in this article have been executed on Kubuntu, and are likely to work on any similar Debian-based distribution.

Ping

Let’s say you’re running an SVN server on your Windows machine, and you’d like to communicate with it from Linux. In order to find that Windows machine, you could try looking up its IP. However, home networks typically use DHCP, which means that a machine’s IP tends to change over time. So while using the IP could work right now, you will likely have to update your configuration again tomorrow.

You could allocate a static IP for this, but a much easier option is to simply look up the name of the machine instead of the IP. You can find out the machine’s name using the hostname command, which works on both Windows and Linux. Once we know the name of the Windows machine, we can try pinging it from Linux to see whether we can reach it:

daniel@orion:~$ ping windowspc
ping: windowspc: No address associated with hostname

That does not look very promising. Unfortunately, Linux machines can’t resolve Windows machine names (which rely on NetBIOS/WINS rather than regular DNS) out of the box. In order to get this working, we first need to install a couple of packages:

daniel@orion:~$ sudo apt install winbind libnss-winbind

After that, we need to edit the /etc/nsswitch.conf file, which on a fresh Kubuntu installation would look something like this:

# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd:         files systemd
group:          files systemd
shadow:         files
gshadow:        files

hosts:          files mdns4_minimal [NOTFOUND=return] dns
networks:       files

protocols:      db files
services:       db files
ethers:         db files
rpc:            db files

netgroup:       nis

Use whichever editor you prefer to update the hosts line above to this:

hosts:          files mdns4_minimal [NOTFOUND=return] dns wins mdns4

If you try pinging again, it should now work. No restart is necessary.

daniel@orion:~$ ping windowspc
PING windowspc (192.168.1.73) 56(84) bytes of data.
64 bytes from 192.168.1.73 (192.168.1.73): icmp_seq=1 ttl=128 time=614 ms
64 bytes from 192.168.1.73 (192.168.1.73): icmp_seq=2 ttl=128 time=519 ms
64 bytes from 192.168.1.73 (192.168.1.73): icmp_seq=3 ttl=128 time=441 ms
64 bytes from 192.168.1.73 (192.168.1.73): icmp_seq=4 ttl=128 time=55.2 ms
64 bytes from 192.168.1.73 (192.168.1.73): icmp_seq=5 ttl=128 time=2.67 ms
64 bytes from 192.168.1.73 (192.168.1.73): icmp_seq=6 ttl=128 time=510 ms
64 bytes from 192.168.1.73 (192.168.1.73): icmp_seq=7 ttl=128 time=430 ms
64 bytes from 192.168.1.73 (192.168.1.73): icmp_seq=8 ttl=128 time=48.9 ms
^C
--- windowspc ping statistics ---
9 packets transmitted, 8 received, 11.1111% packet loss, time 12630ms
rtt min/avg/max/mdev = 2.671/327.520/613.590/232.503 ms

Resolving the hostname can sometimes take time. If you’re using a client application that can’t seem to resolve the Windows machine name, give it a few seconds, or try pinging it again. It should work after that.

Update 5th December 2019: If this doesn’t work, there are a couple of things I’ve seen recommended. One is to move the wins entry in /etc/nsswitch.conf to right after the files entry. Another is to try restarting the winbind service and see whether it makes a difference: sudo systemctl restart winbind.

Windows File Share

Another important aspect to interoperability between Windows and Linux is how to pass files between them. Fortunately, Linux comes with software called Samba that allows it to see and work with Windows file shares.

Before we do this, we need to create a shared folder on Windows. To do this, create a new folder (e.g. named share) on your Windows machine, then right click on it and select Properties. In the Sharing tab, there’s a button that says Advanced Sharing:

Click on it, and in the next modal window, check the box that says Share this folder. You can then OK all the way out without making further changes.

Through Kubuntu’s file manager application, called Dolphin, you can navigate to any Windows file shares visible on the network, even if you haven’t done the setup in the previous section.

To do this, select Network from the left, then double-click Shared Folders (SMB):

Next, select Workgroup:

You should now be able to see any Windows or Linux machines. Select the icon with the name of your Windows machine.

You should be prompted for credentials; enter the same username and password that you use to log in to Windows.

We can now see the shared folders on the Windows machine, including the shared folder we created earlier:

If, on the Windows side, we drop a file into that share folder, we can see it from Linux, and we are perfectly able to copy it over:

Unfortunately however, the same is not yet true in reverse. If we try to copy a file from the Linux machine into share, we get a lousy Access denied error:

It seems to be a permissions issue, so let’s go back on Windows and see what we might have missed. If we right click the folder and select Properties, we notice that the folder appears to be read-only:

This in fact has nothing to do with the problem, and attempting to change it has no effect.

Instead, what we need to do is go back to that Advanced Sharing modal window (via the Advanced Sharing button in the Properties’ Sharing tab). Click the Permissions button to see who has access to that folder. It seems that Everyone is listed, but with Read access only. Resist the temptation (and the internet advice) to give full access to Everyone; instead, look up the user you normally use to log into Windows:

You can then give your user full control:

You can now drop a file into the share folder from Linux without any problems:

Summary

Talking to a Windows machine from Linux is possible, but slightly tricky to set up.

In order for client applications on Linux to talk to server applications on Windows, install the winbind and libnss-winbind packages, and edit /etc/nsswitch.conf to enable name resolution for Windows machines. Use ping to verify that the hostname is being resolved.

To share files between Windows and Linux, set up a shared folder on Windows. Add your Windows user to the list of people who can access the folder, giving it both read and write permissions. Then, from Linux, use the file manager application’s existing Samba integration to reach and work with the shared folder.
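As an aside, if you prefer the terminal over the file manager, Samba’s command-line client can also list a machine’s shares (assuming the smbclient package is installed; windowspc and daniel are the example machine and user from this article):

daniel@orion:~$ smbclient -L //windowspc -U daniel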

Running Legacy Windows Programs on Linux with WINE

I have a few really old Windows programs from the Windows 95 era that I never ended up replacing. Nowadays, these are really hard to run on Windows 10. Ironically, it is quite easy to run them on Linux, thanks to WINE:

“Wine (originally an acronym for “Wine Is Not an Emulator”) is a compatibility layer capable of running Windows applications on several POSIX-compliant operating systems, such as Linux, macOS, & BSD. Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop.”

One such program is this Family Tree software that came with the July 2001 issue of PC Format magazine.

To run this, we first need to install WINE, which on Ubuntu (or similar) would work something like this:

sudo apt-get install wine

After popping in the PC Format CD containing the software, simply locate the autorun executable. Then run the wine command, passing this executable (in this case PCF124.exe) as an argument:
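wine PCF124.exe

(Run this from the directory where the CD is mounted; the exact mount point varies by system.)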

The autorun menu comes up just fine, even though it’s a Windows program.

Selecting Family Tree 2 from the menu runs the corresponding installer. Although this expects a Windows-like filesystem and writes to a Windows registry, WINE has no problem mapping these out.

Select the install location on what looks like a Windows filesystem.
Doesn’t this make you feel nostalgic?

When this finishes, the program is actually installed, and can be found and run from the application menu of whatever desktop environment you’re using (in my case, Plasma by KDE):

Running Family Tree 2.0, we get an error that says “Please install default printer”.

For some bizarre reason, this particular family tree software requires a printer to be installed, and will not work without one. While you probably won’t have this problem, for me it was a tough one that left me wondering for a while. I managed to solve it only by asking for help on Ask Ubuntu and getting an extremely insightful answer:

“When you install printer-driver-cups-pdf (or cups-pdf for Ubuntu 15.10 and earlier) a PDF printer is added which saves the printed files in ~/PDF/. All the printers installed in your Ubuntu OS also work from WINE, you don’t need to do anything about it.
But:
“If you just normally installed CUPS on your 64-bit Ubuntu (uname -m gives x86_64 if it is 64-bit), this won’t work when you run a 32-bit software like yours from 1995 presumably is. The solution in this case is to install the 32-bit CUPS library, so that 32-bit WINE is also able to find your printers:”

sudo apt install libcups2:i386

Sure enough, that worked when I did this on a virtual machine on another laptop, but not on this one. This time, I simply needed to install cups-pdf, because the CPU architecture is different.
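That is, on this machine the fix was simply:

sudo apt install cups-pdf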

Family Tree 2.0 is running on Linux Kubuntu 19.10, thanks to WINE.

As you can see, this Windows-95-era piece of software is now working flawlessly on Linux. Once this is done, don’t forget to eject the CD (the eject command in the terminal has been a fun discovery for me) to unmount it from the filesystem. If you need to uninstall a Windows program you installed via WINE, you can do so directly from your desktop environment’s application menu. And if you need to go deeper, WINE’s filesystem is located in the hidden .wine directory under your home folder.

The State of Drag and Drop in Linux

A few months ago, looking for a replacement for Windows (which always finds new ways to get on my nerves), I spent a couple of weeks playing with Linux Mint with MATE desktop. During this test drive, one of the annoyances I came across was the inability to drag a URL from Chromium’s address bar to create a link on the desktop. I literally ended up asking for help, and still didn’t figure it out.

Creating a URL shortcut on a Windows 10 desktop by dragging the padlock icon in Chrome

In Windows, this is something I’ve been doing for many, many years. It’s not rocket science. You drag the padlock icon next to the address bar onto your desktop and a shortcut is created, pointing to that URL.

Ubuntu 19.10

Since Ubuntu 19.10 was released a week and a half ago, I thought I’d try it out. The first thing I wanted to verify was that I could drag and drop links to the desktop. Ubuntu is one of the most popular and mature operating systems around. Surely it would support such a basic usability feature, right?

Ubuntu 19.10 doesn’t let you drag links to the desktop.

Well, it turns out that dragging links from the default browser, Firefox, to the desktop has no effect whatsoever. Odd, isn’t it? Let’s try dragging that link to some other folder instead.

We try dragging a link from Firefox to the Documents folder
“Drag and drop is not supported. An invalid drag type was used.”

That’s annoying. I mean, drag and drop is a really basic feature that has been around forever. Let’s try dragging a file from one folder to another… obviously that’s going to work, no?

It looks like it’s going to work, but it doesn’t.

As you drag the file, a little plus icon appears beneath the hand as if to tell you that something’s going to happen. Alas, however, this also has no effect.

And of course, dragging the file to the desktop similarly does not work:

Dragging the file to the desktop has no effect

So we can’t drag links from Firefox, and we can’t drag and drop files. Maybe we’ll have better luck with Chromium?

We try dragging a link from Chromium into the Documents folder
Once again, we get that “Drag and drop is not supported” failure.

So it seems, like someone hinted in that original question about drag and drop in Linux Mint, that this has nothing to do with the browser and is something related to the desktop environment.

Once again, I had to swallow that feeling of incompetence and ask for help with this. Aside from the usual Stack Overflow treatment of getting my question closed as a duplicate, one of the comments led to other Q&As that uncovered a bitter truth: that drag and drop support was intentionally removed. Why would anyone in their right state of mind do that?

Kubuntu 19.10

Incredulous, I decided to try the KDE flavour of Ubuntu — Kubuntu. Drag and drop a link from browser to desktop? No problem:

We drag the padlock icon next to the address bar to the desktop
A context menu appears, asking what we want to do with the URL. “Link Here” creates the equivalent of a desktop shortcut in Windows.
An icon is created on the desktop, leading to the webpage we wanted to keep track of.

Was that really so hard? I get it, there were reasons why GNOME decided to do away with desktop icons and the like. But surely there are better ways to solve the problem than to do away with a basic and essential usability feature.

A desktop environment without basic drag and drop support in… almost 2020… is just garbage.

Setting Up Elasticsearch on Linux Ubuntu

Elasticsearch is a lightning-fast and highly scalable search engine built on top of Apache Lucene. In this article, we’re going to see how we can quickly set it up on an Ubuntu Linux environment (using Ubuntu 16.10 here) to be able to play around with it. We do not cover configuring Elasticsearch or setting up a cluster. To set up Elasticsearch on Windows, see “Setting Up Elasticsearch and Kibana on Windows” instead.

Before we can set up Elasticsearch itself, we need Java. We can follow these instructions to set up Java on Ubuntu. Before proceeding, verify that the JAVA_HOME environment variable is set:

echo $JAVA_HOME

It is likely that you won’t see anything as a result of this command. That’s because while the Java setup instructions do set this environment variable, it does not get applied to your current session. Try opening a new terminal window or reboot the machine, and chances are that your JAVA_HOME will be set correctly. If not, you may have to set JAVA_HOME manually.
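If you do have to set it manually, export it from your shell startup file; the exact path depends on which Java package you installed (the one below is purely illustrative, for OpenJDK 8 on 64-bit Ubuntu):

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64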

Once Java is correctly set up (complete with the JAVA_HOME environment variable), we can proceed to set up Elasticsearch. By going to the Elasticsearch downloads page, we can download (among other things) the Debian package containing Elasticsearch.

We can now install the Debian package using dpkg. At the time of writing this article, the latest version of Elasticsearch is 5.4, so after opening a Terminal window based in the Downloads folder, we can use the following command to install Elasticsearch:

sudo dpkg -i elasticsearch-5.4.0.deb

Elasticsearch is now installed, but it is not yet running! So first, we’ll enable the Elasticsearch service so that it will start automatically when the machine is rebooted:

sudo systemctl enable elasticsearch.service

We can now start the Elasticsearch service.

sudo systemctl start elasticsearch.service

The Elasticsearch HTTP endpoint will need a few seconds before it is reachable. After that, we can verify that Elasticsearch is running either by going to localhost:9200 from a web browser, or by hitting that same endpoint using curl in the command line:

curl -X GET http://localhost:9200/

In either case, you should get a response with some JSON data about the Elasticsearch instance you’re running:
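{
  "name" : "6pAE96Q",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "5.4.0"
  },
  "tagline" : "You Know, for Search"
}

(Response trimmed; the node name is auto-generated and will differ on your machine.)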

We are now all set up to play around with Elasticsearch! Since we didn’t configure anything, we have a single instance with all default settings. If you’re planning to use Elasticsearch in a production environment, you will of course want to read up on configuring it properly and setting up a cluster to ensure that it can handle the use cases you need and that it can survive failure scenarios.

Setting up .NET Core on Linux

One of the biggest draws of .NET Core is the long-awaited promise of true cross-platform development. In this article, we’ll see how we can set up .NET Core on some flavours of Linux, and ensure that it works by running a simple console application.

Introduction

In general, if you want to run .NET Core on Linux, you should do the following before even starting development, to make sure it actually works:

  1. Install .NET Core itself.
  2. Create a simple .NET project.
  3. Build and run the application.

The steps to install .NET Core vary depending on the distribution you are using. Different distributions use different package managers (e.g. APT, RPM, YUM, DNF, etc) so you will often need to either add a .NET package source to your package manager’s configuration, or download binaries for .NET Core from Microsoft, before you can proceed to actually install .NET Core.

Microsoft’s Getting Started with .NET Core documentation lists a handful of supported Linux distributions, each with their own installation instructions. Unfortunately, this is not yet updated with the latest versions of several popular distributions. In fact, I have not been able to set up .NET Core in Ubuntu 17.04 (Zesty Zapus), Fedora 25, or CentOS 7. So in this article, we’ll focus on Ubuntu 16.10 (Yakkety Yak) and Linux Mint 18.1.

Unfortunately, these two are both Debian flavours, and both use the Ubuntu package server, so there is not much in the way of variety here. In any case, let’s proceed with the setup.

Installing .NET Core on Linux Ubuntu 16.10 (Yakkety Yak)

First, we need to follow the installation instructions in the documentation in order to add the .NET package source to APT’s package source configuration:

sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ yakkety main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 417A0893
sudo apt-get update

If all goes well, each of these commands should complete without errors.

With that done, we can install the .NET Core SDK:

sudo apt-get install dotnet-dev-1.0.1

Once the installation is complete, we can create and run a simple project. We can do this without writing any code ourselves, because the dotnet command provides means of generating project templates out of the box.

First, let’s create a directory for our application, and switch to it (note: the documentation provides an alternative way of doing this):

mkdir hello
cd hello

Then, we can create a simple “Hello World” console application in the current directory by running the following command:

dotnet new console

Then, with the following commands, we restore dependencies via NuGet, build the application, and run it:

dotnet restore
dotnet run

Here’s the output, so you can see that it actually worked:
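Hello World!

(Output trimmed to the final line; the restore and build steps print various progress messages first.)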

Installing .NET Core on Linux Mint 18.1

The same documentation page with the instructions to install .NET Core on Ubuntu also covers Linux Mint 17. Unfortunately, this doesn’t work for Linux Mint 18. However, you’ll notice that Ubuntu 14.04 and Linux Mint 17 share the same setup instructions. And this Stack Overflow answer shows that Ubuntu 16.04 and Linux Mint 18 also use the same setup. Thus:

sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 417A0893
sudo apt-get update

Then, like before, we install the .NET Core SDK:

sudo apt-get install dotnet-dev-1.0.1

And then, we can actually test this out:

mkdir hello
cd hello
dotnet new console
dotnet restore
dotnet run

We get our “Hello World”, so it works!

Conclusion

We’ve seen how to set up .NET Core on the Ubuntu and Mint distributions of Linux, which are very similar. Different distributions have different setup instructions, and it would be a real pain to cover all of them. The official documentation does provide installation instructions for a handful of popular distributions, but they are slow to update documentation, and do not at this time cover the latest versions.

Still, this should be enough to give you an idea of what it takes to set things up and run a simple application on Linux using .NET Core.

How to set up SDL2 on Linux

This article explains how to get started with SDL2 in Linux. For other SDL2 tutorials (including setting up on Windows), check out my SDL2 Tutorials at Programmer’s Ranch.

Using apt-get

If you’re on a Debian-based Linux distribution, you can use the APT package manager to easily install the SDL2 development libraries. From a terminal, install the libsdl2-dev package:

sudo apt-get install libsdl2-dev

Installing from source

If for whatever reason you can’t use a package manager, you’ll have to compile SDL2 from the source code. To do this, first download the SDL2 source archive from the SDL website.

After extracting the archive to a folder, cd to that folder and run:

./configure

When it’s done, run:

make all

Finally, run:

sudo make install
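If a program later fails to find the SDL2 shared library at runtime, you may need to refresh the dynamic linker’s cache (a common extra step after installing libraries from source):

sudo ldconfig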

Testing it out

To verify that you can compile an SDL2 program, use the following code (it’s the same as the one used in my “SDL2: Setting up SDL2 in Visual Studio 2010” article at Programmer’s Ranch):

#include <SDL2/SDL.h>

int main(int argc, char ** argv)
{
    // Initialise all SDL subsystems (video, audio, timers, etc.)
    SDL_Init(SDL_INIT_EVERYTHING);

    // Clean up and shut SDL down again
    SDL_Quit();

    return 0;
}

You can use vi or your favourite editor to create the source file.

To compile this (assuming it’s called sdl2minimal.c), use the following command:

gcc sdl2minimal.c -lSDL2 -lSDL2main -o sdl2minimal

We need to link in the SDL2 libraries, which is why we add the -lSDL2 -lSDL2main. Be aware that those start with a lowercase L, not a 1. The program should compile. It won’t show you anything if you run it, but now you know that you’re all set up to write SDL2 programs on Linux.
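Alternatively, assuming the sdl2-config helper script that ships with SDL2 is on your PATH, you can let it supply the appropriate compiler and linker flags:

gcc sdl2minimal.c $(sdl2-config --cflags --libs) -o sdl2minimal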
