The Ninth Anniversary

Another year has gone by. Considering that I’m supposed to be retired from tech blogging, that I moved country this year, and that I wrote extremely detailed walkthroughs of Day of the Tentacle and Menzoberranzan, I’m surprised that I managed to write as much as I did.

Among the 26 articles I’ve published over the past year are:

Moving away from .NET has clearly opened up the way to more interesting things. Given that the three most popular articles of the last three months were all written in the past year, I like to think that at least some of what I’ve written has appealed to a wide audience and is helping people solve problems.

I’m not sure what the tenth year will be like for Gigi Labs, but I hope it will be fun!

GoLang: Using defer for Scope Bound Resource Management

Most programming languages today provide some way to bind the lifetime of a resource (for instance, a file handle) to a scope within a program, and implicitly clean it up when that scope ends. We can use destructors in C++, using blocks in C#, or with statements in Python. In Go, we use defer. As far as I can tell, this pattern originates in C++ and is called either Resource Acquisition Is Initialization (RAII) or Scope-Bound Resource Management (SBRM). It has various applications that range from resource deallocation (such as file handling, smart pointers, or locks) to scoped utility functions, as I’ve shown in my 2016 article, “Scope Bound Resource Management in C#”.

To understand what we’re talking about and why it’s useful, we first need to start with life without SBRM. Consider a simple program where we read the contents of a file:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	file, err := os.Open("example.txt") // open the file
	if err != nil {
		fmt.Println("Failed to open file!")
		return
	}

	data := make([]byte, 4)
	_, err = file.Read(data) // read 4 bytes from the file
	if err != nil {
		fmt.Println("Failed to read the file!")
		return
	}

	fmt.Println(string(data)) // print the data from the file

	file.Close() // close the file

	time.Sleep(time.Second) // pretend to do other work afterwards
}

Here, we’re opening a file, doing something with it, and then (like good citizens) closing it. We’re also doing something afterwards. However, this is in itself quite error-prone. Here are a few things that could happen which would jeopardise that Close() call:

  • We might forget to call Close() entirely.
  • An early return (e.g. due to an error) might skip the call to Close().
  • A more complex function with multiple branching might not reach Close() due to a mistake in the logic.

We can improve the situation by using defer. Putting the Close() call in a defer statement makes it run at the end of the function, whether there was an error or not. To illustrate, we’ll try to open a file that doesn’t exist, and use a Println() instead of the actual Close() to be able to see some output:

package main

import (
	"fmt"
	"os"
)

func main() {
	_, err := os.Open("nonexistent.txt") // open the file
	defer fmt.Println("Closing")
	if err != nil {
		fmt.Println("Failed to open file!")
		return
	}
}

Because the deferred statement runs at the end of the function in any case, we see “Closing” in the output:

Failed to open file!
Closing
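
Applying the same idea to the original file-reading program, we defer the Close() call right after successfully opening the file. Here’s a minimal sketch using the same example.txt as before:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	file, err := os.Open("example.txt") // open the file
	if err != nil {
		fmt.Println("Failed to open file!")
		return
	}
	defer file.Close() // runs when main() returns, even via an early return below

	data := make([]byte, 4)
	_, err = file.Read(data) // read 4 bytes from the file
	if err != nil {
		fmt.Println("Failed to read the file!")
		return // the deferred Close() still runs
	}

	fmt.Println(string(data)) // print the data from the file

	time.Sleep(time.Second) // pretend to do other work afterwards
}

This way, none of the pitfalls listed earlier can prevent the file from being closed.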

defer is useful to ensure resources are cleaned up, but it’s not as good as SBRM constructs from other languages. One drawback is that there’s no actual requirement to use defer when allocating a resource, whereas something like C#’s using block ensures that anything allocated with it gets disposed at the end of its scope.

Another disadvantage is that defer is function-scoped only. Let’s imagine we have this program where we do a series of time-consuming tasks:

package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println("Doing hard work...")

	time.Sleep(time.Second)

	fmt.Println("Doing harder work...")

	time.Sleep(2 * time.Second)

	fmt.Println("Doing even harder work...")

	time.Sleep(3 * time.Second)

	fmt.Println("Finished!")
}

We’d like to benchmark each step. In “Scope Bound Resource Management in C#”, I was able to wrap statements in a using block with a utility ScopedTimer that I created. Like C#, Go has blocks based on curly brackets, so let’s try and (ab)use them to measure the time taken by one of the tasks:

package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println("Doing hard work...")

	{
		startTime := time.Now()
		defer fmt.Printf("Hard work took %s", time.Since(startTime))
		time.Sleep(time.Second)
	}

	fmt.Println("Doing harder work...")

	time.Sleep(2 * time.Second)

	fmt.Println("Doing even harder work...")

	time.Sleep(3 * time.Second)

	fmt.Println("Finished!")
}

The output is:

Doing hard work...
Doing harder work...
Doing even harder work...
Finished!
Hard work took 148ns

Two things went wrong here:

  • The benchmark measured 148 nanoseconds for a task that took one second! That’s because time.Since(startTime) was evaluated immediately when the defer statement was encountered, not when the deferred call actually ran (see the short sketch after this list).
  • The benchmark for the first task only got printed at the end of the entire function. That’s because defer runs at the end of the function, not at the end of the current scope.
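
This evaluation behaviour is easy to see in isolation. In the minimal sketch below, the deferred Println() prints “x is 1” rather than “x is 2”, because its arguments are evaluated at the moment the defer statement is encountered, not when the deferred call runs:

package main

import "fmt"

func main() {
	x := 1
	defer fmt.Println("x is", x) // arguments evaluated here, so this prints "x is 1" when main() returns
	x = 2
}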

We can fix the first problem by wrapping the deferred statement in an anonymous function that we invoke immediately (which in the JavaScript world would be called an Immediately Invoked Function Expression, or IIFE):

// ...

	{
		startTime := time.Now()
		defer func() {
			fmt.Printf("Hard work took %s", time.Since(startTime))
		}()
		time.Sleep(time.Second)
	}

// ...

We now get a benchmark of about 6 seconds, simply because defer is still running at the end of the function:

Doing hard work...
Doing harder work...
Doing even harder work...
Finished!
Hard work took 6.002184229s

To fix this benchmark, we have to fix the second problem, which is that defer runs at the end of the function. What we want is to use defer to measure the duration of each task inside the function. We have a number of ways to do this, but since defer is function scoped, they all involve the use of functions.

The first option is to break up main() into separate functions for each task:

package main

import (
	"fmt"
	"time"
)

func runTask1() {
	startTime := time.Now()
	defer func() {
		fmt.Printf("Hard work took %s\n", time.Since(startTime))
	}()
	time.Sleep(time.Second)
}

func runTask2() {
	startTime := time.Now()
	defer func() {
		fmt.Printf("Harder work took %s\n", time.Since(startTime))
	}()
	time.Sleep(2 * time.Second)
}

func runTask3() {
	startTime := time.Now()
	defer func() {
		fmt.Printf("Even harder work took %s\n", time.Since(startTime))
	}()
	time.Sleep(3 * time.Second)
}

func main() {
	fmt.Println("Doing hard work...")

	runTask1()

	fmt.Println("Doing harder work...")

	runTask2()

	fmt.Println("Doing even harder work...")

	runTask3()

	fmt.Println("Finished!")
}

This does produce correct results:

Doing hard work...
Hard work took 1.000149001s
Doing harder work...
Harder work took 2.001123261s
Doing even harder work...
Even harder work took 3.000039148s
Finished!

However:

  • It is quite verbose.
  • It duplicates all the benchmarking logic.
  • Although many people advocate for smaller functions, I find it easier to read longer functions if the operations are sequential and there’s no duplication, rather than hopping across several functions to understand the logic.

Another way we could do this is by retaining the original structure of main(), but using IIFEs instead of curly brackets to delineate the scope of each task:

package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println("Doing hard work...")

	func() {
		startTime := time.Now()
		defer func() {
			fmt.Printf("Hard work took %s\n", time.Since(startTime))
		}()
		time.Sleep(time.Second)
	}()

	fmt.Println("Doing harder work...")

	func() {
		startTime := time.Now()
		defer func() {
			fmt.Printf("Harder work took %s\n", time.Since(startTime))
		}()
		time.Sleep(2 * time.Second)
	}()

	fmt.Println("Doing even harder work...")

	func() {
		startTime := time.Now()
		defer func() {
			fmt.Printf("Even harder work took %s\n", time.Since(startTime))
		}()
		time.Sleep(3 * time.Second)
	}()

	fmt.Println("Finished!")
}

It works just as well:

Doing hard work...
Hard work took 1.000069185s
Doing harder work...
Harder work took 2.001031904s
Doing even harder work...
Even harder work took 3.001086566s
Finished!

This approach is interesting because we actually managed to create scopes inside a function where defer could operate. All we did was put each task and its respective benchmarking logic inside an anonymous function and execute it right away. So the sequential code works just the same whether this anonymous function is there or not; it only makes a difference for defer.

Of course, we are still duplicating code in a very uncivilised way here, so we’ll move on to the third approach, which is simply to implement the benchmarking logic in a helper function and use it to execute the task itself:

package main

import (
	"fmt"
	"time"
)

func runBenchmarked(actionName string, doAction func()) {
	fmt.Printf("Doing %s...\n", actionName)
	startTime := time.Now()
	defer func() {
		fmt.Printf("%s took %s\n", actionName, time.Since(startTime))
	}()
	doAction()
}

func main() {
	runBenchmarked("hard work", func() {
		time.Sleep(time.Second)
	})

	runBenchmarked("harder work", func() {
		time.Sleep(2 * time.Second)
	})

	runBenchmarked("even harder work", func() {
		time.Sleep(3 * time.Second)
	})

	fmt.Println("Finished!")
}

The runBenchmarked() function takes care of everything about each task: it prints a message when it’s about to start, executes the task itself, and prints the time it took using the same defer statement we’ve been using for benchmarking. To do this, it takes the name of the task (as a string) as well as the task itself (as a callback function).

Then, in main(), all we need to do is call runBenchmarked() and pass the name of the task and the task itself. This results in the code being brief, free of duplication, and nicely scoped, which I believe is the closest we can get in Go to the SBRM constructs of other languages. The output shows that this works just as well:

Doing hard work...
hard work took 1.000901126s
Doing harder work...
harder work took 2.001077689s
Doing even harder work...
even harder work took 3.001477287s
Finished!
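
As a side note, a related idiom that often crops up in Go code is to have the helper return the function to defer: the start time is captured when the helper is called, and the elapsed time is printed when the deferred call runs at the end of the enclosing function. Here’s a minimal sketch (the trackTime and doHardWork names are purely illustrative):

package main

import (
	"fmt"
	"time"
)

// trackTime is a hypothetical helper: calling it captures the start time,
// and deferring the function it returns prints the elapsed time when the
// enclosing function ends.
func trackTime(name string) func() {
	start := time.Now()
	return func() {
		fmt.Printf("%s took %s\n", name, time.Since(start))
	}
}

func doHardWork() {
	defer trackTime("hard work")() // trackTime(...) runs now; the returned function runs at the end
	time.Sleep(time.Second)
}

func main() {
	doHardWork()
}

The trailing pair of parentheses matters: trackTime("hard work") executes immediately to capture the start time, while the function it returns is what actually gets deferred. Since this still measures whole functions, per-task timing inside main() still needs one of the approaches shown above.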

Conclusion

defer in Go provides some degree of SBRM support for scoped cleanup or utility purposes. However, it suffers from the following drawbacks:

  • It does not enforce implicit cleanup of allocated resources as similar constructs in other languages do.
  • The arguments of a deferred call are evaluated immediately, so anything that should be evaluated at the end of the scope has to be wrapped in an IIFE.
  • It is function-scoped, so using defer for a limited/block scope inside a function requires wrapping that scope in another function.

Go To Line Number in Visual Studio Code

If you want to go to a specific line number in a file, there are a couple of ways to do that with Visual Studio Code, depending on whether you’re already in that file or not.

Same File

Go To Line in the same file.

If you’ve got a file already open and want to hop to a specific line number in it, just use the handy Go To Line shortcut: Ctrl+G (Windows/Linux) or Control+G (Mac). Then enter the line number in the prompt.

Different File

Going to a specific line in another file.

If you want to go to a specific line in another file, i.e. one you don’t already have open in front of you, there are a couple of things you can do.

The first is to open that file and use the same “Go To Line” shortcut from the previous section. There are several ways to open the file, but the quickest is probably to use the handy Ctrl+P (Windows/Linux) or Command+P (Mac) shortcut to search for the filename.

The second way is to enter the line number along with the filename in the same Ctrl+P/Command+P prompt! To do this, type the filename in full, followed by a colon (:), and then the line number. That will open the file at the specified line number.
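
For example, assuming a file called main.go, typing main.go:42 in that prompt opens main.go with the cursor on line 42.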

Getting Started with Rust using VS Code

I’m not yet familiar with Rust, but now is as good a time as any to start learning it. A couple of 2020 articles, “What is Rust and why is it so popular?” at the Stack Overflow Blog and “Why Discord is switching from Go to Rust” by Discord, mention a few aspects that make Rust appealing to developers.

Usually, the first thing I do when starting to work with a new programming language is to figure out how to debug it. So in this article I’m going to show you how to get yourself set up to debug Rust using Visual Studio Code (VS Code), and then you can use that as a starting point and take your learning journey in any direction that is comfortable for you.

Installing Rust

The first thing you need is to install the Rust language itself and associated tools. Simply follow the Installation part of the Rust book to set up Rust for your operating system.

The cargo tool, which comes with the Rust installation, is vital: it is used for everything from building and running Rust source code to managing packages (“crates”, in the Rust ecosystem).

Setting Up VS Code for Rust

If you don’t already have VS Code, head to its website and download it.

Then, you’ll need to install two extensions, based on the IDE Integration Using rust-analyzer section of the Rust book, and the Debugging section of the Rust with Visual Studio Code documentation:

  • rust-analyzer gives you most of the IDE integration you need between Rust and VS Code (e.g. Intellisense)
  • Either the Microsoft C/C++ extension (if you’re on Windows) or CodeLLDB (for Linux/Mac) – this gives you the ability to actually debug Rust code

Add the rust-analyzer extension from the Extensions tab in VS Code for basic IDE support.

Creating a New Rust Project

Use cargo to create a new Rust project as follows:

$ cargo new rust1
     Created binary (application) `rust1` package

Then, open the newly created rust1 folder using either the terminal (as follows) or VS Code’s File -> Open Folder… menu option.

$ cd rust1
$ code .

Note that if you’re following the “Hello, World!” part of the Rust book and created a Rust program without cargo, you won’t be able to debug it this way, since the launch configurations we’ll generate below are based on Cargo.toml.

Debugging Rust with VS Code

You’ll find a “Hello world” program in src/main.rs under the folder you created (in this case rust1). Ensure you have that open in VS Code and then press F5. When prompted, select LLDB as the debugger to use.

After pressing F5, select the LLDB debugger.

At that point you get an error because you don’t have a launch configuration yet:

“Cannot start debugging because no launch configuration has been provided.”

But that’s alright, because once you click “OK”, you’re offered the possibility to have a launch configuration generated for you, which you should gratefully accept:

“Cargo.toml has been detected in this workspace. Would you like to generate launch configurations for its targets?” Click “Yes”.

When you click “Yes”, a file with launch configurations is generated for you:

Launch configurations are generated for you.

See also my earlier article, “Working with VS Code Launch Configurations”, if you want to do more with these launch configurations in future.

Now you can switch back to main.rs and press F5 to debug the code. Click next to a line number to set a breakpoint if you want to stop anywhere. Try also hovering over parts of the code to get rich information about them.

Debugging Rust code: stopping at a breakpoint, and analysing the println macro.

Summary

If you prefer to debug Rust code in an IDE rather than doing everything in the terminal, consider using VS Code. After installing both Rust and VS Code, all you need to do is install a couple of extensions, create a Rust project with cargo, and generate launch configurations for it. Then you can debug your Rust code and benefit from all the IDE comfort (e.g. Intellisense) you’re used to in other languages.