GoLang Timestamps: A Cross-Platform Nightmare

Have you ever had one of those bugs where your tests failed in your Continuous Integration (CI) pipeline, but they worked just fine locally, and the only way to troubleshoot the problem was to add logs and wait significant amounts of time for each run to fail? Well, I had one of those this week, and I learned a thing or two in the process. I’m hoping to share that here for anyone else who might run into the same problem.

Time Precision Across Operating Systems

Let’s create a really simple Go program.

  1. Create a new folder.
  2. In that folder, run go mod init main
  3. Create a file called main.go and paste the following code:
package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println(time.Now())
}

Nothing special, right? It just prints out the current time. Let’s run it then.

The following is the output on my personal laptop running Linux:

$ go run main.go
2022-10-27 17:34:43.471537065 +0200 CEST m=+0.000016185

And the following is the output on my work machine, which is a Mac:

$ go run main.go
2022-10-27 17:35:27.349114 +0200 CEST m=+0.000181459

See the difference? The Linux output was precise to the nanosecond (9 fractional digits of a second), whereas the Mac output was precise to the microsecond (6 fractional digits of a second). This subtle difference was the root cause of my problem.

How Did It Happen?

I had written automated tests to verify some simple CRUD functionality. I would send the data to an API, and the API would store it in a PostgreSQL database. Then, I would retrieve the data and verify that it matched what I had sent. Sounds trivial? Yes, but it failed. It failed on the CI server where I couldn’t debug the problem as easily as I could on my machine.

The reason it worked on my Mac was that a timestamp field in PostgreSQL has microsecond precision. Coincidentally, that’s the same precision that I get for Go time on my Mac.

However, on the CI system as well as on a colleague’s WSL system, things worked differently:

  1. The test data, as well as the expected values to be compared later, are generated in Go. Timestamps have nanosecond precision.
  2. The data is sent via the API and stored in the database. There’s a loss of precision and the timestamps are stored with microsecond precision.
  3. The test retrieves, via the API, the same data that has just been saved. The timestamps come back with microsecond precision.
  4. The automated test compares the expected values (nanosecond precision) with the actual values (microsecond precision) and finds that they are different. The test fails.

How Did I Resolve It?

Honestly, I don’t know what the right way to solve this problem is. It’s a pretty messy situation reminiscent of floating-point error, and that in fact influenced my solution.

The first thing I did was strip off the extra precision. Go time has a Truncate() function you can use for exactly this purpose. For instance, let’s try the following:

func main() {
	fmt.Println(time.Now())
	fmt.Println(time.Now().Truncate(time.Microsecond))
}

The output on my Linux laptop then looks like:

$ go run main.go
2022-10-27 19:06:52.427287572 +0200 CEST m=+0.000013810
2022-10-27 19:06:52.427347 +0200 CEST

Doing this on my Mac would, I suppose, mean that the Truncate() is a no-op and just returns the same thing – at least that’s the behaviour I observed when testing it out.

However, in practice, Truncate() alone didn’t solve the problem. Some tests kept failing intermittently because the last digit (the microsecond) was off by one. I’m not sure what was causing this – perhaps some kind of rounding, e.g. when the data goes into the database.

So I used the same approach as with floating-point equality comparisons and decided to allow an error of one microsecond, i.e. if two timestamps are at most one microsecond apart, we assume they’re still equal. This is as simple as using Sub() to calculate the interval between two timestamps and considering them equal if it is not greater than one microsecond:

func timestampsEqual(a, b time.Time) bool {
	return b.Sub(a) <= time.Microsecond
}
Let’s test it out with some code:

func main() {
	t1 := time.Now().Truncate(time.Microsecond)
	t2 := t1.Add(time.Microsecond)
	t3 := t2.Add(time.Microsecond)

	fmt.Println("t1 == t2: ", timestampsEqual(t1, t2))
	fmt.Println("t1 == t3: ", timestampsEqual(t1, t3))
}

And the output would be:

$ go run main.go
t1 == t2:  true
t1 == t3:  false

So, it’s not great, and I’m sure there are ways to improve it, but it seems to have solved my problem at least. One thing I’d like to do is use some equivalent of math.Abs() so that the order of the timestamps doesn’t matter. But, I have no idea how to do that with Go time with out-of-the-box functionality.
