
Filling Go Structs in VS Code

If you program in Go, then you work with structs all the time. There’s a handy little tool in VS Code that you can use to quickly populate an empty struct with all its fields instead of writing them by hand. With your cursor over the name of the struct, bring up the context menu by doing one of the following:

  • Clicking on the light bulb to the left
  • Pressing Ctrl+. (Windows/Linux)
  • Pressing Cmd+. (Mac)
This context menu lets you fill a struct’s fields.

Then, click on the “Fill” option or press ENTER to accept it, and the struct’s fields will be added along with default values for each according to their type:

The struct’s fields have been added with default values.

This little productivity tool is great, especially when you’re mapping data across structs in different parts of your application.

Go To Line Number in Visual Studio Code

If you want to go to a specific line number in a file, there are a couple of ways to do that with Visual Studio Code, depending on whether you’re already in that file or not.

Same File

Go To Line in the same file.

If you’ve got a file already open and want to hop to a specific line number in it, just use the handy Go To Line shortcut – that is Ctrl+G (Windows/Linux) or Control+G (Mac). Then you just enter the line number in the prompt.

Different File

Going to a specific line in another file.

If you want to go to a specific line in another file, i.e. one you don’t already have open in front of you, there are a couple of things you can do.

The first is to open that file and use the same “Go To Line” shortcut described in the previous section. There are several ways to open the file, but the quickest is probably to use the handy Ctrl+P (Windows/Linux) or Command+P (Mac) shortcut to search for the filename.

The second way is to enter the line number along with the filename in the same Ctrl+P/Command+P prompt! To do this, type the filename in full, followed by a colon (:), and then the line number. That will open the file at the specified line number.
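
For example, assuming you have a file called main.py somewhere in your workspace (a hypothetical filename, just for illustration), typing the following into the Ctrl+P/Command+P prompt opens it and jumps straight to line 42:

main.py:42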

Migrating Cartography to Memgraph

I’ve recently written about how to use Cartography to collect infrastructural data from both AWS and Okta into a Neo4j graph database for security analysis.

Neo4j is a long-standing player in the graph database market, with a robust product, great documentation, and a massive following. However, its long legacy is in a way also a disadvantage: it can be costly, slow, and resource-hungry (due in no small part to its reliance on the JVM). For any of these reasons, people sometimes look for an alternative.

Memgraph, on the other hand, is a relatively young graph database, and certainly not as fully-featured as Neo4j. A key difference is that it is written in C++ and designed to be faster and more lightweight than Neo4j (whether it lives up to this is something you’ll need to evaluate for your own use cases). Memgraph also made a very wise decision to support the Bolt protocol and the Cypher language – both of which Neo4j uses – meaning that it’s compatible with existing Neo4j clients and queries. Although there are variations in Cypher dialect, the incompatibilities are few, and moving from Neo4j to Memgraph is significantly less painful than, say, transitioning to a graph database that uses Gremlin as its query language.

At the time of writing this article, Cartography requires Neo4j 4.x, and does not work with Memgraph. However, I’m going to show you how to make at least part of it (the Okta intel module) work with minor alterations to the Cartography codebase. This serves as a demonstration of how to get started migrating an existing application from Neo4j to Memgraph.

Running Memgraph

Before we start looking at Cartography, let’s run an instance of Memgraph. To do this, we’ll take a tip from my earlier article, “Using the Neo4j Bolt Driver for Python with Memgraph“, and run it under Docker as follows (drop the sudo if you’re on Mac or Windows):

sudo docker run --rm -it -p 7687:7687 -p 3000:3000 -e MEMGRAPH="--bolt-server-name-for-init=Neo4j/" memgraph/memgraph-platform

That --bolt-server-name-for-init=Neo4j/ is a first critical step in Neo4j compatibility. As explained in that same article, the Neo4j Bolt Driver (i.e. client) for Python (which Cartography uses) checks whether the server sends an “agent” value that starts with “Neo4j/”. By setting this, Memgraph is effectively posing as a Neo4j server, and the Neo4j Bolt Driver for Python can’t tell the difference.

Update 19th September 2023: as of Memgraph v2.11, --bolt-server-name-for-init has a default value compatible with the Neo4j Bolt Driver, and therefore no longer needs to be provided.

If it’s successful, you should see output such as the following:

Memgraph is running. You can also execute queries directly from here.
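
Before touching Cartography at all, you can optionally verify from Python that the Neo4j Bolt Driver is able to talk to Memgraph. The following is a minimal sketch, assuming the neo4j package is installed (e.g. with pip3 install neo4j) and Memgraph is running as above; the arbitrary “ignore” credentials are explained further down:

from neo4j import GraphDatabase

# Connect to Memgraph using the Neo4j Bolt Driver for Python.
# The "ignore"/"ignore" credentials are arbitrary (see the launch
# configuration section below for why they are needed at all).
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("ignore", "ignore"))

with driver.session() as session:
    result = session.run("RETURN 1 AS ok")
    print(result.single()["ok"])  # prints 1 if the connection works

driver.close()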

Cloning the Cartography Repo

The next thing to do is grab a copy of the Cartography source code from the Cartography GitHub repo:

git clone https://github.com/lyft/cartography.git

Next, run the following command to install the necessary dependencies:

pip3 install -e .

Note: in the past, I’ve usually had to upgrade the Neo4j Bolt Driver for Python to 5.2.1 to get anything working, but as I try this again, it seems to work even with the default 4.4.x that Cartography uses. If you have problems, try changing setup.py to require neo4j>=5.2.1 and run the above command again.

Creating a Launch Configuration in Visual Studio Code

In order to run Cartography from its source code, you could run it directly from the terminal, for instance:

cd cartography/cartography
python3 __main__.py

However, as I’ve recently been using Visual Studio Code for all my polyglot software development needs, I find it much more convenient to set up a launch configuration that allows me to easily debug Cartography and pass whatever command-line arguments and environment variables I want.

The following launch.json is handy to run Cartography with an Okta configuration as described in “Getting Started with Cartography for Okta“:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Run Cartography",
            "type": "python",
            "request": "launch",
            "program": "cartography/__main__.py",
            "console": "integratedTerminal",
            "justMyCode": true,
            "args": [
                "--neo4j-user",
                "ignore",
                "--neo4j-password-env-var",
                "NEO4J_PASS",
                "--okta-org-id",
                "dev-xxxxxxxx",
                "--okta-api-key-env-var",
                "OKTA_API_TOKEN"
            ],
            "env": {
                "NEO4J_PASS": "ignore",
                "OKTA_API_TOKEN": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
            }
        }
    ]
}

You might notice we’re telling Cartography to connect to Neo4j (Memgraph actually) with username and password both set to “ignore”. The reason for this is that while the Community Edition of Memgraph does not require (or support) authentication, the Neo4j Bolt Driver for Python (i.e. Neo4j client) does require a username and password to be provided. So, as a second critical compatibility step, we pass any arbitrary value for the Neo4j username and password so long as they are not left empty.

As for the Okta configuration, remember to replace the Organisation ID and API Token with real ones.

Incompatible Index Creation Cypher

Pressing F5, we can now run Cartography from inside Visual Studio Code, and we immediately run into the first problem:

The error says: “no viable alternative at input ‘CREATEINDEXIF'”.

Memgraph is choking on the index creation step in indexes.cypher (in VS Code, use Ctrl+P / Command+P to quickly locate the file) because the index creation syntax is one aspect of Memgraph’s Cypher implementation that is not compatible with that of Neo4j. If we take the first line in the file, the Neo4j-compatible syntax is:

CREATE INDEX IF NOT EXISTS FOR (n:AWSConfigurationRecorder) ON (n.id);

…whereas the equivalent on Memgraph would be:

CREATE INDEX ON :AWSConfigurationRecorder(id);

Note: in Memgraph, the “IF NOT EXISTS” bit is implicit: an index is created if it doesn’t exist; if it does, the operation is a no-op that does not cause any error.

Fortunately, this syntactic difference is easily resolved by replacing (using VS Code search & replace syntax, with regex enabled) this:

CREATE INDEX IF NOT EXISTS FOR \(n:(.*?)\) ON \(n.(.*?)\);

…with this:

CREATE INDEX ON :$1($2);

Tip: although not in scope here, you’ll need to make a similar change also in querybuilder.py and tx.py if you also want to get other intel modules (e.g. AWS) working.
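
If you prefer to script this change rather than use VS Code’s search & replace, a small Python snippet can perform the same rewrite. This is just a sketch; the path to indexes.cypher is an assumption, so adjust it to wherever the file lives in your clone:

import re

# Path to Cartography's index creation file (adjust to your clone).
path = "cartography/data/indexes.cypher"

with open(path) as f:
    text = f.read()

# Rewrite Neo4j-style index statements into Memgraph's syntax.
pattern = r"CREATE INDEX IF NOT EXISTS FOR \(n:(.*?)\) ON \(n\.(.*?)\);"
text = re.sub(pattern, r"CREATE INDEX ON :\1(\2);", text)

with open(path, "w") as f:
    f.write(text)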

Neo4j Result Consumption

After fixing the index creation syntax and rerunning Cartography, we run into another problem:

The error says: “The result is out of scope. The associated transaction has been closed. Results can only be used while the transaction is open.”

I’m told that consume() is used to fix a problem in which Neo4j connections hang in situations where internal buffers fill up, although the Cartography team is re-evaluating whether this is necessary. In practice, I have seen that removing this doesn’t seem to cause problems with datasets I’ve tested with, although your mileage may vary. Let’s fix this problem by removing usage of consume() in statement.py.

First, we drop the .consume() at the end of line 76 inside the run() function:

    def run(self, session: neo4j.Session) -> None:
        """
        Run the statement. This will execute the query against the graph.
        """
        if self.iterative:
            self._run_iterative(session)
        else:
            session.write_transaction(self._run_noniterative)
        logger.info(f"Completed {self.parent_job_name} statement #{self.parent_job_sequence_num}")

Then, in the _run_iterative() function, we remove the entire while loop (lines 120-128) except for line 121, which we de-indent:

        # while True:
        result: neo4j.Result = session.write_transaction(self._run_noniterative)

            # Exit if we have finished processing all items
            # if not result.consume().counters.contains_updates:
            #     # Ensure network buffers are cleared
            #     result.consume()
            #     break
            # result.consume()

When we run it again, it should finish the run without problems and return control of the terminal with the prompt showing:

...
INFO:cartography.sync:Finishing sync stage 'duo'
INFO:cartography.sync:Starting sync stage 'analysis'
INFO:cartography.intel.analysis:Skipping analysis because no job path was provided.
INFO:cartography.sync:Finishing sync stage 'analysis'
INFO:cartography.sync:Finishing sync with update tag '1689401212'
daniel@andromeda:~/git/cartography$

Querying the Graph

The terminal we’re using to run Memgraph has the mgconsole client running (that’s the memgraph> prompt you see in the earlier screenshot), meaning we can try running queries directly there. For starters, we can try the ubiquitous “get everything” Cypher query:

memgraph> match (n) return n;

Note: if you get a “mg_raw_transport_send: Broken pipe” error, just run the query again and it should reconnect.

This gives us some data back:

Querying Memgraph using mgconsole.

As you can see, this is not great for visualising results. Fortunately, Memgraph has its own web client (similar to Neo4j Browser) called Memgraph Lab, which you can access at http://localhost:3000/:

Memgraph Lab: Quick Connect page.

On the Quick Connect page, click the “Connect now” button. Then, switch to the “Query Execution” page using the left navigation sidebar, and you can run queries and view results more comfortably:

Seeing some nodes in Memgraph Lab.

Unlike Neo4j Browser, Memgraph Lab does not return relationships by default when you run this query. If you want to see them as well, you can run this instead:

match (a)-[r]->(b)
return a, r, b

Nodes and relationships in Memgraph Lab.

If the graph looks too cluttered, just drag the nodes around to rearrange them in a way that is more pleasant.

More Cartography with Memgraph

Cartography is a huge project that gathers data from a variety of data sources including AWS, Azure, GitHub, Okta, and others.

I’ve intentionally only covered the Okta intel module in this article because it’s small in scope and easy to digest. To use Cartography with other data sources, additional effort is required to address other problems with incompatible Cypher queries. For instance, at the time of writing this article, there are at least 9 outstanding issues that need to be fixed before Cartography can be used with Memgraph for AWS (that’s quite impressive considering that the AWS intel module is the biggest). Other intel modules may have other problems that need solving; nobody has explored them with Memgraph yet.

Summary

In this article, I’ve shown how one could go about taking an existing application that depends on Neo4j and migrating it to Memgraph. I’ve used Cartography with its Okta intel module to keep things relatively straightforward. The steps involved include:

  1. Running Memgraph with --bolt-server-name-for-init=Neo4j/
  2. Using the same Bolt-compatible Neo4j client, providing arbitrary Neo4j username and password values
  3. Fixing any incompatible Neo4j client code (in this case, consume()), if applicable
  4. Adjusting any incompatible Cypher queries

Getting Started with Cartography for Okta

Cartography is a great security tool that gathers infrastructure and security data from various sources for subsequent analysis. Last year, I wrote an article about Getting Started with Cartography for AWS. Although Cartography focuses mostly on AWS, it also gathers data from several other sources including major cloud and SaaS providers.

In this article, we’ll use Cartography to ingest Okta data. For the unfamiliar, Okta is an enterprise identity management tool that is great for its Single Sign On (SSO) capability. From a single dashboard, it provides seamless access to many different services (e.g. AWS, Gmail, and many others), without having to log in to each one every time. See also: What is Okta and What Does Okta Do?

It’s worth noting before we start this journey that Cartography’s support for Okta isn’t great. It only supports a handful of types, and it uses a retired version of the Okta SDK for Python. Nonetheless, it retrieves the most important types, and they enable analysis of some more interesting attack paths (e.g. an Okta user gaining unauthorised access to resources in AWS).

Creating an Okta Developer Account

We’ll first need an Okta account. There are a few different options including a trial, but for development, the best is to sign up for an Okta Developer account as follows.

Click on the Sign up button in the top-right.
In this confusing selection screen, go for the Developer Edition on the right.
Fill the sign-up form and proceed.

Once you get to the sign-up form, fill in the four required fields, and then either sign up via email or use your GitHub or Google account. Note that Okta demands a “business email”, so you can’t use a Gmail account for this.

After signing up, you’ll get an email to activate your account. Follow its instructions to choose a password, and then you will be logged in and redirected to your Okta dashboard.

The Okta dashboard.

Creating an Okta API Token

Cartography’s Okta Configuration documentation says it’s necessary to set up an Okta API token, so let’s do that. From the Okta Dashboard:

  1. Go to Security -> API via the left navigation menu.
  2. Switch to the “Tokens” tab.
  3. Click the “Create token” button.
Security -> API, Tokens tab, Create token button.

You will then be prompted to enter a name for the API token, and subsequently given the token itself. Copy the token and keep it handy. Take note also of your organisation ID, which you can find either in the URL, or in the top-right under your name (but remove the “okta-” prefix). The organisation ID for a developer account looks like “dev-12345678”.
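
Before involving Cartography, you can optionally check that the token works by calling the Okta API directly. Here’s a minimal sketch using the requests library against Okta’s /api/v1/users endpoint (replace the organisation ID and token with your own):

import requests

org_id = "dev-xxxxxxxx"          # your Okta organisation ID
api_token = "xxxxxxxxxxxxxxxx"   # the API token you just created

# Okta's API authenticates with an "SSWS" token in the Authorization header.
response = requests.get(
    f"https://{org_id}.okta.com/api/v1/users",
    headers={"Authorization": f"SSWS {api_token}", "Accept": "application/json"},
)
response.raise_for_status()

for user in response.json():
    print(user["profile"]["login"])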

Running Neo4j

Before we run Cartography, we need a running instance of the Neo4j graph database, because that’s where the data gets stored after being retrieved from the configured data sources (in this case Okta). When I wrote “Getting Started with Cartography for AWS“, Cartography only supported up to Neo4j 3.5. Thankfully, that has changed. The Cartography Installation documentation specifically asks for Neo4j 4.x, further remarking that “Neo4j 5.x will probably work but Cartography does not explicitly support it yet.” The latest Neo4j Docker image at the time of writing this article seems to be 5.9, and I’m feeling adventurous, so let’s give it a try.

I did explain in “Getting Started with Cartography for AWS” how to run Neo4j under Docker, but we’ll do it a little better this time. Use the following command:

sudo docker run --rm -p 7474:7474 -p 7473:7473 -p 7687:7687 -e NEO4J_AUTH=neo4j/password neo4j:5.9

Here’s a brief explanation of what all this means:

  • sudo: I’m on Linux, so I need to run Docker with elevated privileges. If you’re on Windows or Mac, omit this.
  • docker run: runs a new Docker container with the image specified at the end.
  • --rm: destroys the container after you shut it down. This is because we’re just doing a quick test and don’t want to keep containers around. If you want to keep the container, remove this.
  • -p 7474:7474 -p 7473:7473 -p 7687:7687: maps ports 7473, 7474 and 7687 from the Docker container to the host, so that we can access Neo4j from the host machine. 7474 in particular lets us access the Neo4j Browser, which we’ll see in a moment.
  • -e NEO4J_AUTH=neo4j/password: sets up the initial username and password to “neo4j” and “password” respectively. This bypasses the need to reset the password from the Neo4j Browser as I did in the earlier article. Remember it’s just a quick test, so excuse the silly “password” and choose a better one in production.
  • neo4j:5.9: This is the image we’re going to run – neo4j with tag 5.9.
  • Note that, because of the --rm argument, any data will be lost when you stop the container. You’ll need to use a Docker volume if you want to retain the data (see the example below).
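
For example, the following variation of the command stores the data in a named Docker volume called neo4j_data (the official Neo4j image keeps its data under /data), so it survives across container runs:

sudo docker run --rm -p 7474:7474 -p 7473:7473 -p 7687:7687 -v neo4j_data:/data -e NEO4J_AUTH=neo4j/password neo4j:5.9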

Once the container has started, you can access the Neo4j Browser at http://localhost:7474/, and login using the username “neo4j” and password “password”. We’ll use this later to run Cypher queries, but for now it is a sign that Neo4j is running properly.

The Neo4j Browser’s login screen.

Running Cartography

Following the Cartography Installation documentation, run the following to install Cartography:

pip3 install cartography

As per Cartography’s Okta Configuration documentation, assign the Okta API token you created earlier to an environment variable (the following will set it only for your current terminal session):

export OKTA_API_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Then, run Cartography with the following command:

cartography --neo4j-uri bolt://localhost:7687 --neo4j-password-prompt --neo4j-user neo4j --okta-org-id dev-xxxxxxxx --okta-api-key-env-var OKTA_API_TOKEN

Here’s a brief summary of the parameters:

  • --neo4j-uri bolt://localhost:7687: specifies the Neo4j URI to connect to
  • --neo4j-user neo4j: will login with the username “neo4j”
  • --neo4j-password-prompt: means that you will be prompted for the Neo4j password and will have to type it in
  • --okta-org-id dev-xxxxxxxx: will connect to Okta using the organisation ID “dev-xxxxxxxx” (replace this with yours)
  • --okta-api-key-env-var OKTA_API_TOKEN: will use the value of the OKTA_API_TOKEN environment variable as the API token when connecting to Okta

If you see “cartography: command not found” when you run this (especially on Linux), there’s a very good Stack Overflow answer that explains why this happens and offers a simple solution:

export PATH="$HOME/.local/bin:$PATH"

When you manage to run Cartography with the earlier command, enter the Neo4j password (it’s “password” in this example). It will take some time to collect the data from Okta and will write to the terminal periodically as it makes progress. You’ll know it’s done because you’ll see your terminal’s prompt again, and hopefully won’t see any errors.

Querying the Graph

You should now have data in Neo4j, so open your Neo4j Browser at http://localhost:7474/ and run some queries to look at the data. The easiest to start with is the typical “get everything” query:

match (n) return n

On a fresh new account, this gives you back a handful of nodes and the relationships between them:

Okta data in the Neo4j Browser.

Although this is not great for analysis, it’s all you need to get started using Cartography for Okta. You can get more data to play with by either building out your directory (users, groups, etc) via the Okta Dashboard, or else connecting to a real production account with real data.
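
Another handy query, if you want a quick overview of what Cartography created, is to count the nodes grouped by label. This works regardless of how much Okta data you have:

match (n)
return labels(n) as labels, count(*) as count
order by count desc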

If you want to analyse attack paths from Okta to AWS, then do the necessary AWS setup (see my earlier article, “Getting Started with Cartography for AWS“), and follow Cartography’s Okta Configuration documentation to set up the bridge between Okta and AWS.

Summary

To get Cartography to collect your Okta data:

  1. Sign up for an Okta account if you don’t have one already.
  2. Create an Okta API Token, and take note of your Okta Organisation ID
  3. Run Neo4j
  4. Run Cartography, providing settings to access Neo4j and Okta

Once the data is in Neo4j, you can analyse it and visualise how the nodes are connected. This can help you understand the paths that an attacker could take to breach the critical parts of your infrastructure. In the case of Okta, this is particularly useful when considering how an attacker could exploit the privileges of an Okta user to access resources in other cloud or SaaS providers.

The Wonderful World of Barcodes

There are lots of different ways to encode data in software. For instance, text encodings such as ASCII or UTF-8 allow us to represent strings of arbitrarily complex characters, while base64 allows us to represent even binary data as text and transmit it over channels that are more suited to deal with text (such as many internet protocols).

A barcode scanner next to a stack of books, each with its own barcode.

Barcodes, on the other hand, allow us to label physical objects and read them efficiently, providing a bridge between software and the physical world.

Although barcodes could represent any data, they are usually used to represent numbers. They are most commonly used in retail for fast processing of items being sold, especially by supermarkets, but also by pharmacies, bookshops, electronics stores, and other shops.

Many items we buy, from everyday groceries to books, are labelled with numbers (which I’ll call “product codes”) that form part of one or more standards, such as EAN-13, UPC-A, or ISBN. These numbers are encoded as a barcode on the product itself or its packaging. Several websites exist where you can look up these numbers. Other products, such as fruit, do not have standard identifiers, but that doesn’t prevent shops from applying their own.

Barcode Scanners

Barcode scanners can be used to read these product codes. They are quite cheap and can connect via USB to any system, making them widely compatible with Windows, Linux, macOS, and various cash registers. A barcode scanner is held like a gun and, when the trigger is pulled, it emits a red light that reads the information on the barcode. The scanned number is immediately typed into whatever text field currently has focus, followed by a newline. For instance, if I open a text editor and quickly scan the barcodes on the four books shown in the above picture, I get the following:

9780201633542
9780201633467
9780201433074
9780201633610

Scanning these numbers into a text file is useful for testing or to later bulk-process them, but we can also use this handy input method in other ways. For instance, if we open ISBN Search and click on the search field, scanning a book’s barcode will not only populate the search field with the ISBN number, but also execute the search:

The ISBN Search homepage.
ISBN Search result for 9780201633542 is “TCP/IP Illustrated: The Implementation, Vol. 2”.
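
Incidentally, these ISBNs are also valid EAN-13 numbers, which means we can sanity-check any scanned code in software. Here’s a minimal sketch of the standard EAN-13 check digit calculation:

def is_valid_ean13(code: str) -> bool:
    """Validate an EAN-13 code using its check digit."""
    if len(code) != 13 or not code.isdigit():
        return False

    digits = [int(c) for c in code]
    # Counting positions from the left (1-based), odd positions are weighted 1
    # and even positions are weighted 3; the total must be a multiple of 10.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

print(is_valid_ean13("9780201633542"))  # True
print(is_valid_ean13("9780201633543"))  # False (wrong check digit)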

Barcodes in Retail

Shops that sell products with barcodes on them can use those barcodes as primary keys to identify each product. This makes it very easy to create a database with information about the various products (including name, price, taxes, discounts, etc) and then have the cashier look this up by just entering the barcode. We can simulate this by writing a simple program, e.g. in Python.

First, we define a simple mapping between each product’s barcode and its price. Note that the Decimal values are constructed from strings rather than floats, to avoid floating-point precision issues:

from decimal import Decimal

prices = {
    "9780201633542": Decimal("35.75"),
    "9780201633467": Decimal("28.90"),
    "9780201433074": Decimal("42.30"),
    "9780201633610": Decimal("0.50"),
}

Then, we write a simple loop that reads product codes one by one. When an empty input is received, it means there are no more products, and the total price is calculated:

total_price = Decimal(0)

while True:
    product_code = input("Product code: ")

    if product_code == "":
        break
    else:
        price = prices.get(product_code)
        if price is None:
            print('Unknown product code')
        else:
            total_price += price

print(f'Total: {total_price:.2f}')

Running this, we can scan a couple of barcodes and process a sale in a jiffy:

Product code: 9780201633542
Product code: 9780201633467
Product code: 
Total: 64.65

Building a Book Catalogue

Just as a shop can build a database by scanning barcodes and enriching the codes with additional information, so can anyone. In fact, there are several online databases where you can look up barcodes, such as the aforementioned ISBN Search, EAN-Search, Barcode Lookup, and others. Some even offer APIs, so you can automate the lookup and build a fully-featured database very quickly just by scanning a bunch of barcodes.

Scanning a book’s ISBN number with the Handy Library mobile app.

If you run a library or are just a private individual with a lot of books, you can use this approach to build a catalogue of your books. The easiest way is to use the Handy Library mobile app, which allows you to scan a book’s ISBN number and through it acquire all the book’s metadata too. If you prefer to have your data on a computer, you can export data from Handy Library, or else build your database yourself by scanning barcodes and looking up the information afterwards.
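
To give an idea of what such automation looks like, here’s a minimal sketch that looks up book titles for a couple of scanned ISBNs using the free Open Library API (not one of the sites mentioned above; I’m using it here simply because it requires no API key):

import requests

isbns = ["9780201633542", "9780201633467"]

for isbn in isbns:
    # Open Library exposes book metadata as JSON at /isbn/<isbn>.json.
    response = requests.get(f"https://openlibrary.org/isbn/{isbn}.json")
    if response.ok:
        print(isbn, "-", response.json().get("title"))
    else:
        print(isbn, "- not found")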

QR Codes

Barcodes may be great in retail, but most people don’t carry barcode scanners with them wherever they go. Now that everybody and his dog has a smartphone with a powerful camera, QR codes have emerged as a more recent standard to encode data that smartphones can easily scan with their default camera app. The most common use case is to encode a URL, allowing people to visit it by scanning it with their phone’s camera rather than typing it in. The following are a few creative uses I’ve come across in my travels:

  • Exhibitions: putting a QR code beside the label of an exhibit in an art gallery or museum allows people to scan it and visit a webpage with more information. It’s also possible to have this for multiple languages.
  • Paying bills: in Switzerland, the new standard way of paying bills is to scan a QR code containing all payment details with a bank’s mobile app.
  • Contact details: A long time ago, I had an idea that people could easily exchange contact details by scanning a QR code. Someone has probably done this already. Similar applications I’ve seen in the wild are physical business cards with QR codes that lead to the company or owner’s website, and the LinkedIn app which displays a QR code leading to the user’s profile.
  • Queue ticket info: when you take a ticket and wait your turn in some customer service departments, the ticket contains a QR code that tells you how many people are ahead of you and how long you can expect to wait. You could potentially run a quick errand and keep track of your place while you’re away.
  • Two-Factor Authentication: QR codes are the easiest way to initially exchange the secret in Time-Based One-Time Passwords (TOTP), as shown in “Using Time-Based One-Time Passwords for Two-Factor Authentication“.
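
Generating QR codes programmatically is also trivial. For instance, here’s a minimal sketch using the third-party qrcode Python library (assuming it’s installed, e.g. with pip3 install qrcode[pil]):

import qrcode

# Encode a URL in a QR code and save it as an image.
img = qrcode.make("https://example.com")
img.save("example_qr.png")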

Conclusion

Nowadays, we talk a lot about artificial intelligence, augmented reality, robotics, the Internet of Things, and so on. Many of these fields are awe-inspiring, yet not very accessible to the man in the street.

On the other hand, barcodes have provided a very simple way for software to interact with physical objects for decades. The advantages in efficiency are hard to dispute, as can be seen in the retail industry.

The ubiquity of smartphones and the more recent widespread use of QR codes have enabled the same concept to extend to many other use cases, allowing us to interact with some very traditional settings in novel ways.