A Look at My Portable Embedded Toolkit

Embedded systems developers rely on a variety of tools: debug adapters, power supplies, multimeters, oscilloscopes, logic analyzers, spectrum analyzers, and more.

Much of the equipment we use lives in our offices or labs, since it's too bulky to move around. But for engineers who travel frequently, it's quite helpful to have a portable toolkit. You never know when you'll be stuck in an emergency debugging situation, and having familiar tools on hand is a blessing.

If you're an engineer who travels frequently, or if you're simply looking for useful tools, I hope you can find inspiration in my kit.

My Portable Embedded Toolkit

I've slowly built my portable embedded toolkit over the past ten years, and I've managed to pack a lot of debugging power into a small load. My kit is always on hand when I'm visiting a client, and it's traveled with me to multiple manufacturing builds in China.

My kit consists of the following:

  • Digital multimeter
  • Aardvark I2C/SPI Host Adapter
  • Saleae Logic Analyzer
  • TIAO USB Multi-Protocol Adapter
  • USB Hub
  • A grab bag of wires and clamps
  • Spare jumpers

Most of the kit packs down into a first-edition Saleae Logic 8 case, which was made with a much sturdier shell. I carry the DMM and Aardvark adapter separately in my bag.

Let's take a deeper look at each piece of my kit and the roles they serve.

The major pieces of my embedded toolkit, packed for transport.

The unpacked contents of my Saleae case.

Digital Multimeter

Digital multimeters (DMMs) are an essential tool for anyone working with electronics. I regularly need to measure voltage/current/resistance/capacitance and check continuity between signals.

My portable DMM of choice is the Mastech MS8288, which costs around 30 USD. I purchased my multimeter ten years ago and have yet to find a single cause for complaint.

For low-power tasks, the Mastech MS8288 performs admirably and produces accurate measurements. Once voltages and currents start to rise, you’ll notice inaccuracy (I've seen 3% error while measuring a 48V power supply). With that in mind, this isn't a DMM you'd use for tuning your power settings. For tasks which require precise measurements, you'll need to turn to a higher-precision DMM.
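To put that error figure in perspective, here's a quick back-of-the-envelope calculation (illustrating the scale of a 3% error, not this meter's published specification):

```python
# Rough illustration of what a 3% measurement error means on a 48 V rail.
nominal_v = 48.0
error = 0.03  # 3% error, as observed above; not a datasheet figure

offset = nominal_v * error
low, high = nominal_v - offset, nominal_v + offset

print(f"A true {nominal_v:.0f} V supply could read between "
      f"{low:.2f} V and {high:.2f} V")  # 46.56 V to 49.44 V
```

An error band of almost 1.5 V is fine for sanity checks, but it explains why this isn't the meter for tuning power settings.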

When selecting your own multimeter, make sure it has the following features:

  • Measurement Capabilities:

    • DC voltage 

    • AC voltage

    • Current

    • Resistance

    • Capacitance

  • Continuity check with audible beep

  • Selectable measurement range

  • Kickstand

  • Screen backlight

Everybody needs a multimeter, but you don’t need the most expensive one available.

Aardvark I2C/SPI Host Adapter

The Aardvark I2C/SPI Host Adapter is the newest addition to my toolkit. The Aardvark has been tremendously helpful in tracking down I2C/SPI problems and validating I2C/SPI interfaces. The adapter can operate as both a master and slave, and you can script sequences of commands to send to the device.

Total Phase also supplies libraries that you can use to interact with the adapter programmatically. I've written I2C and SPI drivers for the Aardvark adapter, which enables me to write device drivers from the comfort of my host machine. Once the drivers are working, I can quickly port them to the target platform.
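The key to this workflow is writing the driver against a small transport interface rather than against the adapter itself. Here's a minimal Python sketch of the idea; the FakeI2CBus, register addresses, and values are invented for illustration, and a real host-side setup would implement the same interface on top of Total Phase's Aardvark bindings.

```python
from abc import ABC, abstractmethod

class I2CBus(ABC):
    """Transport interface: implemented once per platform (Aardvark, target MCU, ...)."""
    @abstractmethod
    def write_read(self, addr: int, tx: bytes, rx_len: int) -> bytes:
        ...

class FakeI2CBus(I2CBus):
    """Stand-in transport with a made-up register map, for demonstration only."""
    def __init__(self):
        self.regs = {0x0F: b"\x69"}  # hypothetical WHO_AM_I register and value

    def write_read(self, addr, tx, rx_len):
        return self.regs.get(tx[0], b"\x00" * rx_len)

class SensorDriver:
    """Driver logic written against I2CBus, so it ports without modification."""
    WHO_AM_I = 0x0F  # register address, invented for this example

    def __init__(self, bus: I2CBus, addr: int = 0x6A):
        self.bus = bus
        self.addr = addr

    def read_id(self) -> int:
        return self.bus.write_read(self.addr, bytes([self.WHO_AM_I]), 1)[0]

driver = SensorDriver(FakeI2CBus())
print(hex(driver.read_id()))  # 0x69
```

On the host, an I2CBus implementation would wrap the Aardvark adapter; on the target, it would wrap the platform's I2C peripheral driver. The SensorDriver logic stays identical in both places.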

The newest addition to the toolkit. Useful for debugging I2C/SPI problems and for writing drivers on your host machine.

Saleae Logic Analyzer

When I first started my career, logic analyzers were giant pieces of equipment which lived permanently in the lab. You would spend hours carefully getting set up and configuring the device, and you were chained to the analyzer until you were finished.

When Saleae released their amazingly compact USB logic analyzer, I immediately jumped on board. The Saleae Logic 8 is my favorite tool in my kit. Saleae's logic analyzer software supports a variety of trigger conditions and data resolutions, and it can also decode common communication protocols such as JTAG, SPI, I2C, CAN, and UART.

I'm still using my first edition Saleae Logic 8, but they’ve since overhauled their design and released both 4-channel and 16-channel versions.

I think that eight channels is the sweet spot for a portable analyzer. I've rarely needed to monitor more than eight channels at once, and in those rare cases I can usually work through signal groups in stages. I also find that I regularly use more than four channels, especially when I need to analyze both control signals and a bus (e.g. SPI).

The Saleae Logic 8 is my favorite tool in the toolkit.

TIAO USB Multi-Protocol Adapter

The TIAO USB Multi-Protocol Adapter (TUMPA) has been another invaluable tool in my kit.

TUMPA is built around FTDI’s FT2232H chip. Between OpenOCD and FTDI libraries, you can use the TUMPA as an adapter for SWD, JTAG, SPI, I2C, UART, and digital I/O. The board also sports on-board voltage translation, which can be enabled/disabled through software or with a jumper.

TUMPA allows me to use a single debug adapter across most of my projects. If you work on a variety of projects, having a single debugging adapter can drastically simplify your development environment.

The TUMPA board enables me to carry a single debug adapter for a variety of scenarios.

USB Hub

My laptop doesn't have enough ports to support all of my debugging devices, so I’m always carrying around a small USB hub.

I use Sabrent’s 4-port USB Hub without an external power supply, which I love for its small size and toggle buttons. If you’re working with high-current devices, I recommend purchasing the 4-port hub with a 5V power adapter.

You can use any USB hub you like, but I highly recommend picking one with toggle buttons. Being able to selectively enable and disable ports is helpful when working with embedded devices: I frequently use the buttons to cut power to a device, reset it, and force USB disconnect/connect conditions.

All these USB devices mean that I need to carry a hub in my kit.

Wire Grab-Bag

All of these debug tools need to be hooked up to the target system, so I keep a mixed bag of wires and clips in my kit. I have a mix of male-male, female-male, and female-female jumper wires to handle any manner of connector. I also keep a few pieces of scrap wire for emergency soldering needs.

The clips you see come with the Saleae logic analyzers, but they are just generally useful for clipping pins and boards. You can find all manner of useful clips by searching for “test probe hook clip”.

You can never have enough wires.

Spare Jumpers

Because I keep finding myself in situations where I don’t have enough jumpers, I decided to keep a little baggie of 2.54mm standard jumpers in my kit. These come in handy when you lose a jumper, or your local EE can’t seem to find enough for that new dev board.

There are never enough jumpers when you need them.

What’s in your kit?

I’d love to hear from my readers about the tools you frequently carry around. Leave me a note in the comments!

GitNStats: A Git History Analyzer to Help Identify Code Hotspots

GitNStats is a cross-platform git history analyzer used to identify files within a repository which are frequently updated. High churn can serve as a proxy for identifying files which may have poor implementation quality, lack tests, or be missing a layer of abstraction.

Below I will provide basic instructions for getting and using GitNStats. We'll also look at two of my projects to review high-churn files and their git history. By reviewing the history of these files, we can identify potential problem areas, refactoring projects, and development process improvements.

Table of Contents:

  1. Getting GitNStats
  2. Usage
  3. Client Project Analysis
  4. Jenkins Pipeline Library Analysis
  5. Further Reading

Getting GitNStats

The best place to download the software is the repository's Releases page. Pre-packaged 64-bit releases are provided for OSX 10.12, Ubuntu 14.04, Ubuntu 16.04, and Windows.

To install GitNStats:

  1. Download one of the pre-packaged releases
  2. Create a home for GitNStats, such as within /usr/local/share or your home directory.
  3. Unzip the release package to the target directory
  4. Link the gitnstats binary to a location in your path, such as /usr/local/bin or /bin.
    1. Alternatively, you can add the target directory to your PATH variable

Example workflow included in the README:

# Download release (replace version and runtime accordingly)
cd ~/Downloads
wget <>

# Create directory to keep package
mkdir -p ~/bin/gitnstats

# Unzip the release package (replace with the actual archive name)
unzip <release>.zip -d ~/bin/gitnstats

# Create symlink
ln -s /Users/rubberduck/bin/gitnstats/gitnstats /usr/local/bin/gitnstats

Usage

The primary way to use gitnstats is to run it inside a repository without any arguments. You will see the repository path, the branch, and a list of commit-count/file-path pairs.

$ gitnstats

Repository: /Users/pjohnston/src/ea/templates
Branch: master

Commits    Path
3    oss_docs/
3    oss_docs/
3    oss_docs/
3    oss_docs/
2    oss_docs/
1    Jenkinsfile
1    CI.jenkinsfile
1    .github/
1    .github/
1    oss_docs/
1    jenkins/Jenkinsfile
1    jenkins/CI.jenkinsfile

You can also supply the repository path as a command-line argument, allowing you to invoke gitnstats from outside of a repository:

~$ gitnstats /Users/pjohnston/src/ea/templates
Repository: /Users/pjohnston/src/ea/templates
Branch: master


You can specify a branch name to analyze using the -b or --branch arguments:

$ gitnstats -b avoid-failing-when-delete-a-branch
Repository: /Users/pjohnston/src/ea/scm-sync-configuration-plugin
Branch: avoid-failing-when-delete-a-branch


You can also limit the search to all commits after a certain date using the -d or --date arguments:

$ gitnstats -d 1/1/18
Repository: /Users/pjohnston/src/ea/embedded-framework
Branch: master

Commits    Path
8    docs/development/
5    docs/development/
4    docs/architecture/
3    docs/development/
2    docs/development/

Those are the basic operations supported by gitnstats, and they can be combined together:

$ gitnstats ~/src/ea/libc -b pj/stdlib-test -d 10/30/17
Repository: /Users/pjohnston/src/ea/libc
Branch: pj/stdlib-test

Commits    Path
1    src/stdlib/strtof.c
1    src/stdlib/strtod.c
1    src/gdtoa
1    premake5.lua
1    .gitmodules
1    src/stdlib/strtoll.c
1    src/stdlib/strtol.c

For further instruction, refer to gitnstats --help.
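Under the hood, the churn metric is simply commits-per-file. As a rough illustration of what gitnstats computes (this is not its actual implementation), you can build the same tally from git log output:

```python
from collections import Counter
import subprocess

def file_churn(repo_path="."):
    """Tally commits-per-file for a repository.

    A rough approximation: this ignores renames and deleted files,
    which a real tool needs to handle.
    """
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in log.splitlines() if line.strip())

# Demonstration with canned log output instead of a live repository:
sample = "src/main.c\nMakefile\n\nsrc/main.c\n"
counts = Counter(line for line in sample.splitlines() if line.strip())
print(counts.most_common())  # [('src/main.c', 2), ('Makefile', 1)]
```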

Client Project Analysis

I recently worked on a short-term project for a client, so let's take a look at that project and see how the file churn maps to problems I encountered along the way.

10:38:13 (master) power-system-fw$ gitnstats
Repository: /Users/pjohnston/src/projects/power-system-fw
Branch: master

Commits    Path
34    src/lib/powerctrl/powerctrl.c
34    src/main.c
33    Makefile
26    src/lib/commctrl/commctrl.c
19    src/_config.h
18    src/drivers/i2c/i2c_slave.c
17    src/drivers/can/can.c
13    src/lib/powerctrl/powerctrl.h
13    src/drivers/bmr456/bmr456.c
11    src/drivers/gpio/gpio_interrupt_handler.c
11    src/lib/commctrl/commctrl.h
10    src/drivers/i2c/i2c.c

Eight files have been changed a significant number of times, and the top three files were changed roughly three times as often as the files below the top ten.

That's a pretty huge gap, so let's look at the history to see what's going on with our top three files:

  • main.c was updated every time a new library or driver was added and required initialization.
    • The abort and error handling functions are included in main.c and received multiple functionality updates (stopping threads, sending a UART message, LED error code)
      • These handlers should be split into a different file
    • Static functions received Doxygen updates in separate commits - I can clearly be better about documenting while writing a function
  • powerctrl.c is the library which provides power control abstractions and power-state management
    • Timing parameters have been updated multiple times after validation efforts
      • These values should be configurable and moved into _config.h - churn should happen there
    • Due to timing problems, the library was overhauled to add in a thread which managed power state changes
      • Significantly less churn happens after this change
    • As new parts and drivers were brought up, they were added into the power control library individually
  • Makefile was updated every time a new source file was created.
    • Significant churn happened when bringing up the project on Linux, as differences between gcc versions and case-sensitive file systems identified a series of changes that needed to be made
      • These changes weren't made on a branch, but instead committed and tested with a new build on the build server.
      • This is terrible development practice on my end. I should have been testing locally in a VM or by using a branch.

By looking at the statistics, I can uncover some design work and refactoring efforts that will improve the project. I also see the results of some expedient choices I made, resulting in terrible development practices and unnecessary file churn. Now these facts are logged in git history forever.

What About Recent Changes?

The project was officially delivered on 6/1/18, so let's see what modifications have been made after client feedback:

$ gitnstats -d 6/2/18
Repository: /Users/pjohnston/src/projects/power-system-fw
Branch: master

Commits    Path
1    src/drivers/gpio/gpio_interrupt_handler.c
1    src/lib/powerctrl/powerctrl.c

Not too bad after all, though both gpio_interrupt_handler.c and powerctrl.c are in the high-commit list in the overall history analysis. If these libraries continue to show edits, I know I need to spend more time thinking about the structure and interfaces of these files.

Jenkins Pipeline Library Analysis

The Jenkins Pipeline Library is an open-source library for use by Jenkins multi-branch pipeline projects. I use this library internally to support complex Jenkins behaviors, as well as with some client Jenkins implementations.

Let's see what the highest-churn files for this project are:

10:41:59 (master) jenkins-pipeline-lib$ gitnstats
Repository: /Users/pjohnston/src/ea/jenkins-pipeline-lib
Branch: master

Commits    Path
15    vars/sendNotifications.groovy
11    vars/gitTagPreBuild.groovy
10    vars/slackNotify.groovy
5    vars/gitTagCleanup.groovy
4    vars/gitTagSuccess.groovy
4    vars/setGithubStatus.groovy
4    vars/emailNotify.groovy
4    vars/gitBranchName.groovy


Wow, the top three files have each been edited ten or more times.

Clearly there is a problem, which is made even worse by the fact that sendNotifications.groovy was split off into two separate functions: slackNotify.groovy and emailNotify.groovy. The fact that sendNotifications.groovy was managing two separate notification paths was the cause of the initial churn on that file, and it certainly led to overly complex logic. Splitting the file into two separate functions was A Good Thing.

Diving into the slackNotify.groovy changes, I can see that I was very thoughtless in my initial implementation and committing strategy.

Two commits were actual feature extensions:

  1. Add an option to use blueOcean URLs for slack notifications
  2. Improve output for builds with no changes or first-builds: The commit that was built will be indicated in the message

The rest of the changes were formatting errors, typos, and other fixes for easily-identified errors.

There are some clear lessons here:

  1. I can identify and address problematic files long before 25 total changes (sendNotifications.groovy + slackNotify.groovy)
  2. To avoid high churn on a file, follow good development processes. Expediency creates terrible history and higher-than-necessary churn. I would be embarrassed to do this on a professional project, so why did I take the expedient route on a personal (and public!) project?

Further Reading

Documenting Architectural Decisions Within Our Repositories

I recently discovered Michael Nygard's article on the subject of Documenting Architecture Decisions. I immediately became interested in using Architecture Decision Records (ADRs) with my projects.

I will provide a brief ADR summary, but I recommend reading Michael Nygard's article before continuing.

Table of Contents:

  1. An Overview of Architecture Decision Records
  2. Using ADRs in Your Projects
    1. Installation
    2. Initialization
    3. Creating a New ADR
    4. Linking ADRs
    5. Superseding ADRs
    6. Other adr-tools Tricks
      1. Listing ADRs
      2. Generating Summary Documentation
      3. Upgrading the ADR Document Format
  3. Putting it All Together
  4. Further Reading

An Overview of Architecture Decision Records

The motivation for using ADRs comes from a common scenario that all developers become familiar with:

One of the hardest things to track during the life of a project is the motivation behind certain decisions. A new person coming on to a project may be perplexed, baffled, delighted, or infuriated by some past decision. Without understanding the rationale or consequences, this person has only two choices:

1. Blindly accept the decision
2. Blindly change it.

Instead of leaving developers to operate blindly, we should record significant decisions affecting the structure, dependencies, interfaces, techniques, or other aspects of our code.

Rather than maintain a large document which nobody will read, we'll house these decisions within our repositories so they are easily accessible.

The ADR format summarizes decisions in five parts:

  1. Title
  2. Context
  3. Decision
  4. Status (e.g. proposed, accepted, deprecated, superseded)
  5. Consequences (good, bad, neutral)

ADR records should be kept short (maximum of two pages) so they are easily digestible by developers.
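To make the format concrete, here's a small Python sketch that renders the five parts into Markdown. It's purely illustrative: the example decision's content is made up, and this is not how adr-tools works internally.

```python
def render_adr(number, title, date, status, context, decision, consequences):
    """Render a five-part ADR as a Markdown document (illustrative only)."""
    return (
        f"# {number}. {title}\n\nDate: {date}\n\n"
        f"## Status\n\n{status}\n\n"
        f"## Context\n\n{context}\n\n"
        f"## Decision\n\n{decision}\n\n"
        f"## Consequences\n\n{consequences}\n"
    )

# A made-up example decision:
print(render_adr(
    2, "Use a hardware abstraction layer", "2018-03-21", "Accepted",
    "Drivers are currently tied to a single vendor SDK.",
    "All drivers will target an internal HAL interface.",
    "Porting cost drops, at the price of maintaining one extra interface layer.",
))
```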

One ADR will document one significant decision. If a decision is reversed, amended, deprecated, or clarified, we'll keep the corresponding ADR. We'll generate a new ADR, link the related decisions together, and mark the previous decision with a relevant status note.

By keeping a full history of decisions, we help developers see the evolution of our decisions through time and provide the full context for each decision.

Now that we have a basic understanding of what an ADR is, let's see how we can use them in our projects.

Using ADRs in Your Projects

The free adr-tools project allows you to create and manage architecture decisions directly within your projects. No need to worry about managing yet-another-document in some-other-place-we-can't-remember.

ADRs are numbered in a sequential and monotonic manner (0001, 0002, 0003, …). The records are created as Markdown files so they can be parsed by GitHub and documentation tools.
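The numbering convention is simple enough to sketch. The helper below mimics it for illustration; adr-tools handles this bookkeeping for you:

```python
import re

def next_adr_filename(existing, title):
    """Compute the next record name following the NNNN-slugified-title.md convention."""
    numbers = [int(m.group(1)) for f in existing
               if (m := re.match(r"(\d{4})-", f))]
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{max(numbers, default=0) + 1:04d}-{slug}.md"

print(next_adr_filename(
    ["0001-record-architecture-decisions.md", "0002-remove-simulator.md"],
    "Meson Build System",
))  # 0003-meson-build-system.md
```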

Installation

adr-tools can be installed by adding the git project or a packaged release to your PATH.

Alternatively, OS X users can install adr-tools with Homebrew:

brew install adr-tools

Initialization

Once adr-tools is installed, you will need to enable support inside of your repository using the adr init command. The command takes an argument which specifies where the ADRs should live. For example:

adr init doc/architecture/decisions

The adr init command will create the first ADR in your repository, which notes that you have decided to record architecture decisions:

# 1. Record architecture decisions

Date: 2018-03-20

## Status

Accepted

## Context

We need to record the architectural decisions made on this project.

## Decision

We will use Architecture Decision Records, as described by Michael Nygard in this article:

## Consequences

See Michael Nygard's article, linked above. For a lightweight ADR toolset, see Nat Pryce's _adr-tools_ at

Creating a New ADR

To create a new ADR, use the adr new command:

adr new Title For My Decision

This will create a new decision record named in the form NNNN-title-for-my-decision.md inside your ADR directory.

If the VISUAL or EDITOR environment variables are set, the editor will automatically open the file. Otherwise you will need to manually open the file for editing.

Linking ADRs

You can link two ADRs together using the adr link command:

adr link SOURCE LINK TARGET REVERSE-LINK

The SOURCE and TARGET arguments are references to an ADR, which can be either a number or a partial filename. The LINK argument is a description that will be added to the SOURCE ADR, and the REVERSE-LINK argument is a description that will be added to the TARGET ADR.

For example, here is a link which indicates ADR 12 amends ADR 10:

adr link 12 Amends 10 "Amended by"

You can also link ADRs when creating a new one using the -l argument:

adr new -l "TARGET:LINK:REVERSE-LINK" TITLE

Similarly to the arguments for the adr link command, TARGET references the ADR which we are linking to, LINK is the description that will be added to our new ADR, and REVERSE-LINK is the description which will be added to the TARGET ADR.

To use our amendment example above:

adr new -l "12:Amends:Amended by" Brand New Decision

You can provide multiple -l options when creating a new ADR to enable linking to multiple existing records.

Superseding ADRs

When creating a new ADR, you can indicate that it supersedes an existing ADR using the -s argument:

adr new -s 12 Brand New Decision

The status of the superseded ADR (0012 in the example above) will be updated to indicate that it is superseded by the new ADR. The newly created ADR will also have a status which indicates the ADR that it is superseding.

You can provide multiple -s options when creating a new ADR.
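Conceptually, superseding is just bookkeeping in each record's Status section. The sketch below illustrates the kind of edit that happens to the old record; it is a simplification for illustration, not adr-tools's actual implementation:

```python
def mark_superseded(old_text, new_number, new_title):
    """Replace a record's Status body with a superseded note (illustration only)."""
    note = f"Superseded by {new_number}. {new_title}"
    head, found, tail = old_text.partition("## Status\n\n")
    if not found:
        return old_text  # no Status section; leave the record untouched
    _old_status, sep, rest = tail.partition("\n\n## ")
    return head + "## Status\n\n" + note + sep + rest

old = "# 12. Old decision\n\n## Status\n\nAccepted\n\n## Context\n\n..."
print(mark_superseded(old, 13, "Brand New Decision"))
```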

Other adr-tools Tricks

While creating, linking, and superseding ADRs is primarily how we will interact with adr-tools, other options are available.

Listing ADRs

The adr list command will provide a list of all ADRs in your project:

$ adr list

Generating Summary Documentation

The adr generate command can be used to generate summary documentation. Two options are currently provided: toc and graph.

The toc argument will generate a Markdown-format table of contents:

$ adr generate toc
# Architecture Decision Records

* [1. Record architecture decisions](0001-record-architecture-decisions.md)
* [2. Remove simulator from project](0002-remove-simulator-from-project.md)
* [3. Meson Build System](0003-meson-build-system.md)
* [4. Link With --whole-archive](0004-link-with-whole-archive.md)

The graph argument will generate a visualization of the links between decision records in Graphviz format. Each node in the graph represents a decision record and is linked to the decision record document.

$ adr generate graph
digraph {
  node [shape=plaintext];
  _1 [label="1. Record architecture decisions"; URL="0001-record-architecture-decisions.html"]
  _2 [label="2. Remove simulator from project"; URL="0002-remove-simulator-from-project.html"]
  _1 -> _2 [style="dotted"];
  _3 [label="3. Meson Build System"; URL="0003-meson-build-system.html"]
  _2 -> _3 [style="dotted"];
  _4 [label="4. Link With --whole-archive"; URL="0004-link-with-whole-archive.html"]
  _3 -> _4 [style="dotted"];
}
The link extension can be overridden with the -e argument. For example, to generate a PDF visualization which links to ADRs with PDF extensions:

adr generate graph -e .pdf | dot -Tpdf > graph.pdf

Upgrading the ADR Document Format

If the ADR format changes in a future adr-tools version, you can upgrade to the latest document format using the adr upgrade-repository command.

Putting it All Together

If you're curious about what ADRs look like in practice, I recommend reviewing the adr-tools decision records.

After trying out adr-tools and documenting my architecture decisions, I'm hooked. As a consultant, I frequently work on a variety of projects and am frustrated by the lack of documentation. I hope to leave other developers with the context for my decisions and prevent that frustration from spreading.

I encourage you to give ADRs a try. Keeping a list of running decisions in a simple and digestible manner is much easier than maintaining large specification documents.

Further Reading