NamedType: The Easy Way to Use Strong Types in C++

NamedType is a library written by Jonathan Boccara, a C++ developer and author of the FluentC++ blog. The NamedType library provides a simple interface for using strong types, which are not natively supported in C++.

We often utilize native types, such as int or double, to represent the values in our software. The general nature of these types means that we can easily make mistakes in our code. Consider a simple rectangle constructor:

Rectangle(double width, double height);

If you swapped the width and height arguments in your code, the compiler would never tell you:

double width = 10.5;
double height = 3.0;
Rectangle r(height, width); // wrong but compiles!

By using strong types in our interfaces, we can make our APIs more explicit and rely on the compiler to catch any mistakes:

// Create our strong types
using Width = NamedType<double, struct WidthTag>;
using Height = NamedType<double, struct HeightTag>;
//…
Rectangle(Width width, Height height); // new constructor
//…
Rectangle r(Height(3.0), Width(10.5)); // compiler error - type mismatch!
Rectangle r2(Width(10.5), Height(3.0)); // ok

The great news is that strong types are zero-cost when compiling with -O2 (or -O1 with GCC). It's time to make your APIs more explicit and catch errors during compilation.
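
To show how the strong types might be consumed inside the class, here is a minimal, illustrative sketch. It assumes the NamedType header is already included and uses the library's .get() accessor to read the wrapped value:

using Width = NamedType<double, struct WidthTag>;
using Height = NamedType<double, struct HeightTag>;

class Rectangle
{
public:
    Rectangle(Width width, Height height)
        : width_(width.get()), height_(height.get())
    {
    }

    double area() const { return width_ * height_; }

private:
    double width_;
    double height_;
};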

Further Reading

For more on NamedType and strong types in C++:

Documenting Architectural Decisions Within Our Repositories

I recently discovered Michael Nygard's article on the subject of Documenting Architecture Decisions. I immediately became interested in using Architecture Decision Records (ADRs) with my projects.

I will provide a brief ADR summary, but I recommend reading Michael Nygard's article before continuing.

Table of Contents:

  1. An Overview of Architecture Decision Records
  2. Using ADRs in Your Projects
    1. Installation
    2. Initialization
    3. Creating a New ADR
    4. Linking ADRs
    5. Superseding ADRs
    6. Other adr-tools Tricks
      1. Listing ADRs
      2. Generating Summary Documentation
      3. Upgrading the ADR Document Format
  3. Putting it All Together
  4. Further Reading

An Overview of Architecture Decision Records

The motivation for using ADRs comes from a common scenario that all developers become familiar with:

One of the hardest things to track during the life of a project is the motivation behind certain decisions. A new person coming on to a project may be perplexed, baffled, delighted, or infuriated by some past decision. Without understanding the rationale or consequences, this person has only two choices:

1. Blindly accept the decision.
2. Blindly change it.

Instead of leaving developers to operate blindly, we should record significant decisions affecting the structure, dependencies, interfaces, techniques, or other aspects of our code.

Rather than maintain a large document which nobody will read, we'll house these decisions within our repositories so they are easily accessible.

The ADR format summarizes decisions in five parts:

  1. Title
  2. Context
  3. Decision
  4. Status (e.g. proposed, accepted, deprecated, superseded)
  5. Consequences (good, bad, neutral)

ADRs should be kept short (two pages maximum) so they are easily digestible by developers.

One ADR will document one significant decision. If a decision is reversed, amended, deprecated, or clarified, we'll keep the corresponding ADR. We'll generate a new ADR, link the related decisions together, and mark the previous decision with a relevant status note.

By keeping a full history of decisions, we help developers see the evolution of our decisions through time and provide the full context for each decision.

Now that we have a basic understanding of what an ADR is, let's see how we can use them in our projects.

Using ADRs in Your Projects

The free adr-tools project allows you to create and manage architecture decisions directly within your projects. No need to worry about managing yet-another-document in some-other-place-we-can't-remember.

ADRs are numbered in a sequential and monotonic manner (0001, 0002, 0003, …). The records are created as Markdown files so they can be parsed by GitHub and documentation tools.

Installation

adr-tools can be installed by adding the git project or a packaged release to your PATH.

Alternatively, OS X users can install adr-tools with Homebrew:

brew install adr-tools

Initialization

Once adr-tools is installed, you will need to enable support inside your repository using the adr init command. The command takes an argument specifying where the ADRs should live. For example:

adr init doc/architecture/decisions

The adr init command will create the first ADR in your repository, which notes that you have decided to record architecture decisions: 0001-record-architecture-decisions.md.

# 1. Record architecture decisions

Date: 2018-03-20

## Status

Accepted

## Context

We need to record the architectural decisions made on this project.

## Decision

We will use Architecture Decision Records, as described by Michael Nygard in this article: http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions

## Consequences

See Michael Nygard's article, linked above. For a lightweight ADR toolset, see Nat Pryce's _adr-tools_ at https://github.com/npryce/adr-tools.

Creating a New ADR

To create a new ADR, use the adr new command:

adr new Title For My Decision

This will create a new decision record in the form of XXX-title-for-my-decision.md.

If the VISUAL or EDITOR environment variables are set, the editor will automatically open the file. Otherwise you will need to manually open the file for editing.

Linking ADRs

You can link two ADRs together using the adr link command:

adr link SOURCE LINK TARGET REVERSE-LINK

The SOURCE and TARGET arguments are references to an ADR, which can be either a number or a partial filename. The LINK argument is a description that will be added to the SOURCE ADR, and the REVERSE-LINK argument is a description that will be added to the TARGET ADR.

For example, here is a link which indicates ADR 12 amends ADR 10:

adr link 12 Amends 10 "Amended by"

You can also link ADRs when creating a new one using the -l argument:

-l TARGET:LINK:REVERSE-LINK

As with the adr link command, TARGET references the ADR which we are linking to, LINK is the description that will be added to our new ADR, and REVERSE-LINK is the description which will be added to the TARGET ADR.

To use our amendment example above:

adr new -l "12:Amends:Amended by" Brand New Decision

You can provide multiple -l options when creating a new ADR to enable linking to multiple existing records.

Superseding ADRs

When creating a new ADR, you can indicate that it supersedes an existing ADR using the -s argument:

adr new -s 12 Brand New Decision

The status of the superseded ADR (0012 in the example above) will be updated to indicate that it is superseded by the new ADR. The newly created ADR will also have a status indicating which ADR it supersedes.

You can provide multiple -s options when creating a new ADR.
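
After the command above runs, the Status sections of the two records cross-reference each other. Assume, purely for illustration, that ADR 12 is 0012-use-static-buffers.md; the exact wording below approximates the adr-tools output rather than reproducing it.

In 0012-use-static-buffers.md (the superseded decision):

## Status

Superseded by [13. Brand New Decision](0013-brand-new-decision.md)

In 0013-brand-new-decision.md (the new decision):

## Status

Supersedes [12. Use static buffers](0012-use-static-buffers.md)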

Other adr-tools Tricks

While creating, linking, and superseding ADRs is primarily how we will interact with adr-tools, other options are available.

Listing ADRs

The adr list command will provide a list of all ADRs in your project:

$ adr list
docs/architecture/decisions/0001-record-architecture-decisions.md
docs/architecture/decisions/0002-remove-simulator-from-project.md
docs/architecture/decisions/0003-meson-build-system.md
docs/architecture/decisions/0004-link-with-whole-archive.md

Generating Summary Documentation

The adr generate command can be used to generate summary documentation. Two options are currently provided: toc and graph.

The toc argument will generate a Markdown-format table of contents:

$ adr generate toc
# Architecture Decision Records

* [1. Record architecture decisions](0001-record-architecture-decisions.md)
* [2. Remove simulator from project](0002-remove-simulator-from-project.md)
* [3. Meson Build System](0003-meson-build-system.md)
* [4. Link With --whole-archive](0004-link-with-whole-archive.md)

The graph argument will generate a visualization of the links between decision records in Graphviz format. Each node in the graph represents a decision record and is linked to the decision record document.

$ adr generate graph
digraph {
  node [shape=plaintext];
  _1 [label="1. Record architecture decisions"; URL="0001-record-architecture-decisions.html"]
  _2 [label="2. Remove simulator from project"; URL="0002-remove-simulator-from-project.html"]
  _1 -> _2 [style="dotted"];
  _3 [label="3. Meson Build System"; URL="0003-meson-build-system.html"]
  _2 -> _3 [style="dotted"];
  _4 [label="4. Link With --whole-archive"; URL="0004-link-with-whole-archive.html"]
  _3 -> _4 [style="dotted"];
}

The link extension can be overridden with the -e argument. For example, to generate a PDF visualization which links to ADRs with PDF extensions:

adr generate graph -e .pdf | dot -Tpdf > graph.pdf

Upgrading the ADR Document Format

If the ADR format changes in a future adr-tools version, you can upgrade to the latest document format using the adr upgrade-repository command.

Putting it All Together

If you're curious about what ADRs look like in practice, I recommend reviewing the adr-tools decision records.

After trying out adr-tools and documenting my architecture decisions, I'm hooked. As a consultant, I frequently work on a variety of projects and am frustrated by the lack of documentation. I hope to leave other developers with the context for my decisions and prevent that frustration from spreading.

I encourage you to give ADRs a try. Keeping a list of running decisions in a simple and digestible manner is much easier than maintaining large specification documents.

Further Reading

Improving Our Software With 5 Lightweight Processes You Can Adopt This Month

This article comes from the March 2018 edition of our embedded systems newsletter.

I typically use the newsletter as a way to share interesting embedded news, summaries of new parts, libraries I've been investigating, and other miscellanea. However, I thought that the topic I discussed in this newsletter was worth sharing with a wider audience.

It is my sincere hope that your team adopts at least one new process to help improve code quality.

Improving Our Software With 5 Lightweight Processes You Can Adopt This Month

It's that time of year when the Barr Group releases their yearly Embedded Systems Safety & Security Survey results. Last year's results were eye-opening, as nearly 50% of respondents reported not using static analysis and 36% reported that they do not perform code reviews. The 2018 results were no better:

  • 38% of safety-critical products don't comply with a formal safety standard
  • 43% of teams working on safety-critical products don't perform code reviews
  • 41% of teams working on safety-critical devices don't perform regression testing (54% for IoT product teams)
  • 33% of teams working on safety-critical products don't perform static analysis (49% for IoT product teams)

These numbers are even more alarming given the fact that 25% of the reported "internet-connected devices" could kill or injure people if hacked. 22% of respondents mentioned that security for connected devices wasn't even on their to-do list. We're becoming increasingly connected, but our standards for safety, testing, and verification are not keeping pace. I want to be clear: this is not acceptable.

Many teams skip crucial development processes, justifying the omission with scheduling pressure or by blaming the boss. There will always be scheduling pressure, so we must adjust our approaches or we will continue to flounder. Bugs are expensive in time, money, and morale. Debugging accounts for 50% of most project schedules. We all make mistakes. Anything we can do to keep bugs out of our code or catch them as early as possible will save money and time.

I've selected five simple processes to improve quality that your team can adopt over the next month. These processes are cheap or free, apply across languages and platforms, and best of all - they work. While each process requires a bit of time to get up and running, there is little-to-no maintenance involved in continuing to use them.

Here are five lightweight processes for improving code quality and identifying problems early:

  1. Fix all of your warnings
  2. Set up a static analysis tool for your project
  3. Measure and tackle complexity in your software
  4. Create automated code formatting rules
  5. Have your code reviewed

Fix All of Your Warnings

The first thing I do when working on a new project is fix all of the compiler warnings. It's amazing to me how developers will ignore warnings or rationalize their presence. Occasionally you even run into a team who will fight tooth and nail to prevent you from fixing them!

The compiler knows the programming language better than you ever will. You should not ignore the compiler when it alerts you to an issue: a warning means that the way you are using the language is dangerous and likely has unintended side effects. Depending on the warning, you might be introducing undefined behavior into your software. That's not our idea of quality software.

If you have warnings in your code base, fixing them is one of the fastest ways to improve quality. You will fix bugs and flaws in your program, regardless of whether or not they are currently problematic.
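
As an illustrative sketch (the function and variable names are made up), the snippet below contains two genuine bugs that GCC and Clang will point out when warnings such as -Wall and -Wextra are enabled; adding -Werror turns them into hard errors so they cannot be ignored:

#include <cstdio>

int scale(int value)
{
    if (value > 100)
    {
        return value * 2;
    }
    // Typically reported via -Wreturn-type: control can reach the end of a
    // non-void function, which is undefined behavior at runtime.
}

int main()
{
    unsigned int count = 5;
    int offset = -1;

    // Typically reported via -Wsign-compare: offset is converted to unsigned,
    // so this comparison does not behave the way it reads.
    if (offset < count)
    {
        std::printf("%d\n", scale(10));
    }

    return 0;
}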

For more on compiler warnings:

Set Up Static Analysis Support

Static analysis tools provide us with even better feedback than the compiler. Your compiler will happily allow some problematic cases which are legal in the language, such as out-of-bounds pointer accesses or missing initialization values. Your analyzer will catch these problems, and also report red flags such as unused or redundant code. When used throughout the development cycle, your static analysis tool can help you catch and prevent latent problems even earlier than your testing cycle.

Some governmental and industrial organizations are starting to require static analysis data for certification processes. Companies such as PRQA provide tools that can check for compliance with safety-critical standards in a variety of industries.

There are many free static analysis tools available and their commercial counterparts are also inexpensive (most are less than $1000). At Embedded Artistry, we use Clang's static analyzer alongside clang-tidy.
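
As a small, made-up illustration, the function below is accepted as valid C++ by the compiler, while a static analyzer such as Clang's (or clang-tidy running its clang-analyzer checks) can flag both kinds of defect described above: a possible out-of-bounds read and a value that may be returned uninitialized.

int lookup(int index)
{
    int table[4] = {10, 20, 30, 40};
    int result;

    if (index < 4) // bug: a negative index is never rejected
    {
        result = table[index]; // analyzer: possible out-of-bounds read
    }

    return result; // analyzer: result may be uninitialized when index >= 4
}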

Here are some resources you can use to find a static analysis tool that fits your needs:

Measure and Tackle Complexity

By the time you've eliminated warnings on your project and cleaned up glaring problems exposed by your static analysis tool, you've already made significant progress with software quality. The next goal is to measure complexity in your software. Because highly complex functions tend to be hard to understand, test, and maintain, these functions are prime candidates for refactoring and simplification.

By using a metric to measure complexity, we have a quantitative way to evaluate our code and identify pieces that need special attention. We can see how our changes are impacting the code base over time and trigger automatic alerts and reviews whenever a threshold is exceeded. We can focus our code reviews on functions with high complexity scores, making sure they get the most of our limited attention. Metrics aren't perfect, but they increase our insight into our software quality.

These are the simplest and most popular metrics for measuring code complexity:

  • Lines of code (LOC): a count of the non-blank, non-comment source lines in a function, module, or project
  • McCabe cyclomatic complexity (MCC): provides a complexity score based on the number of branches (e.g. conditional statements)
  • Strict cyclomatic complexity (SCC, CC2): expands MCC by considering the number of conditions within each branch, which provides an approximation for the number of test cases needed for full coverage

These free tools will calculate complexity metrics for C/C++. We currently use Lizard at Embedded Artistry.
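
To make these metrics concrete, here is a small, made-up function with its scores worked out in comments. The counting follows the definitions above; individual tools may tally slightly differently.

// McCabe cyclomatic complexity (MCC): start at 1 for the function,
// then add 1 for every decision point.
int count_valid(const int* samples, int length, int min, int max)
{
    int count = 0;

    for (int i = 0; i < length; i++) // +1 (loop condition)
    {
        if (samples[i] >= min && samples[i] <= max) // +1 (branch)
        {
            count++;
        }
    }

    return count; // MCC = 3
}
// Strict cyclomatic complexity (CC2) also counts each condition inside the
// branch, so the && above adds one more: CC2 = 4.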

For more on software complexity:

Create Auto-formatting Rules

Automated code formatting might seem like a strange recommendation to put into the top five, but it serves three purposes:

  1. Automated formatting reduces a programmer's cognitive load by eliminating an entire category of details and decisions they need to keep in mind
  2. Automated formatting improves the quality of our peer code reviews (the next recommendation) by eliminating arguments about style
  3. Automated formatting is the first step toward implementing and enforcing a coding standard

Every team that I've encountered with a written style guide inevitably ignores those guidelines, and multiple programming styles run rampant. Instead of relying on developers to constantly keep an arbitrary set of rules in mind, we can automate the process to make it simple and impersonal. At the very least, it's worth eliminating the pointless, time-wasting arguments that cause friction within our teams.

We use clang-format on Embedded Artistry projects. Uncrustify and Astyle are other popular code formatting tools.
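
As a starting point, clang-format reads its configuration from a .clang-format file at the root of the repository. The option names below are real clang-format settings, but the values are only an illustration of one possible style, not a recommendation:

# .clang-format
BasedOnStyle: LLVM
IndentWidth: 4
UseTab: Never
ColumnLimit: 100
BreakBeforeBraces: Allman
PointerAlignment: Left

With the file in place, developers (or a build-system hook) can reformat sources in place with a command along these lines, where the paths are illustrative:

clang-format -i -style=file src/*.c src/*.h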

For more on automated code formatting:

Have Your Code Reviewed

Writers accept the fact that first drafts are generally garbage and need heavy editing. Before I send this newsletter out into the world, it has usually gone through 2-3 self-editing sessions and 1-2 peer reviews. Along the way, the newsletter is trimmed and restructured. The result is a much better product than the initial draft.

Yet, for some reason, programmers seem to think that perfect code is produced on the first try. The 2018 Barr Group survey results showed that 54% of IoT product teams don't perform regular code reviews. The survey results also show that a painfully scary 43% of teams working on safety-critical software don't perform regular reviews.

Perfect code on the first try might be possible if you're a prodigious programmer. But remember: even Hemingway had an editor. A second set of eyes can identify flaws that you missed in your first pass. Another developer may have different experiences that provide insight into the merits or risks of your approach. The architect on your team probably has input on how a module should interface with other pieces of the program. The "ego effect" also comes into play: knowing that our code will be presented and reviewed by another human can dramatically improve the overall quality. We will spend time cleaning up and checking the logic before putting code up for judgment.

Code reviews can waste time and become unproductive if poorly implemented. Best Practices for Peer Code Review provides some excellent tips for getting started. Notably, a lightweight review process is more efficient and practical than long, in-depth reviews with multiple developers. Even performing reviews on only 20-33% of the submissions provides benefits due to the "ego effect". While 20% may seem low, remember that we are aiming for achievable: reviewing 20% of the source code is definitely better than none.

I highly recommend implementing peer code reviews after setting up automated code-formatting. This helps constrain code review discussions by preventing them from devolving into style nit-picking. If you've set up static analysis, make sure the tools are used prior to code reviews.

For more information on code reviews:

The Keys to Success When Adopting New Processes

When adopting new processes, it's important to focus on one at a time. Adopting new processes in stages ensures that you have time to correctly implement each new technique before moving on to the next one. By implementing too many changes at once, you are likely to overwhelm your team and provoke a mutiny.

If you're leading a team, it helps to find someone who is excited and can help you champion the idea. Empower that person so they can demonstrate the benefits of the new process to your team. Back them up when there is pushback. Change is always hard. Expect the resistance, but don't let it stop you.

The key to making new processes stick is to make them as automated as possible. There is never a case where it isn't worth the time it takes to automate a development process. Automation ensures that the process is easy to follow and always happens, rather than trusting individual contributors to remember to follow a process. Automation also makes the process less personal. The rules are clearly defined and are being enforced by a tool. Depersonalization helps us view the situation dispassionately, rather than as an attack on our abilities.

To recap, when you implement new processes:

  1. Adopt one new process at a time
  2. Empower a process champion on your team
  3. Automate!

Newsletter

If you liked this article, sign up for our monthly newsletter to receive interesting embedded tidbits, articles I've enjoyed reading, summaries of interesting new parts, companies looking for engineers, and other miscellanea.