I want to come clean: I’ve spent most of my career manually testing the code I’ve written. This might surprise people who have known me or read my work for years, because I have spent much of that time extolling the benefits of CI pipelines and automated testing for embedded systems. But my view of testing was originally limited to on-device tests. Only since around 2016 have I been thinking about how to develop as much of my embedded software off-target as possible. I started adding unit tests to our libc in 2017. I didn’t learn about TDD and testing without hardware until around 2018. When it comes to writing off-target tests, I am still learning.
While I now write tests for most new code, the majority of the code I’ve written is untested. Some of it is “trusted” because it has been used and tested (on-device) in multiple systems for years. But that only means so much: my confidence in the code only lasts as long as I’m not making any changes, and there are certainly problematic cases that I did not encounter in a particular system configuration.
My Policy on Bringing Untested Code Under Test
Going back and writing unit tests for all of your pre-existing code doesn’t make economic sense. I think you should focus the bulk of your efforts on writing tests for new development. You can bring existing code under test when it makes sense: whenever you’re going to change it.
Once you make changes, that “trusted” code is no longer trusted. You’ve introduced the opportunity for a new mistake to arise. This is an excellent reflection point. Before you make those changes, bring that code under test. This way you can be confident that your changes have the intended effect and that you haven’t introduced new errors.
It doesn’t have to be fully tested – I often advocate for using “big fat tests” that exercise existing code end-to-end. You can always refactor the module to improve its testability later. When you make a change to existing code, you just want to raise the bar by adding a functionality test. At a minimum, make sure the test case covers the changes you are about to make!
A public example of this is an update I made to our Simple Fixed Point Conversion in C article. A commenter pointed out that we could support signed values by switching the fixed_point_t type from uint16_t to int16_t. The suggestion made perfect sense to me, but this is consistently one of the most viewed articles (ranked #10 in 2021), and I didn’t want to publish an update without checking the change first.
At this point, I realized that I didn’t even have the corresponding source code for the article handy – I had ripped it from a project I was working on and put the code straight into the article. There were no tests or examples written for it that I could piggyback on, so I took the time to create a library and set up the CMocka test infrastructure (easy to do, since I’ve built a reusable build module for it). Once my library code compiled properly, I wrote some basic tests to exercise it in its current state:
static void double_to_fixed16_test(__attribute__((unused)) void** state)
{
    fixed_point_t output_round = double_to_fixed_round(11.5);
    fixed_point_t output_truncate = double_to_fixed_truncate(11.5);
    assert_int_equal(0x170, output_truncate);
    assert_int_equal(output_round, output_truncate);

    output_round = double_to_fixed_round(128);
    output_truncate = double_to_fixed_truncate(128);
    assert_int_equal(0x1000, output_truncate);
    assert_int_equal(output_round, output_truncate);

    output_round = double_to_fixed_round(128.28);
    output_truncate = double_to_fixed_truncate(128.28);
    assert_int_equal(0x1009, output_round);
    // Here, truncate loses precision vs round
    assert_int_equal(0x1008, output_truncate);
}
static void fixed16_to_double_test(__attribute__((unused)) void** state)
{
    double output = fixed_to_double(0x1000);
    assert_float_equal(128.0, output, 0.01);

    output = fixed_to_double(0x1009);
    assert_float_equal(128.28125, output, 0.01);

    output = fixed_to_double(0x1008);
    assert_float_equal(128.25, output, 0.01);

    // Confirm equivalency
    output = fixed16_to_double(0x1008, 5);
    assert_float_equal(128.25, output, 0.01);

    // Check an alternate FP strategy - 10.6
    output = fixed16_to_double(0x1008, 6);
    assert_float_equal(64.125, output, 0.01);
}
Since these tests worked, I then changed the fixed_point_t definition:
/// Fixed-point Format: 11.5 (16-bit)
typedef int16_t fixed_point_t;
// If your numbers can only be positive, you can use unsigned to increase range
// typedef uint16_t fixed_point_t;
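The tradeoff hinted at in that comment is concrete: with 5 fractional bits, int16_t covers roughly -1024 to +1023.97, while uint16_t covers 0 to roughly 2047.97, both with a resolution of 1/32. An illustrative way to compute the bounds (my own sketch, not the article’s code):

```c
#include <stdint.h>

// Illustrative only: representable range of a 16-bit Q11.5 fixed-point type,
// signed vs. unsigned, with 5 fractional bits (a scale factor of 32).
#define FP_SCALE (1 << 5)

static double fp_signed_min(void)   { return INT16_MIN / (double)FP_SCALE; }  // -1024.0
static double fp_signed_max(void)   { return INT16_MAX / (double)FP_SCALE; }  // 1023.96875
static double fp_unsigned_max(void) { return UINT16_MAX / (double)FP_SCALE; } // 2047.96875
```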
Re-running the tests, everything still passed – a great sign! Now all I needed was to add new tests with negative values:
static void double_to_fixed16_test(__attribute__((unused)) void** state)
{
    // ... truncated

    output_round = double_to_fixed_round(-128);
    output_truncate = double_to_fixed_truncate(-128);
    assert_int_equal((int16_t)0xf000, output_truncate);
    assert_int_equal(output_round, output_truncate);

    // ... truncated

    output_round = double_to_fixed_round(-64.28);
    output_truncate = double_to_fixed_truncate(-64.28);
    assert_int_equal((int16_t)0xf7f7, output_round);
    // Here, truncate loses precision vs round
    assert_int_equal((int16_t)0xf7f8, output_truncate);
}
static void fixed16_to_double_test(__attribute__((unused)) void** state)
{
    // ... truncated

    output = fixed_to_double(0xf000);
    assert_float_equal(-128.0, output, 0.01);

    output = fixed_to_double(0xf7f7);
    assert_float_equal(-64.28125, output, 0.01);

    // ... truncated
}
This shows the expected results:
ninja -C buildresults embedded-resources-tests
ninja: Entering directory `buildresults'
[0/1] Running external command embedde...es-tests (wrapped by meson to set env)
[==========] Running 2 test(s).
[ RUN ] double_to_fixed16_test
[ OK ] double_to_fixed16_test
[ RUN ] fixed16_to_double_test
[ OK ] fixed16_to_double_test
[==========] 2 test(s) run.
[ PASSED ] 2 test(s).
Are these tests perfect and completely comprehensive? No. But in half an hour I built up a safety net by adding tests to previously untested code. This safety net allowed me to make the int16_t change with confidence. Even if it is still imperfect, my systems and my code are now in a better state. This is what matters most to me.
Source Code
You can find the source code and unit tests for this article in the embeddedartistry/embedded-resources GitHub repository.
References
- Course: Building Testable Embedded Systems
- Course: Building a Reusable Project Skeleton with Meson
- Course: Building a Reusable Project Skeleton with CMake
- embeddedartistry/embedded-resources
- What I Learned from James Grenning’s Remote TDD Course
- Field Atlas: Testing
- Embedded Systems Testing Resources
- Simple Fixed Point Conversion in C
- CMocka

Totally agree! I had read about TDD and unit testing for years, but I never even went so far as to download or use a unit testing library until I started working through a book called “Algorithmic Thinking” by Daniel Zingaro. It uses a few coding websites as the basis for its problems, and on these websites are hundreds or thousands of little coding problems with integrated unit tests. After a short description and an example input/output, you’re allowed to submit your code and see if you’ve passed all the tests. Most require that the solution not only be correct but also quick. I think it was the combination of (1) not worrying about cross-compiling or testing on hardware and (2) the problems having tricky solutions that I knew I wasn’t going to be able to test manually that made unit testing finally click. I was like, “Of COURSE I’m going to need some testing infrastructure; how else am I going to know that this thing works with all of my tricky code to make it faster, and also that it runs within the allotted time?” If anyone else is looking for a launching point for unit testing, I’d recommend starting there, i.e., with small, self-contained problems. Other examples might be writing your own libraries for linked lists, fixed-point math, binary trees, circular buffers, stuff like that.
I started by using Unity, which I liked because it was 100% written in C (I felt like I could better understand how it works if I needed to dive into the source code) and also because the developing company, Throw the Switch, wrote a few “get started” build documents. So, for instance, I just needed to download their makefile, change the source file definitions, and PRESTO! I was unit testing. That also ended up getting in the way, though, since at some point I wanted to declare different test groups or something like that, and I couldn’t figure out what the makefile was doing or how I could use Unity “from scratch”. That’s made me interested in the smaller, header-only unit testing frameworks out there (such as munit, greatest, minunit, or Catch2). I know they won’t be as feature-rich as the more popular frameworks, but my thought is that if the whole thing is defined in a single header or header/source file, then I’ll be able to understand it more easily. Not knowing what’s going on in my build system when I say “Presto change-o, test this source code!” was also a bit of a barrier to entry for me, at first.
The effort is quite high for those who never/rarely write tests… it’s just a matter of getting used to it 🙂
I would probably split those test cases to keep one assertion per test; it could improve readability, and each test would have only one reason to fail.
While I agree readability could be improved, I don’t think one assertion per function is the way to do it for these particular tests. Each test currently does have one reason to fail, at least conceptually: conversion in a given direction with a given fixed-point size failed. I do think an argument could be made for splitting the _round and _truncate functions into different test cases, and then perhaps an equivalency test isn’t necessary. If readability were the main metric, I would make a table of inputs/outputs and iterate over them in the given test, like is done in this test. However, that approach has one significant downside in CMocka: you get an assertion failure message and a line number, but depending on the assertion type and failure value, you have to perform a debug print to figure out exactly which case failed. Everything has tradeoffs!