Unit Testing and Reporting on a Build Server using Ceedling and Unity

Updated: 26 April 2019

Today we have another guest post by Paul Shepherd, this time covering integration of Ceedling and Unity with build status reporting.

Paul is the Lead Electrical Engineer at Benchmark Space Systems. He has spent time in the Aerospace, Consumer Audio, and Integrated Circuits fields, more often than not working at the intersection of hardware and software. You can contact him via LinkedIn.


In the last post, we shared a method for implementing custom build steps in the Eclipse IDE. We used this method at Benchmark Space Systems to add a firmware version to our project.

Unit testing is another best practice that we have embraced, and we are working to cover as much of our embedded code as possible. We chose Ceedling+Unity for our unit testing framework, in part because of its strong integration with the Eclipse IDE. Ceedling works well on our developer workstations, both in Eclipse and at the command prompt, and it was also straightforward to get running on our build server (Jenkins running on Ubuntu). This post focuses on the less straightforward step of capturing the unit testing results and reporting them to Jenkins.

If you are new to Ceedling (or even unit testing in general, like I am) I recommend Matt Chernosky’s eBook A Field Manual for Ceedling. His field manual enabled us to quickly understand and start using these testing tools. I’m also reading James Grenning’s book Test Driven Development for Embedded C.

Ceedling is still pre-1.0 as of February 2019. While it is quite capable, there are some areas lacking documentation, which required us to tinker under the hood to complete our integration. 

An important caveat: this blog post reflects Ceedling 0.28.3, released on 8 August 2018. A PR has been submitted to add coverage reporting via XML output, but this post shows how to modify the 0.28.3 release to get this working. The post will be updated when that PR has been merged to master.

The content in this blog post was developed based on the following software versions:

  • Ceedling: 0.28.3

  • CException: 1.3.1.18

  • CMock: 2.4.6.217

  • Unity: 2.4.3.122

Running a Ceedling test on the local development workstation.

Running and Exporting Test Results from Ceedling

The documentation and how-to articles available for Ceedling do an excellent job of getting you to the point of running tests. There are a few additional steps needed to collect and post results during the Jenkins pipeline build process.

First, the Jenkinsfile was updated to run the test as a new build stage:

stage('Unit testing')
{
     steps
     {
     sh 'ceedling'
     }
}

Running this shell command is sufficient to report overall unit testing status, because Ceedling returns an exit code based on the test results. However, if a test fails, you must manually hunt through the build log to determine the cause. Jenkins has a nice interface for reporting test results and highlighting test failures, but an XML file containing the test result data must be captured during the build process.

In order to post the Ceedling test results in XML format, a new line must be added to Ceedling's project-specific configuration file, project.yml:

:plugins:
  :enabled:
    - xml_tests_report # <--- this line has been added
    - stdout_gtestlike_tests_report

Once the xml_tests_report plugin is added to the plugins section of the configuration file, a report.xml file will be generated in the ($BUILD_DIR)/artifacts/test/ directory.

In order to parse the test results, you will need to install the xUnit plugin. A custom XML formatting style sheet is also required. We use the Jenkins-unity-xml-formatter.

The unity.xsl file can be placed anywhere in the project directory tree. The xUnit command in the Jenkinsfile must reference this file relative to the project root directory ($PROJECT_DIR).

We then add a post step in the Unit Testing Pipeline stage to capture these results:

stage('Unit testing')
{
     steps
     {
          sh 'ceedling'
     }
     post
     {
          always
          {
                xunit tools: [Custom(customXSL: 'unity.xsl', pattern: 'build/artifacts/test/report.xml', skipNoTestFiles: false, stopProcessingIfError: true)]
          }
     }
}

Generating a Code Coverage Report

Several steps are necessary to generate and post the test coverage data. The gcov plugin must be enabled in the project.yml file to generate code coverage data:

:plugins:
  :enabled:
    - gcov # <--- this line has been added
    - xml_tests_report
    - stdout_gtestlike_tests_report

Once the gcov plugin has been enabled, it can be called at the command line by appending gcov:all to the ceedling command.

Code coverage info appended to code test results.

Unfortunately, this doesn't actually generate the coverage report file. Ceedling implements the gcov functionality internally, but to create a report from this data, the Gcovr tool must be installed.

Once Gcovr is installed, we add another line specifying the reporting type to the project.yml file:

:gcov:
  :html_report_type: detailed

Note that the :gcov: section should be defined at the top level. It is not a subsection of anything else in the project.yml file.

Now that gcov and reporting are enabled in the project.yml file, we can generate a coverage report by adding an additional parameter to the Ceedling command line invocation.

$ ceedling gcov:all utils:gcov

Although this looks a bit repetitive, both parameters are necessary: gcov:all runs the test coverage analysis, and utils:gcov calls Gcovr to generate a report in HTML format.

Ceedling's gcov plugin will only generate an HTML report unless we hack the internal plugin configuration. In order to use Gcovr to generate a Cobertura-style XML report, two files must be edited.

To add XML report generation, open the file ($PROJECT_DIR)/vendor/ceedling/plugins/gcov/config/defaults.yml. In the gcov_post_report_advanced section, the --xml argument must be added to the gcovr command, and the --html and --html-reports arguments must be removed.

Modifications to the defaults.yml file to enable XML report generation.

Next, open the file ($PROJECT_DIR)/vendor/ceedling/plugins/gcov/lib/gcov_constants.rb. Update the GCOV_ARTIFACTS_FILE variable to have a file extension of .xml instead of .html.

Modifications to the gcov_constants.rb file to enable XML report generation.

These edits are superseded by a Pull Request in the Ceedling repo, but will be necessary until the PR is merged into master.

Parsing the Code Coverage Report

Gcovr outputs a Cobertura-compliant XML report, which Jenkins can parse with the Cobertura plugin.

Our unit testing pipeline step is updated to use the new Ceedling invocation and to capture the code coverage results:

stage('Unit testing')
{
  steps
  {
    sh 'ceedling gcov:all utils:gcov'
  }
  post
  {
    always
    {
        xunit tools: [Custom(customXSL: 'unity.xsl', pattern: 'build/artifacts/gcov/report.xml', skipNoTestFiles: false, stopProcessingIfError: true)]

        cobertura coberturaReportFile: 'build/artifacts/gcov/GcovCoverageResults.xml'
    }
  }
}

There are many arguments that you can add to your xUnit and Cobertura pipeline steps in order to set healthy/unhealthy boundaries. Cobertura naturally uses percentage of total for its metrics, but for xUnit, you must specify 'thresholdMode: 2' in order for the tool to work in percentage of tests rather than absolute numbers. For unit testing, I feel that relative measures, rather than absolute ones, give a much better view of the overall health of your codebase.

Finally, we see the test results and code coverage reports summarized on the build status page.

Detailed report outputs are available as links from the individual build page.

Our experience has been that bringing these metrics to our build status page keeps us motivated and simplifies communicating our work status to our stakeholders. I hope that the information we shared here is useful to you in improving your own continuous integration process. I’d also like to thank the team at Embedded Artistry for allowing me to share these tips on their blog.

Change Log

  • 20190426

    • Fixed typo: xml_test_reports should be xml_tests_report

    • Thanks Roelof Pantjes!

Embedded Systems Testing Resources

Updated: 2 March 2019

Embedded Artistry was founded with the goal of creating reliable, safe, and well-tested embedded systems. The sad fact is that most embedded software that we've encountered is low-quality and untested. Perhaps this holds true for most of the software industry - the continual procession of hacks, flaws, and errors is discouraging.

We are focused on ways that we can all improve at our craft. Testing our systems, especially in an automated way, is a direct approach to identifying bugs early and improving our software quality.

We decided to make our list of testing resources publicly available. If you have any favorites which aren't listed here, leave us a comment!

Table of Contents:

  1. Unit Testing
    1. James Grenning and TDD
    2. Matt Chernosky and Electron Vector
    3. Throw the Switch
    4. Unit Testing Frameworks
    5. Other Resources
  2. Embedded Debugger-Based Testing
  3. Phil Koopman on Testing and Software Quality

Unit Testing

Most developers know they should be writing unit tests as they develop new features. If you're unfamiliar with the concept, unit testing focuses on testing the finest granularity of our software, the functions and modules, in an independent and automated manner. We want to ensure that the small pieces operate correctly before we combine them into larger cooperating modules. By testing at the finest granularity, we reduce the total number of test combinations needed to cover all possible logic states. By testing in an automated manner, we can ensure that any changes we make don't introduce unintended errors.
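
If you've never written one, here is a minimal sketch of a Unity test module. The add function is a hypothetical stand-in for your own code, and note that when you build with Ceedling, the runner (the main function below) is generated for you:

#include "unity.h"

/* Hypothetical function under test. */
static int add(int a, int b)
{
    return a + b;
}

void setUp(void) {}    /* Runs before each test case. */
void tearDown(void) {} /* Runs after each test case. */

void test_add_returns_the_sum(void)
{
    TEST_ASSERT_EQUAL_INT(5, add(2, 3));
}

void test_add_handles_negative_operands(void)
{
    TEST_ASSERT_EQUAL_INT(-1, add(2, -3));
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_add_returns_the_sum);
    RUN_TEST(test_add_handles_negative_operands);
    return UNITY_END();
}

Each test case passes or fails independently, which is what allows a test runner or build server to report results automatically.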

"A key advantage of well tested code is the ability to perform random acts of kindness to it. Tending to your code like a garden. Small improvements add up and compound. Without tests, it's hard to be confident in even seemingly inconsequential changes."
-Antonio Cangiano (@acangiano)

Unfortunately, at Embedded Artistry we’ve only worked on a handful of projects that perform unit testing. The primary reason for this is that many developers simply don’t know where to start when it comes to writing unit tests for embedded systems. The task feels so daunting and the schedule pressures are so strong that they tend to avoid unit testing altogether.

James Grenning and TDD

James Grenning has put a tremendous amount of effort into teaching embedded systems developers how to adopt TDD. He published an embedded systems classic, Test-Driven Development for Embedded C, and conducts TDD training seminars.

James has written extensively about TDD on his blog. Here are some of our favorite posts:

You can also watch these talks for an introduction into the how and why of TDD:

I took James Grenning's Remote TDD training webinar and wrote about my experience on the blog.

If you're interested in taking a training class, you can learn more on James's website:

Matt Chernosky and Electron Vector

Matt Chernosky, who runs the Electron Vector blog, is an invaluable resource for embedded unit testing and TDD. If you are having a hard time getting started with unit testing and TDD, Matt's articles provide a straightforward and accessible approach.

Here are some of our favorite articles from Matt's blog:

Matt published a free guide for using Ceedling for TDD. If you have experience with TDD but are looking to improve your Ceedling skills, check out his Ceedling Field Manual.

Throw the Switch

Throw the Switch created the Ceedling, Unity, and CMock unit testing suite. This is the trio of tools that Matt Chernosky uses and writes about.

Aside from creating testing tools, Throw the Switch maintains a library of test-related articles and a course called Unit Testing & Other Embedded Software Catalysts. Additional courses related to unit testing for embedded systems are being developed.

Unit Testing Frameworks

Listed below are frequently recommended unit testing frameworks for C and C++. There are more unit testing frameworks in existence than we can ever review, so again this list is not exhaustive. If nothing jumps out at you in this list, keep looking to find one that fits your team’s development style.

We already mentioned the Throw the Switch C unit testing frameworks: Unity, CMock, and Ceedling.

Cmocka is the C unit testing framework we started with. Cmocka ships with built-in mock object support and operates similarly to Unity & CMock. The framework is built with C standard library functions and works well for embedded systems testing.
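
As a rough sketch of what that mock support looks like, a test can queue return values for a faked dependency. The sensor_read and average_of_two_readings functions here are hypothetical:

#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <setjmp.h>
#include <cmocka.h>

/* Test double standing in for a real sensor driver. mock() pops
 * the next value queued by will_return(). */
int sensor_read(void)
{
    return (int)mock();
}

/* Hypothetical code under test. */
static int average_of_two_readings(void)
{
    return (sensor_read() + sensor_read()) / 2;
}

static void test_average(void **state)
{
    (void)state;
    will_return(sensor_read, 10);
    will_return(sensor_read, 20);
    assert_int_equal(15, average_of_two_readings());
}

int main(void)
{
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_average),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}

The test fails if the code under test consumes more or fewer queued values than expected, which keeps the mock honest.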

Catch appears to be the most popular C++ unit testing framework. Catch is a header-only library which supports C++11, C++14, and C++17.

Doctest is the unit test framework we use for EA’s C++ embedded framework project. Doctest is similar to Catch and is also header-only. Our favorite attribute of Doctest is that it keeps the test code alongside the implementation code. Doctest also enables you to write tests in headers, which Catch does not support.

GoogleTest is Google's C++ unit testing framework. GoogleTest is one of the few C++ frameworks with built-in mocking support.

CppUTest is a C++ test suite that was designed with embedded developers in mind. This framework is featured in James Grenning's book Test-Driven Development for Embedded C. C++ features within the framework are kept to a minimum enabling it to be used for both C and C++ unit testing. CppUTest has built-in support for mocking.

If you're interested in mock object support for C++, check out GoogleMock, Trompeloeil, and FakeIt. Each of these mocking frameworks can be integrated with the unit test frameworks mentioned above.

Other Resources

If you're brand-new to TDD, read through this walkthrough to get a sense of the approach:

Test Driven Development: Example Walkthrough

Steve Branam, who writes at Flink and Blink, has written posts on testing:

Steve recommended Jeff Langr's Modern C++ Programming with Test-Driven Development: Code Better, Sleep Better as another resource for embedded C++ developers.

Embedded Debugger-Based Testing

We always want to run as many tests as possible on a host PC when unit testing embedded systems code. However, we can’t test every aspect of our system on a host machine. Before shipping the final system, we need to evaluate the target compiler, issues which only present themselves on the target (e.g. endianness, timing), and the actual target hardware and hardware interfaces.

Metal.test is a framework which can help us with on-target testing. Klemens Morgenstern, an independent contractor and consultant, maintains this project. Metal.test enables automated execution of remote code using debugger hardware, such as a J-Link or ST-Link. The project currently supports gdb; lldb support is planned for a future release.

Metal.test features:

  • I/O forwarding
  • Code coverage
  • Unit testing
  • Call tracing
  • Profiling
  • Function stubbing at link time
  • Extensive documentation (in the GitHub repository wiki)

Metal.test also includes a plugin system. While not essential, the plugin system lets developers extend metal.test to cover any use case a debugger can handle.

Klemens is looking for feedback on metal.test. Don't hesitate to reach out with questions, issues, or other feedback.

Phil Koopman on Testing and Software Quality

Phil Koopman is a professor at Carnegie Mellon University. He also runs the Better Embedded Software blog, which is a must-read for embedded developers.

Phil has produced an immense and invaluable body of work, much of it focused on embedded software quality. The lecture notes for his embedded systems courses are available online, and he posts lecture videos on his YouTube channel.

Here's a selection of his lectures related to testing and software quality:

Did We Miss Anything?

If you have a testing resource that you love, leave us a comment below and we will add it to the list!

Change Log

  • 20190302:
    • Added Phil Koopman lecture series on testing
    • Improved grammar
    • Improved table of contents


CMocka: Enabling XML Output

I've been adding unit tests to my C-based projects with the CMocka testing framework. Now that I have tests defined, I want my build server to run the tests and track the results for each build. Jenkins requires test results to be reported in the JUnit XML format. Luckily, CMocka has built-in XML support that's quite easy to enable.

Enabling XML Output

The first step in enabling XML output is to set the CMocka output format. The easiest approach is to set the output format in your test program. This API must be called before any tests are run:

cmocka_set_message_output(CM_OUTPUT_XML);

You can also change CMocka's output behavior using the CMOCKA_MESSAGE_OUTPUT environment variable. Note that CMOCKA_MESSAGE_OUTPUT has higher precedence than the cmocka_set_message_output API, so you can override a test program's default formatting behavior.

Possible values for CMOCKA_MESSAGE_OUTPUT are stdout, subunit, tap, or xml. Options are case-insensitive.

Configuring XML Output Destination

By default, XML output will be printed to stderr. In order to write the results to a file, you must define the environment variable CMOCKA_XML_FILE. If the file already exists or cannot be created, CMocka will output the results to stderr instead.

CMOCKA_XML_FILE=testresults/libc.xml buildresults/test/libc.bin

XML Output With Multiple Test Groups

I like to organize my tests into multiple groups. This causes CMocka to generate invalid XML, as too many top-level tags exist in the file. Jenkins will fail to parse the file, and you will also find that tools like xmllint will report errors.

To work around this, you can enable CMocka to generate a separate XML file for each test group. Simply include "%g" in the filename definition. The "%g" characters will be replaced by the group name.

CMOCKA_XML_FILE=testresults/%g.xml

A Note on CMocka API Usage With XML Results

You must use the cmocka_run_group_tests or cmocka_run_group_tests_name APIs to successfully generate XML test results. All other testing APIs will result in summary output instead of XML output.
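
Tying these pieces together, here is a minimal sketch of a test program set up for XML output; the group name "libc" is illustrative:

#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <setjmp.h>
#include <cmocka.h>

static void test_example(void **state)
{
    (void)state;
    assert_int_equal(4, 2 + 2);
}

int main(void)
{
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_example),
    };

    /* Default to XML output; the CMOCKA_MESSAGE_OUTPUT environment
     * variable still takes precedence at run time. */
    cmocka_set_message_output(CM_OUTPUT_XML);

    /* The group name ("libc" here) is what replaces %g in
     * CMOCKA_XML_FILE. */
    return cmocka_run_group_tests_name("libc", tests, NULL, NULL);
}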

Putting it All Together

Here's a sample from the Embedded Artistry libc repository. I place my test results inside of my buildresults directory, which is untracked and gets cleaned by the build server.

.PHONY: test
test:
ifeq ("$(wildcard buildresults/testresults/)","")
    @mkdir -p buildresults/testresults
else
    @rm -f buildresults/testresults/*
endif
    @CMOCKA_XML_FILE=buildresults/testresults/%g.xml buildresults/x86_64_debug/test/libc.bin
