Unit Testing and Reporting on a Build Server using Ceedling and Unity

Today we have another guest post by Paul Shepherd, this time covering integration of Ceedling and Unity with build status reporting.

Paul is the Lead Electrical Engineer at Benchmark Space Systems. He has spent time in the Aerospace, Consumer Audio, and Integrated Circuits fields, more often than not working at the intersection of hardware and software. You can contact him via LinkedIn.


In the last post, we shared a method for implementing custom build steps in the Eclipse IDE. We used this method at Benchmark Space Systems to add a firmware version to our project.

Unit testing is another best practice that we have embraced, and we are working to cover as much of our embedded code as possible. We chose Ceedling+Unity for our unit testing framework, in part because of its strong integration with the Eclipse IDE. Ceedling works well on our developer workstations in Eclipse and command prompts, and it was also straightforward to get running on our build server (Jenkins running on Ubuntu). This post focuses on the less straightforward step of capturing the unit testing results and reporting them to Jenkins.

If you are new to Ceedling (or even unit testing in general, like I am) I recommend Matt Chernosky’s eBook A Field Manual for Ceedling. His field manual enabled us to quickly understand and start using these testing tools. I’m also reading James Grenning’s book Test Driven Development for Embedded C.

Ceedling is still pre-1.0 as of February 2019. While it is quite capable, there are some areas lacking documentation, which required us to tinker under the hood to complete our integration. 

An important caveat: this post reflects Ceedling 0.28.3, released on August 8, 2018. A PR has been submitted to add coverage reporting via XML output, but until it is merged to master, this post shows how to modify the 0.28.3 release to get that working. The post will be updated when the PR has been merged.

The content in this blog post was developed based on the following software versions:

  • Ceedling: 0.28.3

  • CException: 1.3.1.18

  • CMock: 2.4.6.217

  • Unity: 2.4.3.122

Running a Ceedling test on the local development workstation.

Running and Exporting Test Results from Ceedling

The documentation and how-to articles available for Ceedling do an excellent job of getting you to the point of running tests. There are a few additional steps needed to collect and post results during the Jenkins pipeline build process.

First, the Jenkinsfile was updated to run the test as a new build stage:

stage('Unit testing')
{
     steps
     {
          sh 'ceedling'
     }
}

Running this shell command is sufficient to report overall unit testing status because Ceedling returns an exit code based on the test results. However, if a test fails, you must manually hunt through the build log to determine the cause. Jenkins has a nice interface for reporting test results and highlighting test failures, but an XML file containing the test result data must be captured during the build process.

To post the Ceedling test results in XML format, a new line must be added to Ceedling's project-specific configuration file, project.yml:

:plugins:
  :enabled:
    - xml_test_reports # <--- this line has been added
    - stdout_gtestlike_tests_report

Once xml_test_reports has been added to the plugins section of the configuration file, a report.xml file will be generated in the ($BUILD_DIR)/artifacts/test/ directory.

In order to parse the test results, you will need to install the xUnit plugin. A custom XML formatting style sheet is also required. We use the Jenkins-unity-xml-formatter.

The unity.xsl file can be placed anywhere in the project directory tree. The xUnit command in the Jenkinsfile must reference this file relative to the project root directory (($PROJECT_DIR)).

We then add a post step in the Unit Testing Pipeline stage to capture these results:

stage('Unit testing')
{
     steps
     {
          sh 'ceedling'
     }
     post
     {
          always
          {
                xunit tools: [Custom(customXSL: 'unity.xsl', pattern: 'build/artifacts/test/report.xml', skipNoTestFiles: false, stopProcessingIfError: true)]
          }
     }
}

Generating a Code Coverage Report

Several steps are necessary to generate and post the test coverage data. The gcov plugin must be enabled in the project.yml file to generate code coverage data:

:plugins:
  :enabled:
    - gcov # <--- this line has been added
    - xml_test_reports        
    - stdout_gtestlike_tests_report

Once the gcov plugin has been enabled, it can be called at the command line by appending gcov:all to the ceedling command.

Code coverage info appended to code test results.

Unfortunately, this doesn't actually generate the test report file. Ceedling implements the gcov functionality internally, but to create a report from this data, the gcovr tool must be installed.

Once gcovr is installed, we add another line specifying the reporting type to the project.yml file:

:gcov:
  :html_report_type: detailed

Note that the :gcov: section should be defined at the top level. It is not a subsection of anything else in the project.yml file.

Now that gcov and reporting are enabled in the project.yml file, we can generate a coverage report by adding an additional parameter to the Ceedling command line invocation.

$ ceedling gcov:all utils:gcov

Although this looks a bit repetitive, both parameters are necessary: gcov:all runs the test coverage analysis, and utils:gcov calls Gcovr to generate a report in HTML format.

Ceedling’s gcov plugin will only generate an HTML report unless we hack the internal plugin configuration. To use gcovr to generate a Cobertura-style XML report, two files must be edited.

To add XML report generation, open the file ($PROJECT_DIR)/vendor/ceedling/plugins/gcov/config/defaults.yml. In the gcov_post_report_advanced section, the --xml argument must be added to the gcovr command, and the --html and --html-reports arguments must be removed.
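As a rough sketch, the edited tool definition might look like the following. This is illustrative rather than a verbatim copy of the shipped file: the surrounding keys and argument list vary between Ceedling versions, and only the flag change matters (gcovr's Cobertura output flag is --xml):

```yaml
:gcov_post_report_advanced:
  :executable: gcovr
  :arguments:
    # ...existing path/filter/output arguments left unchanged...
    - --xml # <--- this line has been added
    # the --html reporting arguments that were here have been removed
```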

Modifications to the defaults.yml file to enable XML report generation.

Next, open the file ($PROJECT_DIR)/vendor/ceedling/plugins/gcov/lib/gcov_constants.rb. Update the GCOV_ARTIFACTS_FILE variable to have a file extension of .xml instead of .html.
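The change itself is a one-line edit to that constant. A sketch, assuming the stock filename (the value must match the coberturaReportFile path used in the Jenkinsfile later in this post):

```ruby
# In vendor/ceedling/plugins/gcov/lib/gcov_constants.rb (sketch; the
# surrounding constants are omitted). Only the extension changes, so
# gcovr writes the XML report where the Cobertura plugin expects it.
GCOV_ARTIFACTS_FILE = 'GcovCoverageResults.xml' # was: 'GcovCoverageResults.html'
```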

Modifications to the gcov_constants.rb file to enable XML report generation.

These edits are superseded by a Pull Request in the Ceedling repo, but will be necessary until the PR is merged into master.

Parsing the code coverage report

Gcovr outputs a Cobertura-compliant XML report, which Jenkins can parse with the Cobertura plugin.

Our unit testing pipeline step is updated to use the new Ceedling invocation and to capture the code coverage results:

stage('Unit testing')
{
  steps
  {
    sh 'ceedling gcov:all utils:gcov'
  }
  post
  {
    always
    {
        xunit tools: [Custom(customXSL: 'unity.xsl', pattern: 'build/artifacts/gcov/report.xml', skipNoTestFiles: false, stopProcessingIfError: true)]

        cobertura coberturaReportFile: 'build/artifacts/gcov/GcovCoverageResults.xml'
    }
  }
}

There are many arguments you can add to your xUnit and Cobertura pipeline steps to set healthy/unhealthy boundaries. Cobertura naturally uses percentage-of-total for its metrics, but for xUnit you must specify thresholdMode: 2 for the tool to work in percentage of tests rather than absolute numbers. For unit testing, I feel that relative measures, rather than absolute measures, give a much better view of the overall health of your codebase.
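For illustration, a threshold-enabled version of the post steps might look like the following. The parameter names follow the xunit and Cobertura plugin pipeline syntax, but the specific threshold values are arbitrary assumptions to tune for your own project:

```groovy
post
{
  always
  {
    // Mark the build unstable if more than 5% of tests fail, failed
    // beyond 10% (thresholdMode: 2 makes these percentages, not counts)
    xunit thresholdMode: 2,
          thresholds: [failed(unstableThreshold: '5', failureThreshold: '10')],
          tools: [Custom(customXSL: 'unity.xsl',
                         pattern: 'build/artifacts/gcov/report.xml',
                         skipNoTestFiles: false,
                         stopProcessingIfError: true)]

    // Health targets for line coverage (healthy, unhealthy, failing %)
    cobertura coberturaReportFile: 'build/artifacts/gcov/GcovCoverageResults.xml',
              lineCoverageTargets: '80, 0, 0'
  }
}
```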

Finally, we see the test results and code coverage reports summarized on the build status page.

Detailed report outputs are available as links from the individual build page.

Our experience has been that bringing these metrics to our build status page keeps us motivated and simplifies communicating our work status to our stakeholders. I hope that the information we shared here is useful to you in improving your own continuous integration process. I’d also like to thank the team at Embedded Artistry for allowing me to share these tips on their blog.


Hypotheses on Systems and Complexity

A famous John Gall quote from Systemantics became known as Gall's Law. The law states:

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.

I've always felt the truth of this idea. Gall's Law inspired me to think about the evolution of complexity in systems from different perspectives. I've developed five hypotheses in this area:

  1. A simple system that works (and is maintained) will inevitably grow into a complex system.
  2. The tendency of the Universal System is a continual increase in complexity.
  3. A simple system must increase in complexity or it is doomed to obsolescence and/or failure.
  4. A system's complexity level starts at the complexity of the local system/environment in which it participates.
  5. A working system will eventually collapse due to unmanageable complexity.

I call these ideas "hypotheses" because they are born of late-night thoughts while watching my newborn child. They have not been put through sufficient research or testing for me to call them "axioms", "laws", or "rules of thumb". These ideas may already exist in the systems canon, but I have not yet encountered them.

The Hypotheses in Detail

Let's look at each of these hypotheses in turn, then we can discuss their implications for our projects.

Hypothesis 1: Simple Systems Become Complex

My first hypothesis is fully stated as follows:

A simple system that works (and is maintained) will inevitably grow into a complex system.

This is a restatement of Gall's Law from a different perspective. I believe that a working simple system is destined to become more complex.

This hypothesis is opposed to another systems maxim (quoted from Of Men and Laws):

A working system (and by happy accident, systems sometimes work) should be left alone.

Unfortunately, this recommendation is untenable for two reasons:

  1. Human beings are not disciplined enough to leave a working system alone.
  2. If a working system is not maintained, it will inevitably become obsolete according to Hypothesis 3.

Humans are the ultimate tinkerers. We are never satisfied with the status quo. We have the tendency to expand or modify a system's features and behaviors once we consider it to be "working" (and even if it's not working). Our working systems are destined to increase in complexity thanks to our endless hunger.

Hypothesis 2: Universal complexity is always increasing

My second hypothesis is fully stated as follows:

The tendency of the Universal System is a continual increase in complexity.

At its core, I believe that Hypothesis 2 is simply a restatement of the Second Law of Thermodynamics, but I include it for use with other hypotheses below.

The Second Law of Thermodynamics states that the total entropy of an isolated system can never decrease over time. Thanks to the Second Law of Thermodynamics, all processes in the universe trigger an irreversible increase in the total entropy of a system and its surroundings.

Rudolf Clausius provides us with another perspective on the Second Law of Thermodynamics:

[...] we may express in the following manner the fundamental laws of the universe which correspond to the two fundamental theorems of the mechanical theory of heat.

  1. The energy of the universe is constant.
  2. The entropy of the universe tends to a maximum.

I have an inkling that complexity and entropy are closely related concepts, if not actually the same. As such, I assume that the complexity of the Universal System will increase over time.

The reason that I think complexity increases over time is that I can observe this hypothesis in other sciences and directly in the world around me:

  • After the big bang, simple hydrogen coalesced into stars (and planets and solar systems and galaxies), forming increasingly complex elements as time progressed
  • Life progressed from simple single-celled organisms to complex networked species consisting of hundreds of sub-systems
  • Giving birth progressed from a natural, body-driven affair to one of complex rituals that is carried out by a large team of experts at great cost in specialized locations (i.e., hospitals)
  • Finance has progressed from exchanging metal coins and shells to a complex, automated, digitized, international system of rules and cooperating systems

Corollary: Complexity must be preserved

The idea exists that complexity can be reduced:

An evolving system increases its complexity unless work is done to reduce it.
-- Meir Lehman

Or:

Ongoing development is the main source of program growth, but programs are also entropic. As they age, they tend to become more cluttered. They get larger and more complicated unless pressure is applied to make them simpler.
-- Jerry Fitzpatrick

Because of the Second Law of Thermodynamics, we cannot reverse complexity. We are stuck with the existing environment, requirements, behaviors, expectations, customers, resources, etc.

Energy must be invested to perform any "simplification" work, which means that there is a complexity-entropy increase in some part of the system. Perhaps you successfully "simplified" your product's hardware design so that it's easier to assemble in the factory. What other sub-systems saw increased complexity as a result: supply chain, tooling design, engineering effort, mechanical design, repairability?

Complexity must be preserved - we only move it around within the system.

Hypothesis 3: Simple Systems Must Evolve

Hypotheses 1 and 2 combine into a third hypothesis:

A simple system must increase in complexity or it is doomed to obsolescence and/or failure.

The systems we create are not isolated; they are always interconnected with other systems. And as one of John Gall's "Fundamental Postulates of General Systemantics" states, "Everything is part of a larger system."

The Universal System is always increasing in complexity-entropy, as are all subsystems by extension. Because of the ceaseless march toward increased complexity, systems are forced to adapt to changes in the complexity of the surrounding systems and environment. Any system which does not evolve will eventually be unable to cope with the new level of complexity and will implode.

The idea of "code rot" demonstrates this idea:

Software rot, also known as code rot, bit rot, software erosion, software decay or software entropy is either a slow deterioration of software performance over time or its diminishing responsiveness that will eventually lead to software becoming faulty, unusable, or otherwise called "legacy" and in need of upgrade. This is not a physical phenomenon: the software does not actually decay, but rather suffers from a lack of being responsive and updated with respect to the changing environment in which it resides.

I've seen it happen enough on my own personal projects. You can take a working software project without errors, put it into storage, pull it out years later, and it will no longer compile and run. This could be for any number of reasons: the language changed, the compiler is no longer available, the libraries or tooling needed to build and use the software are no longer available, the underlying processor architectures have changed, etc.

Our "simple" systems will never truly remain so. They must be continually updated to remain relevant.

Hypothesis 4: "Simple" is Determined by Local Complexity

Hypothesis 2 drives the fourth hypothesis:

A system's complexity level starts at the complexity of the local system/environment in which it participates.

Stated in another way:

A system cannot have lower complexity than the local system in which it will participate.

Hypothesis 2 indicates that a local (and universal) lower bound for simplicity exists. Stated another way, your system has to play by the rules of other systems it interacts with. The more external systems your system must interact with, the more complex the starting point.

We can see this by looking at the world around us. Consider payment processing as an example. You can't start over with a "simple" payment application: the global system is too complex and has too many specific requirements. There are banking regulations, credit card regulations, security protocols, communication protocols, authentication protocols, etc. Your payment processor must work with the existing banking ecosystem.

Now, you could ignore these requirements and create a new payment system altogether (e.g., Bitcoin), but you are not actually participating in the same local system (international banking). Even still, the Universal System's complexity is higher than your system's local complexity, and players know the game. You can skip the authentication requirements or other onerous burdens, but external actors can still take advantage of your system (e.g., Bitcoin thefts, price manipulation, lost keys leading to un-claimable money).

Once complexity has developed, we are stuck with it. We can never return to simplicity. I can imagine a time when the Universal System's complexity level will be so high that humans will no longer have the capacity to create or manage any systems.

Hypothesis 5: Working Systems Eventually Collapse

Hypothesis 5 is fully stated as follows:

A working system will eventually collapse due to unmanageable complexity.

Complexity is always increasing, and there is nothing we can do to stop it. There are two complexity-related failure modes for our system:

  1. Our system becomes so complex that we can no longer maintain it (there are no humans who can understand and master the system)
  2. Our system cannot adapt fast enough to keep up with the local/universal system's increases in complexity

While we cannot forever prevent the collapse of our system, we can impact the timeframe through system design and complexity management efforts. We can strive to reduce the rate of complexity increase to a minimal amount. However, as the complexity of the system increases, the effort required to sustain the system also increases. As time goes on, our systems require more energy to be spent on documentation, hiring, training, refactoring, and maintenance.

We can see systems all around us which become too complex to truly understand (e.g., the stock market). Unfortunately, Western governments seem to be reaching a complexity breaking point, as they have become so complex they can't enact policy. To quote Matt Levine's Money Stuff newsletter:

What if your model is that democratic political governance has just stopped working—not because you disagree with the particular policies that particular elected governments are carrying out, but because you have started to notice that elected governments in large developed nations are increasingly unable to carry out any policies at all?

Perhaps unmanageable complexity doomed the collapsed civilizations that preceded us. Given that thought, what is the human race's limit on complexity management? We've certainly extended our ability to handle complexity through the development of computers and algorithms, but there will come a time when the complexity is too much for us to handle.

Harnessing these ideas

These five hypotheses are one master hypothesis broken into different facets which we can analyze. The overall hypothesis is:

The Second Law of Thermodynamics tells us that our systems are predestined to increase in complexity until they fail, become too complex to manage, or are made obsolete. We can manage the rate of increase of complexity, but never reverse it.

The hypotheses described herein do not contradict the idea that our systems should be kept as simple as possible. Simplicity is still an essential goal. However, we must realize that the increase in complexity is inevitable and irreversible. We must actively work to prevent complexity from increasing faster than we can manage it.

Here are some key implications of these ideas for system builders:

  • If your system isn’t continually evolving and increasing in complexity, it will collapse
  • You can extend the lifetime of your system by investing energy to manage system complexity
  • You can extend the lifetime of your system by continually introducing and developing new acolytes who understand and can maintain your system
    • This enables collective management of complexity and transfer of knowledge about the system
  • You can extend the lifetime of your system by giving others the keys to understanding your system (documentation, training)
    • This enables others to come to terms with the complexity of your system
  • You can never return to "simplicity" - don't consider a "total rewrite" effort unless you are prepared to scrap the entire system and begin again
  • These hypotheses speak to why documentation becomes such a large burden
    • Documentation becomes part of the overall system's complexity, requiring a continual increase in resources devoted to managing it

Developing a skillset in Complexity Management is essential for system designers and maintainers.
