Jenkins

Unit Testing and Reporting on a Build Server using Ceedling and Unity

Today we have another guest post by Paul Shepherd, this time covering integration of Ceedling and Unity test results with Jenkins build status reporting.

Paul is the Lead Electrical Engineer at Benchmark Space Systems. He has spent time in the Aerospace, Consumer Audio, and Integrated Circuits fields, more often than not working at the intersection of hardware and software. You can contact him via LinkedIn.


In the last post, we shared a method for implementing custom build steps in the Eclipse IDE. We used this method at Benchmark Space Systems to add a firmware version to our project.

Unit testing is another best practice that we have embraced, and we are working to cover as much of our embedded code as possible. We chose Ceedling+Unity for our unit testing framework, in part because of its strong integration with the Eclipse IDE. Ceedling works well on our developer workstations in Eclipse and command prompts, and it was also straightforward to get running on our build server (Jenkins running on Ubuntu). This post focuses on the less straightforward step of capturing the unit testing results and reporting them to Jenkins.

If you are new to Ceedling (or even unit testing in general, like I am) I recommend Matt Chernosky’s eBook A Field Manual for Ceedling. His field manual enabled us to quickly understand and start using these testing tools. I’m also reading James Grenning’s book Test Driven Development for Embedded C.

Ceedling is still pre-1.0 as of February 2019. While it is quite capable, there are some areas lacking documentation, which required us to tinker under the hood to complete our integration. 

An important caveat: this blog post reflects Ceedling 0.28.3, released on August 8th, 2018. A PR has been submitted to add coverage reporting via XML output; this post shows how to hack 0.28.3 to get it working, and will be updated when that PR has been merged to master.

The content in this blog post was developed based on the following software versions:

  • Ceedling: 0.28.3

  • CException: 1.3.1.18

  • CMock: 2.4.6.217

  • Unity: 2.4.3.122

Running a Ceedling test on the local development workstation.


Running and Exporting Test Results from Ceedling

The documentation and how-to articles available for Ceedling do an excellent job of getting you to the point of running tests. There are a few additional steps needed to collect and post results during the Jenkins pipeline build process.

First, the Jenkinsfile was updated to run the test as a new build stage:

stage('Unit testing')
{
     steps
     {
          sh 'ceedling'
     }
}

Running this shell command is sufficient to report overall unit testing status because Ceedling returns an exit code based on the test results. However, if a test fails, you must manually hunt through the build log to determine the cause. Jenkins has a nice interface for reporting test results and highlighting test failures, but an XML file containing the test result data must be captured during the build process.
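The sh step fails the stage whenever the command returns a non-zero exit status. If you would rather capture the exit code and act on it yourself, the sh step's standard returnStatus option can be used; a hypothetical variant of the stage might look like this:

```groovy
stage('Unit testing')
{
     steps
     {
          script
          {
               // returnStatus: true suppresses the automatic stage failure;
               // we inspect Ceedling's exit code ourselves instead.
               def rc = sh(script: 'ceedling', returnStatus: true)
               if (rc != 0) {
                    error "Unit tests failed (ceedling exit code ${rc})"
               }
          }
     }
}
```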

In order to post the Ceedling test results in XML format, a new line must be added to Ceedling's project-specific configuration file, project.yml:

:plugins:
  :enabled:
    - xml_test_reports # <--- this line has been added
    - stdout_gtestlike_tests_report

Once the xml_test_reports argument is added to the plugins section of the configuration file, a report.xml file will be generated in the ($BUILD_DIR)/artifacts/test/ directory.

In order to parse the test results, you will need to install the xUnit plugin. A custom XML formatting style sheet is also required. We use the Jenkins-unity-xml-formatter.

The unity.xsl file can be placed anywhere in the project directory tree. The xUnit command in the Jenkinsfile must reference this file relative to the project root directory ($PROJECT_DIR).

We then add a post step in the Unit Testing Pipeline stage to capture these results:

stage('Unit testing')
{
     steps
     {
          sh 'ceedling'
     }
     post
     {
          always
          {
                xunit tools: [Custom(customXSL: 'unity.xsl', pattern: 'build/artifacts/test/report.xml', skipNoTestFiles: false, stopProcessingIfError: true)]
          }
     }
}

Generating a Code Coverage Report

Several steps are necessary to generate and post the test coverage data. The gcov plugin must be enabled in the project.yml file to generate code coverage data:

:plugins:
  :enabled:
    - gcov # <--- this line has been added
    - xml_test_reports        
    - stdout_gtestlike_tests_report

Once the gcov plugin has been enabled, it can be called at the command line by appending gcov:all to the ceedling command.

Code coverage info appended to code test results.


Unfortunately, this doesn’t actually generate the test report file. Ceedling implements the gcov functionality internally, but to create a report from this data, the Gcovr tool must be installed.

Once gcovr is installed, we add another line specifying the reporting type to the project.yml file:

:gcov:
  :html_report_type: detailed

Note that the :gcov: section should be defined at the top level. It is not a subsection of anything else in the project.yml file.
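Putting the pieces together, a sketch of the relevant portions of project.yml (combining the fragments above) looks like this:

```yaml
:plugins:
  :enabled:
    - gcov
    - xml_test_reports
    - stdout_gtestlike_tests_report

:gcov:
  :html_report_type: detailed
```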

Now that gcov and reporting are enabled in the project.yml file, we can generate a coverage report by adding an additional parameter to the Ceedling command line invocation.

$ ceedling gcov:all utils:gcov

Although this looks a bit repetitive, both parameters are necessary: gcov:all runs the test coverage analysis, and utils:gcov calls Gcovr to generate a report in HTML format.

Ceedling’s gcov plugin will only generate an HTML report unless we hack the internal plugin configuration. In order to use Gcovr to generate a Cobertura-style XML report, two files must be edited.

To add XML report generation, open the file ($PROJECT_DIR)/vendor/ceedling/plugins/gcov/config/defaults.yml. In the gcov_post_report_advanced section, the --xml argument must be added to the gcovr command, and the --html and --html-reports arguments must be removed.

Modifications to the defaults.yml file to enable XML report generation.


Next, open the file ($PROJECT_DIR)/vendor/ceedling/plugins/gcov/lib/gcov_constants.rb. Update the GCOV_ARTIFACTS_FILE variable to have a file extension of .xml instead of .html.

Modifications to the gcov_constants.rb file to enable XML report generation.

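This edit amounts to a one-line change (a sketch; the constant's value here is inferred from the report filename referenced later in the Jenkinsfile and may differ slightly in your Ceedling version):

```ruby
# vendor/ceedling/plugins/gcov/lib/gcov_constants.rb
GCOV_ARTIFACTS_FILE = 'GcovCoverageResults.xml'   # was: 'GcovCoverageResults.html'
```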

These edits are superseded by a Pull Request in the Ceedling repo, but will be necessary until the PR is merged into master.

Parsing the Code Coverage Report

Gcovr outputs a Cobertura-compliant XML report, which Jenkins can parse with the Cobertura plugin.

Our unit testing pipeline step is updated to use the new Ceedling invocation and to capture the code coverage results:

stage('Unit testing')
{
  steps
  {
    sh 'ceedling gcov:all utils:gcov'
  }
  post
  {
    always
    {
        xunit tools: [Custom(customXSL: 'unity.xsl', pattern: 'build/artifacts/gcov/report.xml', skipNoTestFiles: false, stopProcessingIfError: true)]

        cobertura coberturaReportFile: 'build/artifacts/gcov/GcovCoverageResults.xml'
    }
  }
}

There are many arguments that you can add to your xUnit and Cobertura pipeline steps in order to set healthy/unhealthy boundaries. Cobertura naturally uses percentage of total for its metrics, but for xUnit, you must specify thresholdMode: 2 for the tool to work in percentage of tests instead of absolute numbers. For unit testing, I feel that relative measures, rather than absolute measures, give a much better view of the overall health of your codebase.
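As an illustrative sketch of such thresholds (the specific percentage values here are arbitrary examples, not recommendations):

```groovy
post
{
     always
     {
          // thresholdMode: 2 interprets thresholds as percentages of tests;
          // this example marks the build failed if more than 5% of tests fail.
          xunit thresholdMode: 2,
                thresholds: [failed(failureThreshold: '5')],
                tools: [Custom(customXSL: 'unity.xsl',
                               pattern: 'build/artifacts/gcov/report.xml',
                               skipNoTestFiles: false,
                               stopProcessingIfError: true)]

          // lineCoverageTargets is 'healthy, unhealthy, failing' percentages.
          cobertura coberturaReportFile: 'build/artifacts/gcov/GcovCoverageResults.xml',
                    lineCoverageTargets: '80, 60, 40'
     }
}
```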

Finally, we see the test results and code coverage reports summarized on the build status page.

Detailed report outputs are available as links from the individual build page.


Our experience has been that bringing these metrics to our build status page keeps us motivated and simplifies communicating our work status to our stakeholders. I hope that the information we shared here is useful to you in improving your own continuous integration process. I’d also like to thank the team at Embedded Artistry for allowing me to share these tips on their blog.


Seeing Intermittent GitHub Clone Failures on Jenkins? Check Your Repo Size

One of my clients noticed occasional build failures while using Jenkins. It was a strange situation, as their builds would suddenly see a burst of failures with no apparent change. I have been using the same Jenkins setup internally for the past year, and I have never observed such behavior.

Their software builds for three different configurations using the same repository. To support these configurations, the build server runs three nightly builds and three continuous integration (CI) builds. Nightly builds run from scratch, including the clone cycle. CI builds utilize an existing environment where possible (e.g., CI for master), but will also perform a clone if a new PR is being built.

While digging into the failures, I noticed that they were tied to multiple PRs being submitted within a short period of time. Since each build failure was a git clone timeout, I was suspicious of GitHub throttling.

At first I thought we were making too many API requests, but we were well within GitHub's generous limit. I then noticed that their repository was 245MB in size, and became worried about GitHub throttling our downloads. Each new PR triggers three CI builds, which results in 245MB downloads on each server. If multiple PRs are submitted in a short span of time, I could definitely see GitHub cutting off our bits.

Further research led me to this GitHub issue which described a very similar situation, also due to large repo sizes and downloads.

To combat throttling problems with large repositories, I recommend the following settings for each build:

  1. Increase the timeout for clone/checkout operations to give yourself leeway in throttling situations (30-45min)
  2. Enable shallow clone with a depth of 1 to reduce download sizes

By applying these two changes, the intermittent clone failures were eliminated.
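In a Jenkinsfile, both settings can be applied through the git plugin's CloneOption extension (a sketch; the repository URL is a placeholder, and the 45-minute timeout is just one value within the recommended range):

```groovy
checkout([$class: 'GitSCM',
          branches: [[name: '*/master']],
          userRemoteConfigs: [[url: 'git@github.com:example/large-repo.git']],
          extensions: [[$class: 'CloneOption',
                        shallow: true,   // shallow clone...
                        depth: 1,        // ...with depth 1 to shrink downloads
                        timeout: 45]]])  // minutes of leeway for throttling
```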


Jenkins: Configuring a Linux Slave Node

Most Jenkins instances start out by using only a single build node. Many teams quickly outgrow a single node for any number of reasons, such as:

  • Increasing build throughput due to long build times
  • Increasing the number of concurrent builds that can run
  • Building and testing software on multiple architectures (e.g. x86, ARM)
  • Building and testing software for multiple OSes (Windows, Linux, OSX)
  • Creating a dedicated node which can run tests on hardware

The steps below describe the configuration process for bringing up new Linux-based nodes for use with the Jenkins build server. These steps assume that Linux is already installed on your machine.

  1. Install Dependencies
  2. Enable SSH
  3. Create new SSH Keys
  4. Add SSH Keys to GitHub
  5. Authorize Jenkins Master SSH Connections
  6. Assign a Static IP Address
  7. Create a Jenkins Directory
  8. Configuring the Node in Jenkins

Install Dependencies

In order to use your node as a Jenkins slave, you will need to install the following initial dependencies:

sudo apt-get install default-jre git

If your project requires additional dependencies, install them as well.

Enable SSH

My preferred method for connecting Jenkins to slave nodes is SSH.

With most new Linux installations, SSH is not enabled by default. You will need to install the openssh-server package:

sudo apt-get install openssh-server

Please substitute with your particular package manager if you are not using apt-get.

Create New SSH keys

Generate an SSH key for your build node:

ssh-keygen -t rsa -b 4096 -C "node-name"

You can protect the key with a passphrase, but note it down for later reference: it will be needed when adding the key to Jenkins as an SSH credential.
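For a non-interactive sketch of the same step (the -N "" empty passphrase and -f output path are illustrative; omit them to be prompted as in the command above):

```shell
# Generate the node key into a scratch directory without prompts.
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -b 4096 -C "node-name" -N "" -f "$KEYDIR/id_rsa" >/dev/null
# The public half is what gets added to GitHub in the next step:
cat "$KEYDIR/id_rsa.pub"
```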

Add SSH Keys to GitHub

If you're using GitHub, you want to add your machine's SSH key to the GitHub account used by your build server. By adding the SSH key to your GitHub account, the node will be able to check out your repositories.

For most server installs I use a standalone bot account that manages the Jenkins builds. I also like to use separate keys for each node to have more granular control over access: revoking one key won't take down all my build nodes.

Follow GitHub's documented steps for adding an SSH key to your account.

Authorize Jenkins Master SSH Connections

You must add the SSH key for the jenkins user on the master node to the authorized_keys file. The Jenkins master node will use SSH to connect to the slave nodes.

You will need to copy the contents of the Jenkins master's public SSH key. Then, use the following commands to add the key to the authorized_keys file:

vi ~/.ssh/authorized_keys
(paste contents)

If this is the first time authorized_keys has been created, be sure to set the appropriate permissions:

chmod 600 ~/.ssh/authorized_keys
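The same steps can be scripted end-to-end (a sketch; the PUBKEY value is a placeholder for the Jenkins master's actual public key):

```shell
# Placeholder key material - substitute the master's real public key.
PUBKEY="ssh-rsa AAAAB3Nza...placeholder jenkins@master"
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
# Append the key and lock down the file's permissions.
printf '%s\n' "$PUBKEY" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```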

Assign a Static IP Address

In order to ensure reliable use of our slave node, it needs to have a static IP address on the local network. The instructions for this vary greatly depending on your router, so you'll need to Google for those instructions.

In order to assign a static IP address, you will need to know the MAC address for the correct link. If your device is connected over Wi-Fi, use the Wi-Fi adapter's MAC. If it's connected over Ethernet, use the Ethernet MAC.

You can use the ifconfig command to print information about each network interface available on your machine. Look for the HWaddr entry (e.g. HWaddr AF:c6:92:10:da:2f) and use that address to assign your static IP address. On newer distributions that ship without ifconfig, ip link show provides the same information (look for the link/ether entry).

Keep the assigned IP address handy for the final configuration step.

Create a Jenkins Directory

Jenkins will need a workspace directory for executing the slave node processes and storing files.

Create a jenkins directory wherever you'd like - just make sure it is writable without sudo.

Example:

mkdir ~/jenkins

Keep this directory location handy for the next step.

Configuring the Node in Jenkins

To configure a new node, navigate to "Manage Jenkins" in the classic Jenkins interface or "Administration" in Blue Ocean. Select "Manage Nodes", then "New Node".

We want to configure a new "Permanent Agent", though you can also copy an existing slave job and replace the appropriate values.

Here are the settings to configure:

  • Number of executors: 1 or more
    • Select the number of concurrent jobs that should be allowed to run on this node
  • Remote root: Use the Jenkins workspace directory that you created
  • Labels: Add any descriptive labels you want to use
    • Examples: linux, build-x, x86
  • Select Use this node as much as possible
  • Method: Launch slave agents via SSH
  • Host: Use the static IP address of your new slave node
  • Credentials: Select the Jenkins master's private SSH key
  • Verification strategy: non-verifying
    • You are free to use more secure methods!
  • Select Keep this agent online as much as possible

If everything was configured correctly, the node can be brought online and Jenkins can start assigning jobs!
