Seeing Intermittent GitHub Clone Failures on Jenkins? Check Your Repo Size

One of my clients noticed occasional build failures while using Jenkins. It was a strange situation, as their builds would suddenly see a burst of failures with no apparent change. I have been using the same Jenkins setup internally for the past year, and I have never observed such behavior.

Their software builds for three different configurations using the same repository. To support these configurations, the build server runs three different nightly builds and three continuous integration (CI) builds. Nightly builds run from scratch, including the clone step. CI builds reuse an existing environment where possible (e.g. CI for master), but they perform a fresh clone when building a new PR.

While digging into the failures, I noticed that they were tied to multiple PRs being submitted within a short period of time. Since each build failure was a git clone timeout, I was suspicious of GitHub throttling.

At first I thought we were making too many API requests, but we were well within GitHub's generous limit. I then noticed that their repository was 245MB in size, and I became worried about GitHub throttling our downloads. Each new PR triggers three CI builds, which results in 245MB downloads on each server. If multiple PRs are submitted in a short span of time, I could definitely see GitHub cutting off our bits.

Further research led me to this GitHub issue, which described a very similar situation, also caused by large repository sizes and downloads.

To combat throttling problems with large repositories, I recommend the following settings for each build (an example follows the list):

  1. Increase the timeout for clone/checkout operations to give yourself leeway in throttling situations (30-45 minutes)
  2. Enable shallow clone with a depth of 1 to reduce download sizes
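
For reference, here's what the shallow clone looks like on the git command line (the repository URL is a placeholder); in Jenkins, these same options are exposed through the Git plugin's clone settings:

git clone --depth 1 https://github.com/example/large-repo.git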

By applying these two changes, the intermittent clone failures were eliminated.

Getting Started with the Snapdragon Flight: Driver Development

Earlier this year I was tasked with figuring out how to write a custom device driver for the Snapdragon Flight.

While the process ended up being straightforward, documentation and pointers are largely lacking for the Snapdragon Flight environment. What follows is a summary of the information I learned on my driver development journey. I hope to speed up future driver developers by providing a starting point.

If you're just getting started with the Snapdragon Flight, check out this article for development environment setup and useful resources.

Table of Contents

  1. A Brief Overview
  2. DriverFramework
    1. DriverFramework Overview
    2. Device Driver Implementation
    3. Starting and Stopping Device Drivers
    4. Using our DSP Device Driver
  3. DSPAL
  4. Loading Files
  5. Helpful Notes
    1. Sleeping
    2. Hexagon SDK Unsupported Software Features
    3. Helpful IDL Notes

A Brief Overview

Before we dive into the specifics of implementing drivers on the Snapdragon Flight, we need to have a basic understanding of how the application processor (AP) and digital signal processor (DSP) interact.

Communication between the AP and DSP happens through an RPC mechanism. The default mechanism provided in the Hexagon SDK is FastRPC, which utilizes a serial link between the AP and DSP. A Qualcomm IDL compiler is used to generate function stubs for the AP and DSP. The generated functions are implemented on the DSP side and can be called by the AP side. This IDL/RPC mechanism is the way that your application will interact with drivers and other software running on the DSP.

On the Snapdragon Flight, all access to hardware peripherals is limited to the DSP. There is no direct access to peripherals from the AP. In order to talk to a hardware device from the AP, you must write a device driver that will run on the DSP. The AP-side program can utilize the supplied RPC mechanisms to call DSP functions and retrieve data.

Code intended to run on the DSP must be compiled as a shared library (.so). The DSP libraries are found in /usr/share/data/adsp/ by default. Any shared libraries located in this folder will be loaded and executed on the DSP.

The DSP runs Qualcomm's proprietary QuRT RTOS. You can't access the DSP code directly, but Qualcomm provides a DSP abstraction layer (DSPAL) API. Device drivers and other DSP software use the DSPAL as their base layer.

DriverFramework

The simplest way to start developing your device drivers is to use the DriverFramework project, which is based on the PX4 DriverFramework. DriverFramework is the approach I took for my own device driver development.

DriverFramework is built upon the Hexagon DSPAL and provides a framework for managing multiple device drivers. The framework is compiled into a shared library that runs on the Hexagon DSP. You can define custom functions in a Qualcomm IDL file, as described above. The DSP library must implement the custom IDL functions. A user application running on the AP can then call these functions to interact with your custom drivers.

DriverFramework comes with a few device driver examples that can be used as a reference. Some drivers, such as the BMP280, work on the Snapdragon Flight and can be directly used.

DriverFramework Overview

The main framework classes are:

  • Framework:
    • Used to start and stop the driver framework
  • DevMgr:
    • Registers and unregisters device drivers
    • Gets and releases DevHandle objects
  • WorkMgr:
    • Used by drivers to schedule periodic tasks and to create and destroy WorkHandles
  • DevObj:
    • The base class of all drivers
    • Defines the periodic callback method virtual void _measure()

The DriverFramework core consists of one worker thread (class HRTWorkQueue) that periodically executes virtual void DevObj::_measure(), which each device driver implements to update its data.

The DriverFramework supports two methods for interacting with drivers:

  1. Calling member functions on the C++ driver instance
  2. Accessing the device handle (e.g. /dev/iic-0/baro0) and calling POSIX functions (ioctl, read, write)

The device handle enables you to access the driver via a device path from anywhere in the code, without requiring direct access to the driver instance:

DevHandle h;
DevMgr::getHandle("/dev/gyro0", h); // Starts the driver

SomeDataStruct data[3];
int ret = h.read(data, sizeof(data));
if (ret < 0) {
    printf("Error read failed (%d)\n", h.getError());
}
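
write() and ioctl() calls work through the handle in the same way. Here's a sketch, with SOME_DRIVER_CMD and arg standing in for a driver-defined ioctl command and its argument:

int result = h.ioctl(SOME_DRIVER_CMD, arg);
if (result < 0) {
    printf("Error: ioctl failed (%d)\n", h.getError());
}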

Device Driver Implementation

The framework provides three base driver classes:

  • VirtDevObj: Provides a base class for simulated drivers
  • I2CDevObj: Provides a base class for I2C drivers
  • SPIDevObj: Provides a base class for SPI drivers

If you're implementing an SPI or I2C device, your driver should inherit from the corresponding base class above. The base classes provide functions that your driver can use to talk over the I2C or SPI bus.

Higher-level sensor classes are also defined inside of the framework:

  • ImuSensor
  • MagSensor
  • BaroSensor

In order to create your device driver, you need to inherit from one of these base classes (or DevObj at a minimum). For example:

#define I2CMUX_BASE_PATH  "/dev/i2cmux"

class I2CMux : public I2CDevObj
{
public:
    I2CMux(const char *device_path, uint32_t channels, unsigned int sample_interval_usec) :
        I2CDevObj("i2cMux", device_path, I2CMUX_BASE_PATH, sample_interval_usec), max_ch_(channels)
    {}

// etc…
};

Note the I2CMUX_BASE_PATH argument above. This is the base device path used for accessing the device, such as /dev/iic or /dev/i2cmux. The first device initialized with a given base path is created as /dev/i2cmux0; a second driver initialized with the same base path becomes /dev/i2cmux1, a third /dev/i2cmux2, and so on.

The device_path argument tells us what our parent device path is. For an I2C Mux, our parent might be /dev/iic-0 or /dev/iic-1.

Each driver must also specify a sample_interval_usec argument, which controls the periodicity of the _measure() function, a callback that is scheduled for each driver. For example, if we want to read from our accelerometer every 50ms, sample_interval_usec should be specified as 50000 (usec). Any periodic work, such as reading from the accelerometer, interpreting the result, and adding it to a queue, should happen in the _measure() function.
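
As an illustration, a periodic _measure() might look like the following sketch. AccelDriver, ACCEL_DATA_REG, struct accel_data, and _publish() are hypothetical names, and _readReg() is assumed to be one of the I2C helpers provided by I2CDevObj:

void AccelDriver::_measure()
{
    struct accel_data data;

    // Read the latest sample from the sensor over the I2C bus
    int ret = _readReg(ACCEL_DATA_REG, (uint8_t *)&data, sizeof(data));

    if (ret < 0) {
        DF_LOG_ERR("error: accel read failed (%d)", ret);
        return;
    }

    // Interpret the result and hand it off to consumers (hypothetical helper)
    _publish(data);
}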

In some cases, such as the I2CMux example above, a periodic callback is not needed; we only interact with the mux when the channel configuration needs to change. In that case, simply supply an empty _measure() function:

void I2CMux::_measure()
{
    return;
}

Note that the sample_interval_usec cannot be set to 0, so for devices that don't need the periodic callback, just set it to a large interval.

Each driver must also supply a start() and stop() function. Note that each driver is responsible for starting and stopping its parent instance. Using our I2CMux example, we must manually start our I2CDevObj parent:

int I2CMux::start()
{
    int result = I2CDevObj::start();

    if (result != 0)
    {
        DF_LOG_ERR("error: could not start I2C parent: %d", result);
        return result;
    }

    return DevObj::start();
}

Aside from these basic framework requirements, you can implement member functions as you would with any other C++ class. While the _measure() function is called automatically by the framework, you can supply any particular interface you want through the driver object.

Starting and Stopping Device Drivers

By default, the driver is initialized and started the first time a handle (DevHandle) is opened to the device (if it is not running already). It keeps running when the last handle is released.

However, the use of a handle to access the device is optional. The driver can be explicitly started or stopped using start() or stop(). To manually start a driver, make sure to call init() and then start():

myMux.init();
myMux.start();

To stop the driver, simply call stop():

myMux.stop();

Your driver's member functions will fail if you forget to call start() on your driver or fail to start() your parent class.

Using our DSP Device Driver

The test folder inside of the DriverFramework project shows an example framework application. You can use the test/qurt project as a launch point for your own DriverFramework application.

You can define a QURT_BUNDLE to generate the artifacts for an AP/DSP combo:

QURT_BUNDLE(APP_NAME df_testapp
    APPS_SOURCES df_testapp.c
    DSP_SOURCES
        df_testapp_dsp.cpp
        ../test.cpp
    DSP_LINK_LIBS 
        df_driver_framework
        df_framework_test
        ${df_link_libs}
    DSP_INCS ${CMAKE_SOURCE_DIR}/framework/include
    APPS_COMPILER ${ARM-LINUX-GNUEABIHF-GCC}
    )

Any APPS_SOURCES will be compiled into a binary and loaded into /home/linaro by default.

Any DSP_SOURCES will be compiled into a shared library and loaded to /usr/share/data/adsp by default. You can also link in other libraries using DSP_LINK_LIBS, such as the DriverFramework itself (df_driver_framework) and any drivers you might need (e.g. df_i2cmux or df_bmp280).

The QURT_BUNDLE uses the APP_NAME argument to find a matching IDL file (e.g. df_testapp.idl). This IDL file defines the interface between the AP and DSP:

#ifndef DF_TESTAPP_IDL
#define DF_TESTAPP_IDL

#include "AEEStdDef.idl"

interface df_testapp{
    int32 do_test();
};

#endif /*DF_TESTAPP_IDL*/

In the above file, we create a function called do_test(). The function name is prefixed with the interface name, resulting in a final function of df_testapp_do_test(). Our DSP code must implement this function:

int32 df_testapp_do_test()
{
    LOG_MSG("Starting df_testapp");

    return doTest();
}

int doTest()
{
    int ret = Framework::initialize();

    if (ret < 0) {
        DF_LOG_ERR("Framework::initialize() failed");
        return ret;
    }

    DFFrameworkTest df;

    bool tests_ok = df.doTests();

    Framework::shutdown();

    return (tests_ok ? 0 : 1);
}

Our DSP code also needs to declare our driver objects and ensure that the framework is initialized. You can statically allocate drivers, but they must be initialized before use.

// J9 connector -> I2C-2
#define I2CMUX_DEVICE_PATH "/dev/iic-2"

// Parent path, addr, channel count
I2CMux mux0(I2CMUX_DEVICE_PATH, 0x70, 8);
I2CMux mux1(I2CMUX_DEVICE_PATH, 0x71, 8);

Our AP side code can call the IDL functions to interact with the DSP:

int main()
{
    printf("Running DF unit test on DSP\n");
    return df_testapp_do_test();
}

We can supply any number of interfaces between the AP and DSP. Just keep in mind that the DSP side is responsible for managing the device drivers, and the AP side can use the IDL functions to control behavior or retrieve data.

DSPAL

The DriverFramework project comes with an operating model that may not make sense for your purposes. The DSPAL APIs give you more direct control for building your own single-driver library or custom driver framework.

The DSP Abstraction Layer (DSPAL) provides a standard interface for porting code to the Hexagon processor. Many familiar POSIX APIs are included, such as pthread, timer, semaphore, and signals. The DSPAL also provides hardware abstractions for:

  • GPIO
  • PWM
  • Serial
  • I2C
  • SPI

Loading Files

Remember that our DSP libraries must be loaded to /usr/share/data/adsp/. AP-side programs can be run from anywhere; their location is not particularly important.

The cmake_hexagon project supplies macros to enable file transfers as part of the build process. These provide a *-load build target, which can be run from the CMake build directory. For example:

cd build_qurt 
make df_custom-load

If you want to manually load files, you can use adb:

adb push driver_framework.so /usr/share/data/adsp
adb push df_custom /home/linaro

Helpful Notes

I ran into quite a few problems while implementing my first drivers on the Snapdragon Flight. Here are some important notes to keep in mind.

Sleeping

We often want to call a function to sleep() or delay() when we're interacting with hardware.

For DriverFramework, the correct call is usleep() (implemented in DSPAL). Time is specified in microseconds.
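
For example, to delay for 50ms:

usleep(50 * 1000); // usleep() takes microseconds: 50,000us == 50ms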

Hexagon SDK Unsupported Software Features

At the time of this writing, the Hexagon SDK supported by the ATLFlight projects is pretty old. C++11 features are nominally supported, but many are missing.

Check this list if you are running into any problems with missing symbols. This list is not complete, but simply contains the functions that caused me problems.

Missing C++ Features:

  • std::tie (not implemented)
  • std::unique_ptr
  • std::shared_ptr
  • tuple (no header)
  • mutex (no header defined)
  • ifstream (missing function dependencies)
  • ofstream (missing function dependencies)
  • stringstream (missing function dependencies)
  • isnan (Dtest not defined)
  • isinf (Dtest not defined)

Missing C Features:

  • fseek (not defined)
  • ftell (not defined)
  • fputc (not defined - stub defined in elisa.cpp to work with JSON parsing)

Helpful IDL Notes

Always use the type int32 for the return type of your IDL functions. Using a boolean return type caused RPC memory not to be returned correctly from the DSP to the AP.

The in, rout, and inrout qualifiers used in the IDL have special meanings (see the example after this list):

  • Declaring a buffer as in results in the following behavior:
    1. AP flushes the cache for the buffer
    2. AP makes RPC call
    3. DSP invalidates the cache for the buffer before reading it
  • Declaring a buffer as rout results in the following behavior:
    1. AP makes RPC call
    2. DSP flushes the cache after writing to the buffer
    3. AP invalidates the cache for the buffer before reading it
  • Declaring a buffer as inrout results in the following behavior:
    1. AP flushes the cache for the buffer
    2. AP makes RPC call
    3. DSP invalidates the cache for the buffer before reading it
    4. DSP updates the buffer and flushes the cache
    5. AP invalidates the cache for the buffer before reading it
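
For illustration, here's how these qualifiers might appear in an interface definition (df_sensor, write_config(), and read_data() are hypothetical names):

#include "AEEStdDef.idl"

interface df_sensor{
    // in: AP flushes the buffer, DSP invalidates it before reading
    int32 write_config(in sequence<uint8> config);
    // rout: DSP flushes after writing, AP invalidates before reading
    int32 read_data(rout sequence<uint8> data);
};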

Generating GStreamer Pipeline Graphs

I've been working with GStreamer quite a bit recently. I often find it difficult to visualize the pipelines I'm working with, especially for complex pipelines involving multiple video streams. I've been manually creating my own pipeline graphs to keep the details straight. However, maintaining these graphs is time-consuming and error-prone.

Recently I discovered that GStreamer has a debugging feature that automatically generates pipeline graphs. The generated graphs are not as beautiful as my painstakingly-created custom graphs, but automatic and instantaneous graph generation wins every time. I also discovered that the GStreamer pipeline graphs reveal hidden elements that are created under the hood, giving me a more comprehensive view of the pipelines I'm working with.

Table of Contents:

  1. Dependencies
  2. Generating GStreamer Pipeline Graphs
    1. GStreamer Application Macros
  3. Converting Pipeline dot Files to PDF
    1. Bulk Conversion Script
  4. An Example Pipeline
  5. Further Reading

Dependencies

Before we can get started, we'll need to cover our dependency situation. I'm assuming you already have GStreamer installed on your system (otherwise this guide is of no use to you).

The only dependency we'll need to install is Graphviz. GStreamer will generate .dot files for our pipeline, and we'll use Graphviz to convert those .dot files into an image or PDF.

If you're on a Debian-based Linux system, simply run:

sudo apt-get install graphviz

If you're using OSX, you can install Graphviz using brew:

brew install graphviz

One point to note: the program that is installed with the Graphviz package is called dot, not graphviz.
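
You can quickly verify the installation by printing the version:

dot -V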

Generating GStreamer Pipeline Graphs

Regardless of whether you're using gst-launch-1.0 or a GStreamer application, the first thing we'll need to do is define the GST_DEBUG_DUMP_DOT_DIR environment variable. GStreamer uses this environment variable as the output location for the generated pipeline graphs.

You can either define this globally with export:

export GST_DEBUG_DUMP_DOT_DIR=build/pipeline/

Or you can define it just for a single invocation:

GST_DEBUG_DUMP_DOT_DIR=/tmp gst-launch-1.0 {…}

GST_DEBUG_DUMP_DOT_DIR=/tmp ./custom_application

If the directory does not exist, GStreamer will not create it. You'll need to do that on your own.
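
For example, to create the output directory used in the export example above:

mkdir -p build/pipeline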

If you're using gst-launch-1.0, that's all you need to do - pipeline graphs will be generated during every state change.

GStreamer Application Macros

If you're using a custom GStreamer application, you'll need to use GStreamer debug macros to trigger pipeline generation.

For instance, to see a complete pipeline graph, add the following macro invocation at the point in your application where your pipeline elements have been created and linked:

GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "pipeline");

You can use the GST_DEBUG_BIN_TO_DOT_FILE() and GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS() macros to trigger pipeline graph output at desired points.
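
The _WITH_TS variant adds a timestamp to the generated filename, which prevents successive dumps from overwriting one another:

GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "pipeline");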

Pipeline Graph Output

If you're using gst-launch-1.0, a new pipeline graph will be generated on each pipeline state change. This can be especially helpful if you want to see how your pipeline evolves during caps negotiation.

Here's a list of files that were generated by gst-launch-1.0 using the example pipeline:

0.00.00.328088000-gst-launch.NULL_READY.dot
0.00.00.330350000-gst-launch.READY_PAUSED.dot
0.00.02.007860000-gst-launch.PAUSED_PLAYING.dot
0.00.05.095596000-gst-launch.PLAYING_PAUSED.dot
0.00.05.104625000-gst-launch.PAUSED_READY.dot

If you are using a custom GStreamer app, pipeline files are only generated when you invoke the GST_DEBUG_BIN_TO_DOT_FILE() macros. If you follow the recommendation above, you'll get a single pipeline graph; multiple graphs are only generated if you invoke the macro multiple times.

Converting Pipeline dot Files to PDF

Now that we have the generated pipeline graphs, we need to convert them to a graphical format.

We'll use the dot command for the conversion. The general form is:

dot -T{format} input_file > output_file

I like to use PDFs for my pipeline graphs, as I can actually zoom in and read the text without artifacts. To convert our pipeline graphs to PDF files, I use the following command:

dot -Tpdf 0.00.05.104625000-gst-launch.PAUSED_READY.dot > pipeline_PAUSED_READY.pdf

Graphviz supports a variety of output types, so don't feel constrained to a PDF! Select the type that works best for you.

Bulk Conversion Script

Since I often use gst-launch-1.0 for pipeline testing, I want to convert pipeline graphs in bulk. Here's a script that I use to convert all pipeline files in a directory:

#!/bin/sh

MESON_BUILD_ROOT="${MESON_BUILD_ROOT:-build}"
INPUT_DIR="${INPUT_DIR:-$MESON_BUILD_ROOT/pipeline}"

if [ -d "$INPUT_DIR" ]; then
    DOT_FILES=$(find "$INPUT_DIR" -name "*.dot")
    for file in $DOT_FILES
    do
        # Swap the .dot extension for .pdf to form the output filename
        dest=$(echo "$file" | sed 's/\.dot$/.pdf/')
        dot -Tpdf "$file" > "$dest"
    done
else
    echo "Input directory $INPUT_DIR does not exist"
fi

You can eliminate MESON_BUILD_ROOT in your own script and supply your own INPUT_DIR. If you prefer a different file type, simply change the -Tpdf argument and the .pdf portion of the sed command.

An Example Pipeline

Here's an example GStreamer pipeline and a resulting pipeline graph.

I use the pipeline below to test changes to the framerate plugin that I am working on. In order to generate pipeline graphs, I added GST_DEBUG_DUMP_DOT_DIR to the gst-launch-1.0 invocation:

GST_DEBUG_DUMP_DOT_DIR=$MESON_BUILD_ROOT/pipeline gst-launch-1.0 -e --gst-plugin-path=$MESON_BUILD_ROOT \
    videotestsrc is-live=1 \
    ! 'video/x-raw, format=(string)I420, framerate=(fraction)15/1, width=(int)480, height=(int)360' \
    ! framerate passthrough=1 \
    ! 'video/x-raw, framerate=(fraction)30/1' \
    ! x264enc \
    ! 'video/x-h264, stream-format=(string)byte-stream' \
    ! h264parse \
    ! matroskamux \
    ! filesink location=$MESON_BUILD_ROOT/test.mkv

GStreamer produces five pipeline graphs for this pipeline, covering the five state changes:

0.00.00.328088000-gst-launch.NULL_READY.dot
0.00.00.330350000-gst-launch.READY_PAUSED.dot
0.00.02.007860000-gst-launch.PAUSED_PLAYING.dot
0.00.05.095596000-gst-launch.PLAYING_PAUSED.dot
0.00.05.104625000-gst-launch.PAUSED_READY.dot

Here's our rendered PAUSED_READY.dot pipeline: