Real-World Portable Driver Examples

In the previous article in the Practical Architecture series, we discussed the use of “virtual devices” in our embedded software, an idea that dates back to at least 1981. Instead of dealing with the device drivers and processor SDKs directly, we can create abstract interfaces that describe the functionality provided to the application by the underlying hardware. We can then separate our code into “hardware-dependent” modules that implement these abstract interfaces and “hardware-independent” modules that interact with the hardware through the abstract interfaces.
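
As a quick refresher, the split looks roughly like the sketch below. The names (TemperatureSensor, tmp102Sensor) are simplified, hypothetical stand-ins rather than types from the framework discussed in this article.

#include <cstdint>

// Hardware-independent code sees only this abstract interface.
class TemperatureSensor
{
  public:
	virtual ~TemperatureSensor() = default;
	virtual int16_t readTenthsCelsius() = 0;
};

// Hardware-dependent module: implements the interface on top of a specific part and SDK.
class tmp102Sensor final : public TemperatureSensor
{
  public:
	int16_t readTenthsCelsius() override
	{
		// A real implementation would issue I2C transactions and decode registers
		return 250; // placeholder value for this sketch
	}
};

// Hardware-independent module: works with any TemperatureSensor implementation.
int16_t currentTemperature(TemperatureSensor& sensor)
{
	return sensor.readTenthsCelsius();
}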

In this article, we’ll take a look at real-world examples of these ideas.

Demonstrating Portability

As part of the initial Embedded Virtual Machine release, we built a demo application that runs on three separate systems:

  • OS X (using an Aardvark I2C Adapter)
  • nRF52840 Development Kit
  • STM32L4R5 Nucleo Development Kit

Each of the supported platforms uses the same two I2C peripherals and their associated drivers:

  • A VL53L1X time-of-flight sensor (SparkFun breakout)
  • An SSD1306 OLED display (SparkFun breakout)

Our Portable Application

The demo application contains a single source file: main.cpp.

The relevant part of the example application’s main() function is the while loop that invokes read() on the I2C sensor.

// Get the abstract interface for the ToF sensor; returns an optional<>
auto tof = platform.findDriver<embvm::tof::sensor>();
// Abort the program if the device is not found
assert(tof);

while(!abort_program_)
{
	// Trigger a sensor read
	// Note that no action is taken here - specific responses to this
	// read are handled via callbacks
	tof.value().read();
	std::this_thread::sleep_for(DEFAULT_TOF_READ_DELAY);
}

The first line shown above uses our “Driver Registry” concept to access the abstract interface for the time-of-flight sensor at the application level. The application itself is not able to access the specific instance of the driver populated on each platform. It is restricted to using abstract interfaces. We will explore the Driver Registry further in an upcoming article.

If the driver has been found, we invoke its read() interface. We prefer asynchronous implementations for our embedded systems, so you will see that read() does not return a value. Instead, values are reported to interested users through callbacks, which are registered through another abstract interface (registerReadCallback).
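
Conceptually, the abstract time-of-flight interface looks something like the sketch below. This is a simplified illustration, not the actual embvm::tof::sensor definition, which contains additional types and members.

#include <cstdint>
#include <functional>

class TofSensorInterface
{
  public:
	using cb_t = std::function<void(uint16_t range_mm)>;

	virtual ~TofSensorInterface() = default;

	// Kick off an asynchronous measurement; no value is returned here
	virtual void read() = 0;

	// Results are delivered to registered callbacks when a read completes
	virtual void registerReadCallback(const cb_t& cb) = 0;
};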

Platform Makeup

This application is designed to run on multiple platforms. In this case, the platforms are defined separately from the application. We’ll discuss the framework layering in a future article; for now, we are only interested in the driver mechanisms, particularly hardware initialization and callback registration, which happen in the platform layer for this application.

Why not register callbacks in the application? For this demo, registering the callbacks in the platform layer gives us consistent application behavior (“read from a time-of-flight sensor until we are told to abort”) while mapping that output to different actions on different platforms as necessary (e.g., print to a screen, print to a serial console, or both).

For example, the Nucleo platform will initialize the hardware platform (configuring the hardware as that board requires), then it will connect the OLED display and the time-of-flight sensor together by registering a callback for the read() function that updates the display with the observed distance. The other platforms operate similarly in this regard.

void NucleoL4RZI_DemoPlatform::initHWPlatform_() noexcept
{
	hw_platform_.init();

	auto& tof0 = hw_platform_.tof0_inst();
	auto& screen0 = hw_platform_.screen0_inst();

	tof0.registerReadCallback([&](uint16_t v) {
		snprintf(tof_string_, 32, "ToF Range: %u mm\n", v);
		screen0.clear();
		screen0.printString(0, 0, tof_string_);
		screen0.printString(0, 32, tof_mode_string_);
		screen0.display();
	});
}
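
For comparison, a simulator-style platform might map the same hook to the console instead of a screen. The sketch below is illustrative only; SimulatorDemoPlatform is a hypothetical class name, not the actual simulator platform implementation.

void SimulatorDemoPlatform::initHWPlatform_() noexcept
{
	hw_platform_.init();

	auto& tof0 = hw_platform_.tof0_inst();

	// Same application behavior, different output action for this platform
	tof0.registerReadCallback([](uint16_t v) {
		printf("ToF Range: %u mm\n", static_cast<unsigned int>(v));
	});
}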

The platform layer implementation for each target platform looks quite similar, because each platform is designed to support the end application. The major differences occur in the hardware platform layer, since each platform uses a different processor and different I2C master driver.

For example, our Nucleo hardware platform uses an I2C driver with DMA support, so it must be initialized with DMA channel configurations. Then the display and time-of-flight drivers are initialized with the I2C master peripheral.

STM32DMA dma_ch_i2c_tx{STM32DMA::device::dma1, STM32DMA::channel::CH1};
STM32DMA dma_ch_i2c_rx{STM32DMA::device::dma1, STM32DMA::channel::CH2};
STM32I2CMaster i2c2{STM32I2CMaster::device::i2c2, dma_ch_i2c_tx, dma_ch_i2c_rx};

embdrv::ssd1306 screen0{i2c2, SPARKFUN_SSD1306_ADDR};
embdrv::vl53l1x tof0{i2c2, SPARKFUN_VL53L1X_ADDR};

The nRF52 hardware platform doesn’t use a DMA-capable driver. It does, however, use an active object wrapper for the I2C driver, which places the device on its own thread that pulls from an event queue. The OLED display and time-of-flight sensor drivers are mapped to the active object wrapper, which satisfies the same abstract I2C interface.

i2c0_t i2c0_private_;
embvm::i2c::activeMaster<128> i2c0{i2c0_private_};

embdrv::vl53l1x tof0{i2c0, SPARKFUN_VL53L1X_ADDR};
embdrv::ssd1306 screen0{i2c0, SPARKFUN_SSD1306_ADDR};

The simulator hardware platform uses a different setup altogether: the Aardvark I2C/SPI USB debug adapter.

embdrv::aardvarkAdapter aardvark{embdrv::aardvarkMode::GpioI2C};
embdrv::aardvarkI2CMaster i2c0{aardvark};

embdrv::vl53l1x tof0{i2c0, SPARKFUN_VL53L1X_ADDR};
embdrv::ssd1306 screen0{i2c0, SPARKFUN_SSD1306_ADDR};

Note that the same drivers are used for all three platforms. All that changes is the specific I2C peripheral instance that is passed to the device during construction. Even with drastically different I2C driver implementations, the time-of-flight sensor driver still behaves as expected. Since our display and sensor classes are designed to use a reference to the appropriate base class (which defines the abstract interface), the drivers don’t need to change to work with different I2C peripherals.
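
To make that concrete, here is a simplified sketch of such a driver’s constructor. This is not the actual embdrv::vl53l1x declaration, and the abstract base class name embvm::i2c::master is our assumption here, inferred from the embvm::i2c::activeMaster wrapper shown above.

#include <cstdint>

// Sketch only: the driver stores a reference to the abstract I2C master base
// class and never names a concrete peripheral implementation.
class vl53l1x_sketch
{
  public:
	vl53l1x_sketch(embvm::i2c::master& bus, uint8_t address) noexcept
		: bus_(bus), address_(address)
	{
	}

  private:
	embvm::i2c::master& bus_; // STM32, nRF52, or Aardvark master all satisfy this
	uint8_t address_;
};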

Demonstrating Portability in Practice

Of course, showing a bunch of code is pointless if we can’t see it doing the right thing in practice!

Here is the demo program running in each environment:

  • On my personal computer, using an Aardvark adapter to talk to the peripheral components
  • On an STM32L4 Nucleo board
  • On an nRF52 DK board

Another Practical Use for Virtual Devices

Even if you’re not concerned about insulating your software against (inevitable, in our opinion) changes in the underlying hardware, we promote the use of virtual devices and abstract interfaces for another practical reason: you can develop and test new device drivers on your personal computer, then move them over to the target hardware once they are complete. In most cases, we require no changes to a driver once we’ve ported it over.

We prefer this approach to developing new peripheral drivers because working on our personal computers allows us to move much faster and provides us with more powerful debugging and testing tools. These tools help us identify problems more quickly, and we can build test suites for our peripheral drivers that actually talk to the peripheral hardware. In fact, that’s exactly how we initially tested the time-of-flight and OLED display drivers used in the demo application.

If you’re interested in this approach, you will need to take the following generalized steps:

  1. Define an abstract interface for a given communication bus (e.g., I2C, SPI, CAN)
  2. Select a USB debug adapter that supports your target protocol and provides an API to work with the device
  3. Write a driver for your debug adapter that implements the abstract interface
  4. Write new peripheral drivers so that they use the abstract interface:
    1. Create a constructor or initialization function that takes in the abstract interface for the communication protocol
    2. Construct a simple test program that initializes your new driver with the debug adapter instance (see the sketch after this list)
    3. Iterate through test and development until the driver is completed
  5. Port your new driver to the target hardware and verify everything functions correctly
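
A minimal host-side bring-up program for step 4 might look like the sketch below, reusing the Aardvark adapter and time-of-flight driver names from the demo above. Framework-specific start-up details (such as starting each driver) are omitted here.

// Construct the driver under development against the Aardvark adapter's I2C master
embdrv::aardvarkAdapter aardvark{embdrv::aardvarkMode::GpioI2C};
embdrv::aardvarkI2CMaster i2c0{aardvark};
embdrv::vl53l1x tof0{i2c0, SPARKFUN_VL53L1X_ADDR};

int main()
{
	// Print each reported range so we can verify behavior on the bench
	tof0.registerReadCallback([](uint16_t v) {
		printf("ToF Range: %u mm\n", static_cast<unsigned int>(v));
	});

	tof0.read();

	// Iterate: exercise the driver, observe the results, fix issues, repeat
	return 0;
}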

With this approach, you can bring up and evaluate new components using development kits and modules. You can also prototype other system code on your personal computer by connecting different peripherals on a breadboard to represent a subsystem. There is no reason to wait for your company’s custom PCB design to arrive in house before you can start working on embedded software!

Up Next: Improving Upon this Idea

What we’ve shown above works great: we have portable device drivers that can run on any system that supplies the necessary interface implementations. However, our drivers are still tightly coupled to the underlying ecosystem, which in this case is the Embedded VM. For software developed by your company, this isn’t a problem: your company will continue to refine and reuse the abstract interfaces you’ve defined.

For building drivers that can be reused on a wide scale, however, this is still a limited approach: each driver depends on a specific interface definition in order to function. If we want our sensor driver to be usable across different I2C interfaces, we need to take a different approach.

In the next article, we will take a different approach to associating device drivers. Our approach will be similar to that taken by the C-based radio driver that we reviewed in the past.

Following that article, we will take another approach for accessing virtual devices from within hardware-independent modules.

Further Reading

Designing Embedded Software for Change

Are you tired of every hardware or requirements change turning into a large rewrite? Our course teaches you how to design your software to support change. This course explores design principles, strategies, design patterns, and real-world software projects that use the techniques.

