April 2018: C & C++ Libraries We Like

Welcome to the April 2018 edition of the Embedded Artistry Newsletter! This is a monthly newsletter of curated and original content to help you build better embedded systems. This newsletter is intended to supplement the website and covers topics not mentioned there.

This month we'll cover:

  • Seven exceptional C and C++ projects to consider for your next product
  • Interesting links from around the web
  • Embedded Artistry website updates and popular posts

Open Source Projects to Consider For Your Next Product

I’ve selected seven exceptional C & C++ projects that I want to share with you. Integrating high-quality libraries into our code base enables us to take new approaches, use safer techniques, or simply reduce the amount of code we need to write. I hate reinventing the wheel unless there’s a compelling reason, so I like to spend time researching libraries before diving into development. I hope you find something here that benefits your next product.

C++ Projects

I primarily focus on developing new embedded systems in C++. The language provides features that allow me to write safer code and to catch errors at compile time rather than at runtime. Of the seven projects we’ll review today, five are C++ libraries:

  • NamedType
  • foonathan/type_safe
  • foonathan/memory
  • POCO C++ Libraries
  • SaferC++

NamedType

NamedType is a library written by Jonathan Boccara, a C++ developer and author of the Fluent C++ blog. The NamedType library provides a simple interface for using strong types, which are not natively supported in C++.

We often utilize native types, such as int or double, to represent the values in our software. The general nature of these types means that we can easily make mistakes in our code. Consider a simple rectangle constructor:

Rectangle(double width, double height);

If you swapped the width and height arguments in your code, the compiler would never be able to tell you:

double width = 10.5;
double height = 3.0;
Rectangle r(height, width); // wrong but compiles!

By using strong types in our interfaces, we can make our APIs more explicit and rely on the compiler to catch any mistakes:

// Create our strong types
using Width = NamedType<double, struct WidthTag>;
using Height = NamedType<double, struct HeightTag>;
//…
Rectangle(Width width, Height height); // new constructor
//…
Rectangle r(Height(3.0), Width(10.5)); // compiler error - type mismatch!
Rectangle r2(Width(10.5), Height(3.0)); // ok

The great news is that strong types are zero-cost when compiling with -O2 (or -O1 with GCC). It's time to make your APIs more explicit and catch errors during compilation.

For more on NamedType and strong types in C++:

type_safe

The type_safe library is developed by Jonathan Müller, a C++ library developer and author of foonathan::blog(). The type_safe library provides zero-overhead utilities to help catch bugs at compile time.

Features provided by the type_safe library include improved built-in types (ts::integer, ts::floating_point, and ts::boolean) which prevent dangerous implicit operations like signed-to-unsigned promotion. The type_safe library also provides "vocabulary" types to help us write more expressive code, such as object_ref (a non-null pointer), array_ref (a reference to an array), and function_ref (a reference to a function). Other interesting concepts are also provided, such as deferred_construction, a wrapper which allows you to create an object without immediately constructing it.

Similar to NamedType, this library also supports strong types. Where NamedType provides a simple interface for creating strong types, the type_safe library requires explicit declaration of the attributes our strong type supports. The increased overhead for setting up new types is worth the safety provided by explicitly deciding what operations should be allowed. In the example below, our strong type only has addition and subtraction operations enabled:

struct meter
: strong_typedef<meter, int>, addition<meter>, subtraction<meter>
{
    using strong_typedef::strong_typedef;
};
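
With these mixins in place, only the operations we opted into will compile. Here's a minimal usage sketch, assuming the declaration above and the library's namespaces already brought into scope:

meter a(3);
meter b(4);
meter c = a + b;    // ok - addition was enabled
meter d = a - b;    // ok - subtraction was enabled
// meter e = a * b; // compiler error - multiplication was never enabled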

Use the type_safe library to write expressive code and increase the number of errors caught during compilation.

For more on type_safe:

memory

The memory library is also developed by Jonathan Müller. This library provides a new STL-compatible C++ memory allocator called RawAllocator. The RawAllocator is similar to the standard Allocator but is easier to use. The library also provides a BlockAllocator type which can be used for allocating large blocks of memory.

The project includes a variety of implementations, adapters, wrappers, and storage classes, including:

  • new allocator
  • heap allocator
  • malloc allocator
  • memory pools
  • static allocator
  • virtual memory allocator
  • make_unique and make_shared replacements which allocate memory using a RawAllocator

We are excited about using this library in our next embedded project and gaining increased control over memory allocations.
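
To give a sense of the API, here's a rough sketch adapted from the project's documentation. The header paths and helper names (memory_pool, list_node_size, memory::list) reflect my reading of the README, so verify them against the release you use:

#include <foonathan/memory/container.hpp>   // memory::list - STL containers bound to a RawAllocator
#include <foonathan/memory/memory_pool.hpp> // memory::memory_pool

using namespace foonathan;

int main()
{
    // A fixed-size node pool sized for the list's nodes, backed by 4 KiB memory blocks
    memory::memory_pool<> pool(memory::list_node_size<int>::value, 4096);

    // A std::list whose nodes come from the pool instead of the global heap
    memory::list<int, decltype(pool)> list(pool);
    list.push_back(42);
    list.push_back(17);

    return 0;
}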

For more on the memory library:

POCO C++ Libraries

The POrtable COmponents (POCO) C++ Libraries are a collection of C++ class libraries whose goal is to simplify the development of network-centric applications. POCO is similar in concept to the Java Class Library, the .NET Framework, or Cocoa. The POCO libraries are highly portable, allowing you to easily compile and run your application on multiple platforms.

The features provided by the POCO libraries are too numerous to list in full here. A small sampling:

  • Caching framework
  • Cryptographic & hashing libraries
  • Logging framework
  • HTTP server & client
  • SSL/TLS support through OpenSSL
  • POP3 & SMTP clients
  • SQL database access
  • Multithreading (basic threads, synchronization, thread pools, active objects, work queues)
  • Stream classes for Base64 and binary encoding/decoding, compression (zlib), line ending conversion
  • XML parsing & generation
  • Zip file manipulation

While the POCO libraries provide portability and a plethora of features, my favorite aspect is the emphasis on code quality, style, consistency, and readability.

I highly recommend reviewing the POCO libraries before beginning your next project - you can find a gem (or twenty) that will save you development time.
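
As a taste of the framework's style, here's a minimal logging sketch. The class and method names follow the POCO Foundation documentation, but treat the exact setup as a starting point rather than a definitive recipe:

#include "Poco/AutoPtr.h"
#include "Poco/ConsoleChannel.h"
#include "Poco/Logger.h"

int main()
{
    // Route the root logger's output to the console
    Poco::AutoPtr<Poco::ConsoleChannel> console(new Poco::ConsoleChannel);
    Poco::Logger::root().setChannel(console);

    // Named loggers form a hierarchy and inherit configuration from their parents
    Poco::Logger& logger = Poco::Logger::get("Example.Startup");
    logger.information("System initialized");
    logger.warning("Battery level low");

    return 0;
}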

For more on POCO:

SaferC++

The SaferC++ library provides safer implementations of many native C++ types. Its features include:

  • Data types that are designed for multi-threaded use and asynchronous access
  • Drop-in replacements for std::vector, std::array, std::string, std::string_view that provide improved memory safety
  • Drop-in replacements for int, size_t, and bool that protect against use of uninitialized values and sign-unsigned comparison issues (similar to type_safe)
  • Improved pointer & reference types with different compatibility and performance tradeoffs

SaferC++ is usable with embedded systems as long as your platform has a functional STL implementation. Exception behavior can be controlled for your platform by modifying the MSE_CUSTOM_THROW_DEFINITION macro.

Using the library does incur a performance penalty. However, SaferC++ elements can be disabled at compile time (i.e. replaced with their standard type equivalents). This allows debug and test builds to use the safer-but-slower features without adding overhead to release builds.

Since the SaferC++ types provide added safety and can be disabled when performance matters, I highly recommend using their drop-in types to catch and eliminate possible errors when using STL types. The easiest way to get started with SaferC++ is to utilize the mse::vector and mse::array types in place of std::vector and std::array. These types will help you catch potential memory issues lurking in your software. The README provides further tips for making your code safer.
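
Here is a minimal sketch of the drop-in approach. The header and namespace names below (msemstdvector.h, mse::mstd::vector) are my assumptions from a reading of the project README, so double-check them against the version you pull in:

#include "msemstdvector.h" // assumed header providing the std::vector replacement
#include <iostream>

int main()
{
    mse::mstd::vector<int> values{1, 2, 3};

    try {
        // Unlike std::vector, element access is bounds-checked, so an out-of-range index
        // raises an exception (per MSE_CUSTOM_THROW_DEFINITION) instead of silently
        // corrupting memory
        int v = values[10];
        std::cout << v << '\n';
    } catch (...) {
        std::cout << "invalid access caught\n";
    }

    return 0;
}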

For more on SaferC++:

C Projects

While we’ve been heavily focused on C++-based embedded systems development for the past few years, I did find two exciting C projects that I want to share with you:

  1. CException, a lightweight exception library
  2. Checked C, an extension to the C language which provides better protections against common memory errors

CException

CException is a project released by ThrowTheSwitch. CException is designed to provide simple exception handling in C using the familiar try/catch/throw syntax. I've recently been thinking about the downsides of error logging, so I am excited to see a lightweight exception library for C.

The exception implementation is kept simple by only allowing you to throw a single error code: there is no support for throwing objects, structs, or strings. The library can be configured for either single-tasking or multi-tasking use, which makes it a good fit for embedded systems running an RTOS. CException is implemented in ANSI C and is highly portable. As long as your system supports the standard library calls setjmp and longjmp, you can use CException in your project. If you're looking for an exception library to use on embedded systems, CException is for you.
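
Here's a short sketch of what that looks like in practice, based on the Try/Catch/Throw macros and CEXCEPTION_T type documented in the project's README; the error code and handler below are hypothetical:

#include "CException.h"

#define ERR_SENSOR_TIMEOUT 0x10 /* hypothetical error code for this example */

void handle_error(CEXCEPTION_T code); /* hypothetical error handler */

void read_sensor(void)
{
    /* ... on failure: */
    Throw(ERR_SENSOR_TIMEOUT);
}

void poll_sensor(void)
{
    CEXCEPTION_T e;

    Try
    {
        read_sensor();
    }
    Catch(e)
    {
        /* e holds the single error code that was thrown */
        handle_error(e);
    }
}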

For more on CException:

Checked C

Checked C is a research project from Microsoft which adds static and dynamic (runtime) checking for errors such as buffer overruns, out-of-bounds memory accesses, and incorrect type casts.

The project is implemented as an extension to the C language. New pointer and array types are provided with the goal of allowing programmers to better describe intended pointer use and the range of memory that is pointed to. Developers can select between types with and without bounds checking, as well as between types that can or cannot be used in pointer arithmetic.

Since Checked C is an extension to the C language, you will need a compiler that supports it. Microsoft provides a port of clang and LLVM that support the extension.

Checked C can help you identify and eliminate the common memory errors which plague us as C & C++ developers. Even better, existing C programs compiled with a Checked C compiler will continue to work: raw pointers (e.g. int *) remain unchecked and pointer arithmetic is still allowed.
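
To give a flavor of the extension, here is a small sketch using the checked pointer types described in the project's specification; treat the exact syntax as approximate and defer to the Checked C documentation:

// _Ptr<T>: a pointer to a single T; pointer arithmetic is not allowed
// _Array_ptr<T>: a pointer into an array with a declared bounds expression

int sum(_Array_ptr<int> values : count(len), int len)
{
    int total = 0;
    for (int i = 0; i < len; i++)
    {
        // Accesses are bounds-checked at runtime when the compiler can't prove them safe statically
        total += values[i];
    }
    return total;
}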

For more on Checked C:

Around the Web

In the September 2017 newsletter I shared a series of posts detailing the engineering behind the Voyager missions. If you liked those articles, Voyager Mission Telecommunication Firsts presents another take on the outstanding engineering achievements of the Voyager missions.

If you enjoyed last month's focus on development processes, I recommend reading Making Valgrind Easy. By integrating Valgrind into your analysis workflow, you can find and fix memory issues that your compiler or static analyzer won't catch.

I came across this article written by Michael Barr in 2009, titled Firmware Architecture in Five Easy Steps. Read this article before starting your next embedded project.

Jonathan Müller of foonathan posted Guidelines for Rvalue References in APIs. This article is recommended for advanced C++ developers and library authors.

Website Updates

Our "About" information has been condensed into a single page. We've doubled down on keeping the primary website focused on embedded systems content. We've created a new website dedicated to our consulting business.

A classic (and free!) introductory embedded systems book, Programming Embedded Systems, was added to the Beginners page. While the examples are slightly dated, the concepts are valid and can be applied to modern embedded systems.

The Glossary saw new additions, including LVDS, PLM, and MIPI standards.

The old "Open Source Software" page has been merged into the Libraries page.

New Articles

The Dark Side of Error Logging was published as a guest post on Arne Mertz's Simplify C++ blog.

These posts were published on our website in March:

  1. Improving Our Software With 5 Lightweight Processes You Can Adopt This Month
  2. Getting Started with the Snapdragon Flight: Driver Development
  3. Seeing Intermittent GitHub Clone Failures on Jenkins? Check Your Repo Size
  4. Safely Storing Secrets in git

These were the most popular articles in March:

  1. Circular Buffers in C/C++
  2. C++ Casting, or: Oh No, They Broke Malloc!
  3. Installing LLVM/Clang on OSX
  4. std::string vs C-strings
  5. Jenkins: Configuring a Linux Slave Node
  6. An Overview of C++ STL Containers
  7. Implementing Malloc: First-fit Free List
  8. Demystifying ARM Floating Point Compiler Options
  9. Jenkins: Running Steps as sudo
  10. Creating and Enforcing a Code Formatting Standard with clang-format

Thanks for Reading!

Have any feedback, questions, suggestions, interesting articles, or resources to recommend to other developers? Simply reply to this email!

While you wait on the next edition, check out the website or follow us on Twitter.

Happy hacking!

-Phillip

March 2018: Lightweight Processes to Improve Quality

Welcome to the March 2018 edition of the Embedded Artistry Newsletter! This is a monthly newsletter of curated and original content to help you build better embedded systems. This newsletter is intended to supplement the website and covers topics not mentioned there.

This month we'll cover:

  • 5 methods for improving software quality that you can implement this month
  • The keys to success when adopting new processes
  • Interesting links from around the web
  • Embedded Artistry website updates and popular posts

Improving Our Software With 5 Lightweight Processes You Can Adopt This Month

It's that time of year when the Barr Group releases their yearly Embedded Systems Safety & Security Survey results. Last year's results were eye-opening: nearly 50% of respondents reported not using static analysis, and 36% reported that they do not perform code reviews. The 2018 results were no better:

  • 38% of safety-critical products don't comply with a formal safety standard
  • 43% of teams working on safety-critical products don't perform code reviews
  • 41% of teams working on safety-critical devices don't perform regression testing (54% for IoT product teams)
  • 33% of teams working on safety-critical products don't perform static analysis (49% for IoT product teams)

These numbers are even more alarming given the fact that 25% of the reported "internet-connected devices" could kill or injure people if hacked. 22% of respondents mentioned that security for connected devices wasn't even on their to-do list. We're becoming increasingly connected, but our standards for safety, testing, and verification are not keeping pace. I want to be clear: this is not acceptable.

Many teams skip crucial development processes and justify the omission with scheduling pressure or by blaming the boss. There will always be scheduling pressure, so we must adjust our approach or we will continue to flounder. Bugs are expensive in time, money, and morale; debugging can consume half of a typical project schedule. We all make mistakes. Anything we can do to keep bugs out of our code, or to catch them as early as possible, will save money and time.

I've selected five simple processes to improve quality that your team can adopt over the next month. These processes are cheap or free, apply across languages and platforms, and best of all - they work. While each process requires a bit of time to get up and running, there is little-to-no maintenance involved in continuing to use them.

Here are five lightweight processes for improving code quality and identifying problems early:

  1. Fix all of your warnings
  2. Set up a static analysis tool for your project
  3. Measure and tackle complexity in your software
  4. Create automated code formatting rules
  5. Have your code reviewed

Fix All of Your Warnings

The first thing I do when working on a new project is fix all of the compiler warnings. It's amazing to me how developers will ignore warnings or rationalize their presence. Occasionally you even run into a team who will fight tooth and nail to prevent you from fixing them!

The compiler knows the programming language better than you ever will. You should not ignore it when it alerts you to an issue: the way you are using the language is dangerous and likely has unintended side effects. Depending on the warning, you might be introducing undefined behavior into your software. That's not our idea of quality software.

If you have warnings in your code base, fixing them is one of the fastest ways to improve quality. You will fix bugs and flaws in your program, regardless of whether or not they are currently problematic.
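
If you're not sure where to start, turn up the warning level in your build and make warnings fatal. A minimal sketch for a GCC or Clang makefile (the flags are standard, but tune the set to your toolchain and codebase):

# Enable a broad set of warnings and treat every warning as an error
CFLAGS   += -Wall -Wextra -Werror
CXXFLAGS += -Wall -Wextra -Werror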

For more on compiler warnings:

Set Up Static Analysis Support

Static analysis tools provide us with even better feedback than the compiler. Your compiler will happily accept some problematic constructs which are legal in the language, such as out-of-bounds pointer accesses or missing initialization values. Your analyzer will catch these problems and also report red flags such as unused or redundant code. When used throughout the development cycle, your static analysis tool can help you catch and prevent latent problems before they even reach your testing cycle.

Some governmental and industrial organizations are starting to require static analysis data for certification processes. Companies such as PRQA provide tools that can check for compliance with safety critical standards in a variety of industries.

There are many free static analysis tools available and their commercial counterparts are also inexpensive (most are less than $1000). At Embedded Artistry, we use Clang's static analyzer alongside clang-tidy.
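
If you already build with Clang, you can try both of those tools without touching your build system. Here's a hedged sketch of the typical invocations (the file path and include flag below are placeholders for your own project):

# Run the Clang static analyzer across a normal build
scan-build make

# Run clang-tidy on one file, passing the usual compile flags after --
clang-tidy src/module.cpp -- -std=c++14 -Iinclude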

Here are some resources you can use to find a static analysis tool that fits your needs:

Measure and Tackle Complexity

By the time you've eliminated warnings on your project and cleaned up glaring problems exposed by your static analysis tool, you've already made significant progress with software quality. The next goal is to measure complexity in your software. Because highly complex functions tend to be hard to understand, test, and maintain, these functions are prime candidates for refactoring and simplification.

By using a metric to measure complexity, we have a quantitative way to evaluate our code and identify pieces that need special attention. We can see how our changes impact the code base over time and trigger automatic alerts and reviews whenever a threshold is exceeded. We can focus our code reviews on functions with high complexity scores, making sure they receive the bulk of our limited attention. Metrics aren't perfect, but they increase our insight into our software quality.

These are the simplest and most popular metrics for measuring code complexity:

  • Lines of code (LOC): a count of the non-blank, non-comment source lines in a function, module, or project
  • McCabe cyclomatic complexity (MCC): provides a complexity score based on the number of branches (e.g. conditional statements)
  • Strict cyclomatic complexity (SCC, CC2): expands MCC by considering the number of conditions within each branch, which provides an approximation for the number of test cases needed for full coverage

These free tools will calculate complexity metrics for C/C++. We currently use Lizard at Embedded Artistry.
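
For example, Lizard can be pointed at a source tree and told to warn when a function's cyclomatic complexity crosses a threshold. The option name below is from my reading of the tool's help output, so confirm it with lizard --help:

# Analyze the src/ tree and flag functions with a cyclomatic complexity above 10
lizard -C 10 src/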

For more on software complexity:

Create Auto-formatting Rules

Automated code formatting might seem like a strange recommendation to put into the top five, but it serves three purposes:

  1. Automated formatting reduces a programmer's cognitive load by eliminating an entire category of details and decisions they need to keep in mind
  2. Automated formatting improves the quality of our peer code reviews (the next recommendation) by eliminating arguments about style
  3. Automated formatting is the first step toward implementing and enforcing a coding standard

Every team that I've encountered with a written style guide inevitably ignores those guidelines, and multiple programming styles run rampant. Instead of relying on developers to constantly keep an arbitrary set of rules in mind, we can automate the process to make it simple and impersonal. At the very least, it's worth eliminating the pointless, time-wasting arguments that cause friction within our teams.

We use clang-format on Embedded Artistry projects. Uncrustify and Astyle are other popular code formatting tools.
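
Getting started is as simple as dropping a .clang-format file at the root of your repository and running the tool over your sources. A small example configuration (the option names come from the clang-format documentation; the values are just a starting point):

# .clang-format
BasedOnStyle: LLVM
IndentWidth: 4
UseTab: Never
ColumnLimit: 100
BreakBeforeBraces: Allman

# Then format files in place, e.g.:
#   clang-format -i src/*.cpp include/*.h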

For more on automated code formatting:

Have Your Code Reviewed

Writers accept the fact that first drafts are generally garbage and need heavy editing. Before I send this newsletter out into the world, it has usually gone through 2-3 self-editing sessions and 1-2 peer reviews. Along the way, the newsletter is trimmed and restructured. The result is a much better product than the initial draft.

Yet, for some reason, programmers seem to think that perfect code is produced on the first try. The 2018 Barr Group survey results showed that 54% of IoT product teams don't perform regular code reviews. The survey results also show that a painfully scary 43% of teams working on safety-critical software don't perform regular reviews.

Perfect code on the first try might be possible if you're a prodigious programmer. But remember: even Hemingway had an editor. A second set of eyes can identify flaws that you missed in your first pass. Another developer may have different experiences that provide insight into the merits or risks of your approach. The architect on your team probably has input on how a module should interface with other pieces of the program. The "ego effect" also comes into play: knowing that our code will be presented and reviewed by another human can dramatically improve the overall quality. We will spend time cleaning up and checking the logic before putting code up for judgment.

Code reviews can waste time and become unproductive if poorly implemented. Best Practices for Peer Code Review provides some excellent tips for getting started. Notably, a lightweight review process is more efficient and practical than long, in-depth reviews with multiple developers. Even performing reviews on only 20-33% of the submissions provides benefits due to the "ego effect". While 20% may seem low, remember that we are aiming for achievable: reviewing 20% of the source code is definitely better than none.

I highly recommend implementing peer code reviews after setting up automated code-formatting. This helps constrain code review discussions by preventing them from devolving into style nit-picking. If you've set up static analysis, make sure the tools are used prior to code reviews.

For more information on code reviews:

The Keys to Success When Adopting New Processes

When adopting new processes, it's important to focus on one at a time. Adopting new processes in stages ensures that you have time to correctly implement each new technique before moving on to the next one. By implementing too many changes at once, you are likely to overwhelm your team and evoke a mutiny.

If you're leading a team, it helps to find someone who is excited and can help you champion the idea. Empower that person so they can demonstrate the benefits of the new process to your team. Back them up when there is pushback. Change is always hard. Expect the resistance, but don't let it stop you.

The key to making new processes stick is to make them as automated as possible. Automating a development process is always worth the time it takes. Automation ensures that the process is easy to follow and always happens, rather than trusting individual contributors to remember it. Automation also makes the process less personal: the rules are clearly defined and enforced by a tool. That depersonalization helps us view the results dispassionately, rather than as an attack on our abilities.

To recap, when you implement new processes:

  1. Adopt one new process at a time
  2. Empower a process champion on your team
  3. Automate!

Around the Web

BMW has issued a recall for 11,700 engines because they flashed the wrong firmware onto them.

In 2007, doctors replacing Dick Cheney's heart defibrillator ordered the manufacturer to disable its wireless capabilities out of concern for a hacker being able to trigger a fatal heart shock. We need to take the security of our connected devices, especially medical ones, seriously.

IT Hare published an extremely interesting analysis of operation costs in CPU clock cycles. Their data provides a great source of ballpark numbers for the relative computational costs of popular operations.

Michael Barr shared the 2018 survey results for programming languages used to develop embedded systems. C remains the dominant player, with C++ a distant second.

Website Updates

We've added two great introductory embedded systems resources by Embedded.fm to the For Beginners page. If you're just starting out with embedded systems, check out Embedded Wednesdays and Embedded Software Engineering 101.

The Glossary has been expanded with additional embedded systems terms.

The Software References page has been updated with a TLA+ reference for those interested in verifying that their algorithms work correctly.

Articles Published in February

These posts were added to the website in February:

  1. Implementing std::mutex with ThreadX
  2. Implementing std::mutex with FreeRTOS
  3. Refactoring the ThreadX Dispatch Queue to Use std::mutex
  4. Code Cleanup: Splitting Up git Commits in the Middle of a Branch
  5. Generating GStreamer Pipeline Graphs

These were the most popular articles in February:

  1. Circular Buffers in C/C++
  2. Installing LLVM/Clang on OSX
  3. C++ Casting, or: Oh No, They Broke Malloc!
  4. std::string vs C-strings
  5. An Overview of C++ STL Containers
  6. Implementing std::mutex With ThreadX
  7. Implementing Malloc: First-fit Free List
  8. Creating and Enforcing a Code Formatting Standard with clang-format
  9. Implementing an Asynchronous Dispatch Queue
  10. A Simple Consulting Services Agreement

Thanks for Reading!

Have any feedback, questions, suggestions, interesting articles, or resources to recommend to other developers? Simply reply to this email!

While you wait on the next edition, check out the website or follow us on Twitter.

Happy hacking!

-Phillip

February 2018: Spectre and Meltdown

Welcome to the February 2018 edition of the Embedded Artistry Newsletter! This is a monthly newsletter of curated and original content to help you build better embedded systems. This newsletter is intended to supplement the website and covers topics not mentioned there.

This month we'll cover:

  • Two vulnerabilities that have rocked the computing world: Spectre and Meltdown
    • An overview of speculative execution
    • The Spectre vulnerability
    • The Meltdown vulnerability
    • What you can do today
    • Reliable sources for more information
  • Interesting links from around the web
  • Articles Published in January
  • Most Popular articles in January

A Tale of Two Vulnerabilities: Spectre and Meltdown

The new year did not get off to a great start for the electronics industry. A design flaw in modern processor architectures was exposed with the announcement of two critical vulnerabilities: Spectre and Meltdown. Both attacks are based on speculative execution, a technique used in modern processor design to increase performance by pre-loading memory and future CPU instructions. Processor design has largely focused on improving performance, and unfortunately the security implications of these improvements went unquestioned. Spectre and Meltdown show that attackers can exploit speculative execution to access arbitrary memory locations.

Collectively, Spectre and Meltdown affect most processors that are on the market today. While Meltdown is a potentially patchable issue affecting most Intel and some ARM processors, Spectre can only be fully resolved by re-thinking our processor designs.

An Overview of Speculative Execution

Modern processors utilize instruction pipelines, breaking incoming instructions into sequential stages to keep the processor continually busy. Each pipeline stage handles its part of the current instruction, passes the result to the next stage, and then handles the next instruction. This means that a processor is effectively working on multiple instructions in parallel.

The problem with instruction pipelines arises when a conditional branching decision is encountered. Consider the following simple statement:

if(x < 10)
    A();
else
    B();

The condition if(x < 10) must be evaluated before the processor knows whether it needs to call function A() or function B(). The pipeline stalls until the correct path is chosen, and valuable computational cycles are wasted.

Speculative execution is an optimization technique that tries to prevent those wasted computational cycles. Instead of stalling, the processor executes beyond the branch point (e.g. by calling A()) before it knows whether the branch will be taken. If the speculation was correct, then we gained the advantage of not wasting any computational cycles. If it was incorrect, the CPU discards the resulting state and continues executing on the correct path.

Note that the "speculated state" is not discarded until the correct execution path is known. This opens up the mis-speculation window, the time in which the CPU has speculatively executed the wrong code while not detecting that a mis-speculation has occurred.

Compounding the mis-speculation window is caching, another performance optimization. When memory is accessed, the caches are updated. If we need to refer to that memory again in the future, we can reduce the access time by fetching the data from the cache instead.

However, once the memory state is rolled back after a mis-speculation is detected, the data for the speculated state is still present in the cache. Clever attackers can take advantage of these speculative execution breadcrumbs to execute code and access memory that is not normally accessible.

There are currently three known variants of speculative execution vulnerabilities:

  1. Bounds check bypass (Spectre)
  2. Branch target injection (Spectre)
  3. Rogue data cache load (Meltdown)

At a high level, the Spectre exploits can be used to trick processors into running instructions they should not have run, granting access to arbitrary memory from another program's memory space. Meltdown applies primarily to Intel processors and can be used to access protected kernel memory in user space.

The Spectre Vulnerability

Spectre's name reflects the vulnerability's root cause (speculative execution) as well as the fact that it will haunt us for years to come. If your processor utilizes speculative execution techniques, you are probably affected. This includes the Intel, AMD, and ARM processors that power our personal computers, smart phones, and cloud servers. There are two primary variants of the Spectre vulnerability: branch target injection and bounds check bypass.

The branch target injection method relies on influencing how a processor's branch predictors operate. By steering the branch predictors, an attacker can control the speculative execution path and ensure that malicious code is speculatively executed.

The bounds check bypass method takes advantage of speculative execution that occurs while the processor is checking if the targeted memory location is in-bounds. The processor will speculatively access out-of-bounds memory before the bounds check resolves. The attacker can read normally inaccessible memory by using a bounds check bypass combined with an intermediary program or module with better memory access privileges.
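
A simplified version of the widely published bounds check bypass gadget illustrates the idea (the array names are illustrative; the multiplier spreads each possible secret byte value across distinct cache lines):

if (x < array1_size)
{
    // If the branch predictor guesses "in bounds", this body runs speculatively even
    // when x is out of bounds. array1[x] then reads attacker-chosen memory, and the
    // dependent load into array2 leaves a cache footprint that reveals the value.
    y = array2[array1[x] * 4096];
}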

The end result is the same: processors inadvertently grant access to arbitrary memory from another program's memory space by running speculative instructions. After the code has been speculatively executed, an attacker can use the caches to reconstruct the target data.

The Meltdown Vulnerability

Meltdown is named for the fact that it "melts" normally enforced security boundaries between user memory and kernel (system) memory. Meltdown affects Intel, Qualcomm, and some ARM cores. Since Intel chips form the basis of many server platforms, most of the cloud service providers are affected by Meltdown.

Meltdown is only possible on processors which allow speculative execution across "privilege boundaries", such as the boundary which separates kernel memory from normal user programs. Normally, direct kernel memory access from user space is expected to fail with a page fault access error. However, certain processors might speculatively access the protected memory and use it for subsequent instructions prior to finishing the permissions check. If the permissions were not correct, a flag would be set and an exception thrown.

However, the kernel memory was still speculatively accessed and available in the cache during the mis-speculation window, allowing kernel memory to be read from user space.

What You Can Do Today

Your system is at risk if someone can run malicious code on your machine. This includes your personal computers, your phone, systems with multiple accounts, servers, cloud platforms, and virtualized environments. If your system is isolated from a network and only has a single user account, you probably don't need to worry about this exploit. If you're unsure whether you need to take action, Wind River's CEO provided a detailed framework for deciding how to respond.

If you are using an Intel chip or a cloud platform, make sure to stay up to date on patches for your system. The KPTI/KAISER patches for Meltdown have been applied to the latest Linux kernels but may need to be migrated to your specific kernel version. Similar patches have also been made for Windows and OS X. ARM has announced that some of its chips are affected by Meltdown, so check and see if your platform is affected.

Spectre is a trickier beast, especially since most modern processors are affected. A complete resolution is only possible through re-architecting the way processors are designed. We will be living with this threat for many years. Spectre-hardening software efforts are in progress, including ARM speculation barrier, MSVC mitigations, and LLVM mitigations.

Solutions for both of these vulnerabilities will be evolving over time, so be sure to stay up-to-date if you are affected.

Reliable References

The announcement has been surrounded with quite a bit of drama: CERT's initial recommended solution was to "replace CPU hardware", Linus Torvalds called Intel's patches "insane" and "complete garbage", and Intel admitted that its patches for Meltdown and Spectre are flawed.

Given all this drama, here are reliable sources you can refer to for more information:


Around the Web

For some non-Spectre-and-Meltdown news, check out the following articles.

Nordic Semiconductor has announced a new nRF91 chipset targeted at low-power IoT systems. The nRF91 System-in-Package (SiP) integrates an LTE cellular modem and transceiver, an ARM Cortex-M33 processor, flash memory, RAM, and power management into a single package.

The Vancouver startup Riot Micro announced the RM1000, a low-power cellular baseband chip targeted for IoT applications.

Michael Barr proposes dropping the term "bug" in "Is it a Bug or an Error?". Jack Ganssle shared similar thoughts in "I've Never Had a Bug in My Code".

A new C++ programming language standard is now available.

Arne Mertz of Simplify C++ published a series on the topic of code reviews. The humble code review is one of the most powerful tools that teams can utilize to improve software quality and reduce errors. How can your team improve its review process?

  1. Code Reviews - Why?
  2. Code Reviews - What?
  3. Code Reviews - Preparation
  4. Code Reviews - How?
  5. Code Reviews - The Human Aspect

Articles Published in January

These posts were added to the website in January:

  1. Implementing an Asynchronous Dispatch Queue With FreeRTOS
  2. Getting Started with Snapdragon Flight: Dev Environment Setup & Useful Resources
  3. Implementing an Asynchronous Dispatch Queue with ThreadX
  4. Implementing malloc with FreeRTOS
  5. Jenkins: Configuring a Linux Slave Node
  6. Installing ROS on an NVIDIA Tegra TX2

Popular Articles

These were the most popular articles in January:

  1. Circular Buffers in C/C++
  2. Installing LLVM/Clang on OSX
  3. C++ Casting, or: Oh No, They Broke Malloc!
  4. std::string vs C-strings
  5. An Overview of C++ STL Containers
  6. Implementing Malloc: First-fit Free List
  7. Creating and Enforcing a Code Formatting Standard with clang-format
  8. A Simple Consulting Services Agreement
  9. A GitHub Pull Request Template for Your Projects
  10. Implementing an Asynchronous Dispatch Queue

Happy hacking!

-Phillip

January 2018: Component Counterfeiting

Happy New Year! Welcome to the January 2018 edition of the Embedded Artistry Newsletter. This is a monthly newsletter of curated and original content to help you build better embedded systems. This newsletter is intended to supplement the website and covers topics not mentioned there.

This month we'll be taking a look at the problem of electronic component counterfeiting and new anti-counterfeiting developments. We'll cover:

  • The problem of component counterfeiting
    • Detecting counterfeit components
    • New DARPA anti-counterfeiting initiatives
    • Using blockchains to establish a component "chain of trust"
    • Steps you can take to protect against counterfeit components
  • Recommended articles from around the web
  • Embedded Artistry website updates and popular posts

The Problem of Component Counterfeiting

Hardware designers face a variety of challenges today. Critical hardware components, such as NAND flash, DRAM, and OLED displays, are experiencing shortages and long lead times. Companies are increasingly compressing schedules and striving to reduce the cost of producing their products. Driven by these schedule, price, and supply constraints, engineers and manufacturers often acquire components from smaller distributors, electronics markets, scrap electronics dealers, or even eBay. The largest risk of buying from these untrustworthy sources is receiving counterfeit parts.

Counterfeit components are introduced in a variety of ways, such as recycling old components from end-of-life products, recycling scrap electronic material, selling out-of-spec components, selling factory rejects, creating a cloned part, remarking parts with a higher-grade (e.g. commercial-grade parts marked as industrial-grade), or forging documentation.

One of the largest risks with counterfeit components is that they almost work correctly. Take this example of counterfeit electrical safety outlets - a component you probably don't think twice about:

Authorities in Suffolk County, N.Y. seized counterfeit electrical safety outlets—used in bathrooms, kitchens, and garages to guard against electrical shock—bearing phony UL logos. The bogus parts had no ground-fault-interrupt circuitry. Had they been installed anywhere near water, the results could have been fatal.

Many counterfeit components do not have such egregious and easily detected problems as missing circuitry. Instead, electrical characteristics such as slew rate, current supply, timing, or noise might be out of spec. Counterfeit components also tend to be less reliable and exhibit a shorter time-to-failure than their legitimate counterparts. Counterfeits can wreak havoc on consumer electronics; it would be disastrous if they were to sneak into safety-critical devices like fire alarms, medical devices, or automotive electronics.

Detecting Counterfeit Components

Luckily, those who make counterfeit components are often not very good at it. Legitimate component manufacturers have high quality standards for their parts. In many cases, counterfeit components expose themselves with major packaging flaws. Common visual inspection cues are:

  • Incorrect part numbers
  • Incorrect date codes
  • Impossible date codes
  • Date codes that are in the future
  • Incorrect manufacturer country of origin marking
  • Components with the same lot code shown as being manufactured in different countries
  • Pre-soldered pins
  • Pin pitch that is too wide or too narrow
  • Package made with the wrong material
  • Different numbers, shapes, and sizes of IC package indents
  • Laser cut lines in the markings
  • Incorrect font
  • Crooked or misaligned text
  • Incorrect silkscreen on a flexible circuit or PCB
  • Incorrect / incomplete logos
  • Logos that vary from part-to-part
  • Misspellings
  • Ink-based IC markings that can be removed with acetone

Counterfeit components also give themselves away when die shots of suspect parts are compared against known-good parts. Sometimes counterfeiters do a decent job with package markings, which may allow parts to slip through a visual inspection. Consider this example of a counterfeit Nordic nRF24L01+ transceiver: when the dies of the suspect and legitimate parts are compared, you can clearly see that they are different. Unfortunately, capturing and comparing die shots safely requires the help of a lab.

For more information on spotting counterfeit components (including examples):

DARPA Anti-Counterfeiting Initiatives

The United States Department of Defense (DoD) has taken significant interest in counterfeit components. Most of the components used in military systems are produced outside of the US, where the DoD cannot regulate or influence off-shore IC fabrication. DARPA is sponsoring two anti-counterfeiting programs in order to combat the risk of out-of-spec, unreliable, and counterfeit parts: Integrity and Reliability of Integrated Circuits (IRIS), and Supply Chain Hardware Integrity for Electronics Defense (SHIELD).

The objective of IRIS is to develop techniques for non-destructive IC analysis. One major effort is the Advanced Scanning Optical Microscope (ASOM). ASOM enables researchers to scan ICs and provide sub-micron structural details. The IRIS program also aims to develop modeling and diagnostic techniques to determine the reliability of an IC.

SHIELD is focused on developing a small (100 micron x 100 micron) component that will authenticate electronic components. This component, called a dielet, will be inserted into IC packages by the manufacturer, but will require no electrical connection between the dielet and the host component. The final goal is that the authenticity of a chip can be confirmed on receipt by using a handheld or automated probing device.

For more on the DARPA anti-counterfeiting initiatives:

Using Blockchains

Proposals have started to appear for using blockchains to create a chain of custody for electronic components through the supply chain. The key features that blockchains can provide are non-localization, security, and auditability. A public, distributed ledger prevents bad actors from hiding information and helps create a publicly auditable system. Security is enforced through cryptography, which reduces the risk of forgeries and ensures that you can only interact with an account if you possess the correct key. Operations can be replayed and audited by anyone who joins the blockchain network, since each transaction is permanently stored in the blockchain.

Despite the current excessive hype around the technology, I think a blockchain-based auditable chain of trust is a promising anti-counterfeiting measure.

For more on using blockchains to fight counterfeiting:

What You Can Do Today

The best thing you can do to prevent the use of counterfeit parts is to always purchase your components from the original manufacturer or authorized distributors. Reputable suppliers do not want to risk their reputation by supplying bad parts, and they put real effort into protecting their supply chains from infiltration. Buying components from eBay, discount retailers, or electronics markets increases the risk of receiving counterfeit components or recovered scrap material.

A secondary line of defense is incoming product inspection. Components and assemblies should be audited to ensure that counterfeit components are not being used. The inspection can be performed in-house or by a certified lab. If your product does not have an inspection process in place, this paper provides a basic inspection protocol.

We can also significantly reduce the risk of using counterfeit components by changing our expectations. When we apply cost and schedule pressures, engineers and manufacturers will cut corners and purchase from untrustworthy suppliers. By planning for sufficient lead times and paying manufacturer suggested prices we can decrease the likelihood of receiving counterfeit components.

For more on component counterfeiting:

Around the Web

We're already facing component shortages for OLED displays, DRAM, and NAND flash. It seems that we're also likely to face longer lead times for IC packaging in 2018.

Amazon announced its stewardship over the FreeRTOS kernel. The new kernel update sports AWS IoT integration right out of the box.

AllAboutCircuits recently posted GPS Times, Atomic Clock Frequencies, and the Increasing Accuracy of GPS. The article gives a quick intro to GPS time and the precision improvements that have been made over the past decades.

Website Updates

  • The Glossary has been reorganized and expanded
  • The Beginners page has new "General Resources", "Hardware" and "Startup" sections
  • The Hardware References page has been expanded with additional resources, including references on counterfeit component detection
  • The Software References page has been updated with more C++ blogs, a list of embedded systems newsletters, general software references, and Jenkins references

Articles Published in December

These posts were added to the website in December:

These were the most popular articles in December:

  1. Circular Buffers in C/C++
  2. Installing LLVM/Clang on OSX
  3. std::string vs C-strings
  4. Implementing Malloc: First-fit Free List
  5. An Overview of C++ STL Containers
  6. A GitHub Pull Request Template for Your Projects
  7. Creating and Enforcing a Code Formatting Standard with clang-format
  8. C++ Casting, or: Oh No, They Broke Malloc!
  9. memset, memcpy, memcmp, memmove
  10. An Improved Jenkins SCM Sync Configuration Plugin

Thanks for Reading!

Have any feedback, suggestions, interesting articles, or resources to recommend to other developers? Let me know!

Happy hacking!

-Phillip

December 2017: The Future of Microprocessors

Welcome to the December 2017 edition of the Embedded Artistry Newsletter!

This month I'd like to share my research on interesting innovations and research projects that will affect the future of microcontroller design and manufacturing.

The Future of Microprocessor Development and Manufacturing

In the past few months, I've written to you about Intel's new FinFET transistors and "hyperscaling" chip design techniques, as well as new DARPA electronics initiatives. This month, I want to share my research on other innovations and programs that will affect microprocessor design and manufacturing.

Bespoke Processors

Most processors designed today are "general purpose" and meant to support a wide range of applications. By relying on general purpose processors, the manufacturing ecosystem can take advantage of economies of scale and enjoy reduced component costs. Even in situations where generic processors are too powerful for our specific application, it's cheaper to purchase an overpowered processor than it is to design an application-specific one.

Over-design is still costly, though: unused features still have an impact on product size and power consumption. A research team at the University of Minnesota is investigating methods for identifying unused peripherals and logic gates in these generic processors. The team found that many of their test applications (e.g. FFT, autocorrelation, interpolation filtering) used only about 60% of the logic gates. They then created "bespoke" application-specific processors with the completely unused circuitry removed. The resulting chip designs were on average 62% smaller and 50% lower-power than the starting openMSP430 microcontroller design. Since this effort is still early in its development, the approach is not yet cost-effective or manufacturable. However, it does point to a future where we can create small, low-power, application-specific processors.

More on bespoke processors:

Embedded FPGAs

Field Programmable Gate Array (FPGA) technology allows designers to describe a hardware/chip design using a programming language such as Verilog or VHDL. Traditionally, FPGAs have been expensive, standalone components within a hardware design. However, chip designers are increasingly making use of "embedded FPGAs" (eFPGAs) in new microcontroller designs. In fact, you may already be using a chip with eFPGA technology without realizing it!

An embedded FPGA is an IP block that allows an FPGA to be integrated into a microcontroller design. Unlike standalone FPGA chips, eFPGAs rely on normal digital interconnects instead of supporting PHYs and I/O interfaces. eFPGA IP blocks provide the same benefits as standalone FPGAs (such as reprogrammability), but their tight coupling inside of the processor can result in higher communication speed and lower power consumption.

Embedded FPGAs provide a variety of chip design benefits:

  • Reduced impact of design changes - instead of expensive RTL changes, software can be updated
  • Reprogrammable and configurable I/O - allowing a single design to support a variety of I/O combinations (GPIO, UART, USART, I2C, I2S, SPI, etc.)
  • Offloading I/O processing from the MCU
  • Dramatically improved hardware accelerator performance (e.g. AES, SHA, FFT, JPEG encoding)
  • Creating reconfigurable hardware accelerators or implementing multiple accelerators using one mask
  • Maximizing battery life by implementing repetitive DSP/processor operations in a more efficient manner

While designers are primarily using eFPGAs to improve flexibility and reduce the impact of RTL changes, I look forward to a time when we will be able to program eFPGAs directly to maximize performance in our applications.

More on eFPGAs:

Embedded DARPA Initiatives

We covered two new DARPA efforts in October: 3DSoC, which is focused on creating design strategies for 3D circuit layouts; and FRANC, which seeks to overturn the Von Neumann architecture and create a new method for handling memory and logic operations. The other new DARPA initiatives are focused on improving the SoC design process to create a new era of innovation in electronics and application-specific designs:

  • Intelligent Design of Electronic Assets (IDEA) is focused on creating a design framework to enable non-experts to quickly design new complex electronics, including mixed-signal ICs, system-in-package modules, and PCBs
  • Posh Open Source Hardware (POSH) is focused on creating an open-source hardware design and verification framework to simplify SoC development
  • Software Defined Hardware (SDH) is exploring technology to create and improve reconfigurable software and hardware systems for use in data-intensive real-time processing applications, such as autonomous driving
  • Domain-Specific System on Chip (DSSoC) is focused on creating a single platform to create and program SoCs for application-specific needs

More on the new DARPA initiatives:

Plastic Processors

Researchers at ARM and PragmatIC have been working on PlasticARM, a project focused on creating cheap, disposable microcontrollers printed on plastic. PlasticARM is based on the 32-bit ARM Cortex-M0 SoC and currently uses a 2 micron process. The ARM team is also working with the University of Minnesota researchers to apply the bespoke processor technique and reduce the size and complexity of the resulting chips.

While not bleeding edge computationally, plastic chips will benefit from an estimated 90% lower IC cost than silicon chips. Plastic chips can also be flexible, thinner than a human hair, and have no rigid interconnection points. This could lead to interesting use cases, such as:

  • Disposable packaging displays
  • Sensors around water pipes to record average water pressure
  • Sensors around gas pipes to detect leaks
  • Sensors telling you whether your food is rancid or not
  • Pill bottle displays telling you whether you forgot to take your pills

More on plastic processors:

Lithographic Printing

Silicon chips are currently produced using the complex photolithographic printing process. Photoresist material is applied to a silicon wafer and spun at high speeds to produce a uniform layer. The photoresist is cured using UV light, and some of the photoresist is removed by a special solution. Afterward, a chemical etching process removes the uppermost substrate layer wherever the wafer is not protected by photoresist. This process is repeated to produce patterned layers of various materials that eventually result in a wafer of functional chips.

Molecular Imprints is a company working on utilizing imprint lithography (IL) to stamp out chips using a process similar to a printing press. Photoresist is applied to the silicon wafer using a method similar to inkjet printing. Then a glass stamp with the etching pattern is lowered onto the wafer, and the stamp draws the photoresist into its grooves via capillary action. Since the stamp is glass, the resist can be cured with UV light while the stamp is still on the wafer.

While there are still quality control problems to solve, the IL process is much simpler than photolithographic printing: simply spray photoresist, check alignment, stamp the wafer, and repeat. While Molecular Imprints is initially targeting hard drive production, I expect we will see printed processors soon enough.

More on Imprint Lithography:

Around the Web

Phil Koopman, author of Better Embedded System Software, has made the course notes for his new embedded software college course publicly available. The course covers code quality, safety, and security. You can also find the course materials on the CMU course website.

Want to get started with ARM assembly? Check out this excellent 7-part series by Azeria Labs covering ARM assembly basics.

Embedded.com took a look at how embedded software development has evolved. Using surveys collected over the past twenty years, they explore evolutions in programming languages, processor usage, OS usage, and more.

Popular Articles

These were the most popular Embedded Artistry articles in November:

  1. Circular Buffers in C/C++
  2. Installing LLVM/Clang on OSX
  3. Implementing Malloc: First-fit Free List
  4. An Overview of C++ STL Containers
  5. std::string vs C-strings
  6. A GitHub Pull Request Template for Your Projects
  7. Creating and Enforcing a Code Formatting Standard with clang-format
  8. clang-format Wrapper Script Examples
  9. Implementing an Asynchronous Dispatch Queue
  10. A GitHub Issue Template for Your Projects

Thanks for Reading!

Have any feedback, suggestions, interesting articles, or resources to recommend to other developers? Respond to this email and let me know!

While you wait on the next edition, check out our website.

Happy hacking!

-Phillip

November 2017

Welcome to the November 2017 edition of the Embedded Artistry Newsletter! This is a monthly newsletter of curated and original content to help you build better embedded systems. This newsletter is intended to supplement the website and covers topics not mentioned there.

This month we'll be covering:

  • The recently announced vulnerability in the WPA2 algorithm
  • Industry standard APIs to make multi-core programming more accessible
  • The EMB² multicore programming framework
  • The recently announced ARM Platform Security Architecture
  • "The Coming Software Apocalypse"

WPA2 Vulnerability: Key Reinstallation Attacks

A serious flaw in the WPA2 security algorithm, which protects our Wi-Fi networks, was announced this month. The attack vector is dubbed KRACK, for "Key Reinstallation Attack." KRACK exploits a flaw in the WPA2 algorithm itself, so any correct implementation is likely to be affected. By exploiting the 4-way handshake protocol used to exchange encryption keys, a third party can collect and replay the key installation message. This vulnerability enables packet replay, packet forgery, packet decryption, and man-in-the-middle attacks.

Stay alert and update your devices as soon as updates are available. Do not switch back to the less-secure WEP security protocol: once this flaw is patched, WPA2 will remain secure. If you are building or supporting a Wifi-enabled device, check with your chip or SDK vendors for updates and timelines.

More on the KRACK attack vector:


Industry Standard Multicore APIs

Multicore embedded systems are becoming increasingly popular. However, writing programs to use multicore processors effectively is a challenge. The Multicore Association (MCA) aims to improve the adoption of multicore programming by defining and promoting specifications that better enable multicore product development. If you are writing software destined for a multicore embedded system, consider using these APIs to keep your software portable and abstracted from underlying architectures.

The MCA currently defines three multicore APIs, covering task management (MTAPI), resource management (MRAPI), and communication and synchronization between cores (MCAPI).

The aim of the Multicore Task Management API (MTAPI) is to create a standardized API for task-parallel programming on a wide range of hardware architectures. Manually creating and managing threads can be complex, error-prone, and depends on your operating system and hardware. MTAPI abstracts hardware and operating system details and allows programmers to focus on the parallel programming solution. There are no compiler, hardware, or operating system dependencies, and the API is written in C to minimize ABI interoperability problems. The API can be implemented on resource-limited devices and covers a variety of multicore architectures and hardware acceleration units. Task scheduling can be optimized for latency and fairness, enabling its use on systems with soft real-time requirements.
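
To give a feel for the programming model, here is a minimal sketch of the MTAPI task flow in C. It is based on my reading of the MTAPI specification and the EMB² examples; the function names come from the spec, but the exact signatures, attribute constants, and error handling are simplified from memory, so treat this as pseudocode and consult the specification before relying on it.

#include <mtapi.h>

/* Application-defined identifiers shared by all participating nodes. */
#define DOMAIN_ID 1
#define NODE_ID   1
#define JOB_ID    1

/* The action function is the unit of work the MTAPI runtime schedules. */
static void multiply_action(const void* args, mtapi_size_t args_size,
                            void* result, mtapi_size_t result_size,
                            const void* node_local, mtapi_size_t node_local_size,
                            mtapi_task_context_t* context)
{
    const int* in = (const int*)args;
    *(int*)result = in[0] * in[1];
}

static int run_multiply_task(void)
{
    mtapi_status_t status;
    mtapi_info_t info;
    int args[2] = { 6, 7 };
    int result = 0;

    /* Bring up the runtime and register the action under a job ID. */
    mtapi_initialize(DOMAIN_ID, NODE_ID, MTAPI_NULL, &info, &status);
    mtapi_action_create(JOB_ID, multiply_action, MTAPI_NULL, 0,
                        MTAPI_DEFAULT_ACTION_ATTRIBUTES, &status);
    mtapi_job_hndl_t job = mtapi_job_get(JOB_ID, DOMAIN_ID, &status);

    /* Start one task; the runtime decides which core executes it. */
    mtapi_task_hndl_t task = mtapi_task_start(MTAPI_TASK_ID_NONE, job,
                                              args, sizeof(args),
                                              &result, sizeof(result),
                                              MTAPI_DEFAULT_TASK_ATTRIBUTES,
                                              MTAPI_GROUP_NONE, &status);
    mtapi_task_wait(task, MTAPI_INFINITE, &status);
    mtapi_finalize(&status);

    return result; /* 42 if everything worked */
}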

The Multicore Resource Management API (MRAPI) specifies application-level resource management capabilities. This API allows multicore applications to coordinate concurrent access to various system resources.

The Multicore Communications API (MCAPI) defines an API and semantics for communicating and synchronizing processing cores in embedded systems. MCAPI is a message-passing API that is designed for closely-distributed systems (e.g. multiple cores on a single chip, multiple chips on a single board). The API is kept simple to support sufficient functionality while allowing efficient implementations for resource constrained systems.

More information on the MCA standards:


Multicore Framework: Embedded Multicore Building Blocks

Embedded Multicore Building Blocks (EMB²) is an open-source C/C++ library for developing parallel embedded systems applications. EMB² is built on the Multicore Task Management API that we reviewed in the previous section.

EMB² provides generic building blocks for building parallel embedded applications, including basic parallel algorithms, concurrent data structures, and application skeletons. The majority of the framework APIs are non-blocking, avoiding common multi-threaded problems encountered when using locks. The framework also utilizes an abstraction layer that makes it easy to port to new operating systems and processor architectures.

EMB² is implemented as a C API with C++ wrappers. The project is based on C99 and C++03 to provide maximum usability in the embedded world. C11 and C++11 can be selected to use the standard atomic operations instead of EMB²'s own atomics.
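
Here's a small taste of the C++ wrappers. This sketch assumes the embb::mtapi::Node initialization step and the embb::algorithms::ForEach interface as I remember them from the project's samples (and uses a C++11 lambda for brevity), so double-check the current EMB² headers before copying anything.

#include <vector>
#include <embb/mtapi/mtapi.h>           // runtime/node management
#include <embb/algorithms/algorithms.h> // ForEach, Reduce, Scan, ...

int main()
{
  // The underlying MTAPI runtime must be initialized before the algorithms run.
  embb::mtapi::Node::Initialize(1 /* domain id */, 1 /* node id */);

  std::vector<int> samples(1024, 1);

  // Distribute the loop body across the available cores.
  embb::algorithms::ForEach(samples.begin(), samples.end(),
                            [](int& value) { value *= 2; });

  embb::mtapi::Node::Finalize();
  return 0;
}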

My favorite aspect about this project is the emphasis on quality: the project maintains zero compiler warnings, sports 90% unit test coverage, utilizes static analysis and automated rule checks, and has formally validated pieces of the system. It's refreshing to find a team that cares about quality!

If you're looking for a simple framework to get started with multicore embedded development, check out EMB²:


The ARM Platform Security Architecture

As the news frequently highlights, inadequate security on embedded systems is a major problem. Last September, ARM announced their intention to work on a platform security architecture to help combat this threat. Based on announcements this month, it looks like ARM is delivering on that promise.

Dubbed the Platform Security Architecture (PSA), ARM is focusing on three major components:

  1. Threat Models and Security Analyses derived from a range of typical IoT use cases
  2. Architecture specifications for firmware and hardware
  3. An open source project similar to Arm Trusted Firmware

The PSA is targeted for smaller cores and low-cost devices. Sensitive assets, such as keys and credentials, will be managed by a Secure Processing Environment (SPE) and will be separated from the application firmware.

In addition to the PSA, ARM has announced two new security-related cores. The CryptoIsland-300 is a programmable security core which expands upon the CryptoCell that they announced last year. The SDC-600 is a secure debugging channel that will allow users to enable or disable debugging abilities by using a cryptographic certificate.

The PSA is initially targeted for Cortex-M devices and will include open-source implementation examples. The PSA release is expected in Q1 of 2018. Support for Cortex-R and Cortex-A devices will follow after Cortex-M.

More on the ARM PSA:


The Coming Software Apocalypse

The Atlantic recently published an article titled "The Coming Software Apocalypse". Our world is becoming increasingly digitized, and we are encountering more and more flaws in the software we depend on. Even our cars, which once upon a time were primarily mechanical systems, now contain upwards of 100 million lines of code. The article dives into some of the challenges involved with the increase in software complexity, primarily focusing on the limits of our intellectual management of large software projects. Following this premise, the author advocates an increased emphasis on using tools during the development process: software should be modeled before any code is written, algorithms should be checked with formal methods or tools such as TLA+, and code generators should be used to reduce programmer errors.

I later stumbled across a response to The Atlantic's article titled "Tools are not the Answer". The author of this post emphasizes a point which I wholeheartedly agree with: tools are helpful, but not the complete answer. Programmers must hold themselves to higher standards.

The rebuttal emphasizes that our software woes primarily stem from two causes:

  1. Too many programmers take sloppy short-cuts under schedule pressure.
  2. Too many other programmers think it’s fine, and provide cover.

And the obvious solution:

  1. Raise the level of software discipline and professionalism.
  2. Never make excuses for sloppy work.

This is not to say that tools won't help: our software is still becoming increasingly complex and difficult to manage. We must improve our development processes and hold ourselves to higher standards.

Read more here:


Selected Quotes from the Articles

“Typically the main problem with software coding—and I’m a coder myself,” Bantégnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”

This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”

The software did exactly what it was told to do. The reason it failed is that it was told to do the wrong thing.

Take error handling and correction seriously in your designs:

But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”

We already know how to make complex software reliable, but in so many places, we’re choosing not to. Why?

I stood before a sea of programmers a few days ago. I asked them the question I always ask: “How many of you write unit tests on a regular basis?” Not one in twenty raised their hands.


Website Updates

I added additional C++ references to the Software References page. I also expanded the Glossary with additional terms and an improved organizational scheme.

These were the most popular articles in October:

  1. Circular Buffers in C/C++
  2. Installing LLVM/Clang on OSX
  3. Implementing Malloc: First-fit Free List
  4. std::string vs C-strings
  5. An Overview of C++ STL Containers

Thanks for Reading!

Have any feedback, suggestions, interesting articles, or resources to recommend to other developers? Respond to this email and let me know!

Happy hacking!

-Phillip

October 2017

Welcome to the October 2017 edition of the Embedded Artistry Newsletter! This is a monthly newsletter of curated and original content to help you build better embedded systems. This newsletter is intended to supplement the website and covers topics not mentioned there.

This month we'll be covering:

  • The BlueBorne Bluetooth vulnerability
  • DARPA funds embedded initiatives
  • A helpful introductory RTOS series
  • Amazon launches an FPGA cloud
  • A terrible security flaw discovered in pacemakers
  • Limiting the number of characters printf displays

The BlueBorne Bluetooth Vulnerability

Armis Labs recently announced a series of eight attack vectors that endanger the majority of our Bluetooth devices, including Android, iOS (pre-10.0), Windows, and Linux. The threat is dubbed "BlueBorne", a blend of Bluetooth and airborne. Affected devices are vulnerable to BlueBorne as long as Bluetooth is enabled, even if the device is not discoverable and not paired to the attacker's device. BlueBorne does not require any action by the user, and the user may never know their device has been compromised. The disclosed vulnerabilities are fully operational and enable a variety of attacks, such as arbitrary code execution, man-in-the-middle, and information leakage.

Bluetooth is a nearly ubiquitous technology, and Armis estimates that over 8.2 billion devices may already be affected. Popular libraries like BlueZ, which is used on a variety of PC and embedded systems, are affected. It is recommended to turn off Bluetooth when you are not using it until the vulnerabilities have been addressed. Ensure your software is up-to-date and keep an eye out for software updates on your Bluetooth-enabled systems. If you are building a Bluetooth-enabled system, review the technical paper and ensure that your design is not susceptible to the disclosed vulnerabilities.

For more on BlueBorne:

DARPA Funds Embedded Initiatives

DARPA has announced that it is providing funding for six new programs with an embedded focus. DARPA is focusing the new initiatives on researching new materials and integration techniques, improving circuit design tools, and creating new system architectures for microelectronics. The programs that sound the most exciting are in the Materials and Integration category: "Three-dimensional Monolithic System-on-a-chip" (3DSoC) and "Foundations Required for Novel Compute" (FRANC).

3DSoC is aimed at improving speeds and reducing power consumption by transitioning from a 2D circuit layout to a 3D circuit layout. By constructing microelectronic circuits in 3D space (e.g. in a cube) we can create novel design strategies and arrangements for our circuits and chips. Migrating to a 3D circuit arrangement is expected to improve logic density, increase computational speed, optimize for size, and reduce power.

FRANC is looking to overturn John von Neumann's computer architecture model, which separates the memory and processing blocks. Computations are often limited by the speed at which data can be moved back and forth between the processor and memory. As a result, memory transfer speeds are a major bottleneck in many systems. FRANC's aim is to address this bottleneck by developing a new method for handling memory and logic in a combined manner.

It's exciting to see DARPA inducing major changes in our microelectronic circuits and system architectures. Innovations like these will have a significant impact on our industry in the coming decades.

More on DARPA's new initiatives:

An Introductory RTOS Series

The embedded guru Colin Walls has been working on a series called RTOS Revealed. This series of articles is a great way to learn more about real-time and OS concepts, multi-threaded scheduling, and how to use an RTOS. Colin covers basic RTOS concepts and dives into the Nucleus SE RTOS to provide concrete examples. I recommend reviewing the entire series if you are new to the embedded systems space.

Here's the current lineup of articles:

New articles in the series are released on a monthly cadence.

Amazon Launches an FPGA Cloud

Xilinx and Amazon have partnered to launch customizable FPGA instances in the AWS Cloud for applications that can benefit from hardware acceleration. These instances are built on the Xilinx Virtex UltraScale+ FPGAs and can include up to eight FPGAs per instance. Amazon also provides an FPGA Hardware Developer Kit (HDK) to simplify development of FPGA instances.

A Terrible Flaw Discovered in Pacemakers

465,000 U.S. patients have been told to visit a clinic to receive a firmware update for their St. Jude pacemakers. The firmware contains a security flaw which allows hackers within radio range to take control of a pacemaker. This is one more example demonstrating that security must be a crucial aspect of embedded systems design and development. Taking security shortcuts never pays.

Limiting the Number of Characters printf Displays

I originally hesitated about sharing this tip, but I've found myself using it repeatedly: you can control how many characters printf spits out for the %s specifier by specifying a precision.

There are two options for controlling the length. You can specify the maximum using a fixed value:

// Fixed precision in the format string
const char * mystr = "This string is definitely longer than what we want to print.";
printf("Here are first 5 chars only: %.5s\n", mystr);

You can also control the length programmatically by using an asterisk (*) in the format string instead of a fixed precision. The length is then passed as an int argument immediately before the string you want to print.

// Only 5 characters printed. When using %.*s, add an argument to specify the length before your string
printf("Here are the first 5 characters: %.*s\n", 5, mystr);

Website Updates

This month, the website went through a total visual redesign!

Old pages such as "Around the Web" have been split out into separate pages to provide better categorization:

I've also added some new pages in the "About" section:

These were the most popular articles in September:

  1. Installing Clang/LLVM on OSX
  2. Circular Buffers in C/C++
  3. C++11 Fixed Point Arithmetic Library
  4. An Overview of C++ STL Containers
  5. std::string vs C-strings

Goodbye to a Dear Friend

We lost our dear companion and beloved mascot Jensen to stomach cancer. She will be sorely missed.

Thanks for Reading!

Have any feedback, suggestions, interesting articles, or resources to recommend to other developers? Respond to this email and let me know!

September 2017

Welcome to the September 2017 edition of the Embedded Artistry Newsletter! This is a monthly newsletter of curated and original content to help you build better embedded systems. This newsletter is intended to supplement the website and covers topics not mentioned there.

This month we'll be covering:

  • Follow-up Bluetooth Mesh reading recommendations
  • A flexible 2.4GHz antenna suitable for metal surfaces
  • A selection of 2017 embedded market reports that are worth reviewing
  • The incredible engineering behind the Voyager spacecraft
  • How Intel's chip design advances have allowed them to keep Moore's Law alive
  • Building your own SMT reflow oven using a halogen lamp

Bluetooth Mesh Articles

In last month's newsletter, we reviewed two major additions to Bluetooth: Bluetooth 5 and Bluetooth Mesh. Since Bluetooth Mesh is fresh off the press, the Bluetooth SIG has published some great articles to demystify the new standard.

Check out these recent posts:

A Flexible 2.4GHz Antenna for Metal Surfaces

I was surprised to see an announcement this month regarding an antenna designed for metal surfaces. Building connected devices can be quite a challenging experience. You need to give careful attention to antenna placement and tuning in order to optimize your product's performance. These challenges increase significantly if your product has integrated metal. Metal surfaces can wreak havoc on your antenna design, resulting in antenna detuning, efficiency losses, and reduced communication ranges.

Laird's new mFlexPIFA antenna looks like a promising solution for products with metal enclosures. The mFlexPIFA is about the size of a quarter and is built for 2.4GHz devices. The antenna is adhesive-backed and can be mounted directly onto metal surfaces without detuning the antenna. The design is also flexible, allowing you to mount your antenna to curved surfaces.

Consider this antenna solution in your next connected design, especially if it involves a metal enclosure.

More on the FlexPIFA antenna:

2017 Embedded Systems Market Studies & Surveys

"Embedded systems" is a blanket term describing a vast array of devices with differing purposes, computational capabilities, and reliability levels. It's easy to forget the differences in embedded applications and devices, and I find that reviewing market surveys provides some great insight into how the field is developing. I want to share three market surveys with you today:

The Hax hardware accelerator's embedded market study focuses on general trends in hardware development, development directions in different sectors (e.g. consumer, health, industry), automation (which has taken off in China), and hardware funding models.

The AspenCore market survey is less focused on where the market is heading. Instead it dives into areas such as development practices, tools, project timelines, and processor selection.

The Barr Group's embedded systems safety and security survey provides some interesting and alarming insights. They conclude that even though there is increased risk of bodily injury, many automotive design teams are still not using best practices such as static analysis, regression testing, coding standards, and code reviews.

In reading these surveys, I noticed the following general trends:

  • C is still dominating in the embedded space
  • More and more projects are using multiple processors
  • Industrial sensing and automation is rising
  • Devices are becoming increasingly "connected"
  • In many cases best practices are being overlooked

The Voyager Mission Celebration Series

All About Circuits has been celebrating the 40th anniversary of the Voyager I and II spacecraft by dedicating a series of articles to them. The articles dive into the electronics and engineering behind these incredible systems. I have an intense level of respect for the engineers who built such reliable systems without the bountiful computational and technological capabilities that we have today. It would be amazing if any of my devices were still operating 40 years from now, even in the comfortable confines of Earth!

Ten excellent articles have been published in the Voyager spacecraft series:

  1. Voyager Mission Anniversary Celebration Series: Introduction
  2. Powering the Voyager Spacecraft with Radiation: The RTG (Radioisotope Thermoelectric Generator)
  3. Communicating Over Billions of Miles: Long Distance Communications in the Voyager Spacecraft
  4. The Brains of the Voyager Spacecraft: Command, Data, and Attitude Control Computers
  5. Exploring the Solar System with the Voyager Spacecraft’s Cameras, Polarimeters, and Magnetometers
  6. The Infrared Interferometer, Spectrometer, and Radio Astronomy of the Voyager Spacecraft
  7. How the Voyager Missions’ Plasma Science Investigations Teach Us About Solar Winds
  8. The Low Energy Particle Instruments on the Voyager Spacecraft
  9. The Voyager Mission: Insight into Our Solar System
  10. Voyager Anniversary Celebration: 40 Years in Space

The New York Times has recently published "The Loyal Engineers Steering NASA’s Voyager Probes Across the Universe" which takes a look at the human side of the Voyager missions.

While not part of the Voyager series, there was another recent article describing how the space race gave us GPS. If you're interested in the history and theory behind GPS, take a look at "How the Space Race Gave Us GPS Technology".

Intel's New Processor Designs Keeping Moore's Law Alive

This article was published earlier in the year, but I still think it's an illuminating read. Back in 2002, Intel announced a breakthrough with their new field effect transistor ("FinFET") design, dubbed the "tri-gate transistor". In 2011, Intel finally announced their first chips built with tri-gate transistors and declared the new transistor the official future of Intel's processing lines. The 2011 announcement involved a 22nm process, and Intel followed that up in 2014 with a 14nm process. Intel is continuing to maintain their 14nm process and is finally coming out with a 10nm process this year.

At a time when keeping up with Moore's Law seems like an impossible task, Intel has managed to keep the law alive: both their 14nm and 10nm processes have more than doubled in transistor density. Intel credits their "hyperscaling" techniques, such as reducing the number of dummy gates required to isolate logic cells and stacking metal contacts above gates. These hyperscaling techniques give Intel a transistor density advantage over their competitors at the same process size. For example, Samsung's 10nm process is comparable in transistor density to Intel's 14nm process.

While I don't think I'll be writing firmware for Intel-powered embedded devices in the near future, I'm excited to see the pressure that Intel's 10nm process puts on other chipmakers. Size is a major concern in the embedded world, so I'm certain we will see some of these hyperscaling techniques applied to other chip families in the future.

More on Intel's new architecture, Transistors, and Moore's Law:

Build Your Own SMT Reflow Oven With a Halogen Lamp

I've been slowly building an electronics lab over the years, and I'm lucky enough to possess an oscilloscope, a bench-top DMM, and a logic analyzer. One project I've had in mind is building an SMT reflow oven. Being able to reflow boards would increase assembly and repair capabilities. I was thrilled to find a blog post about building an SMT reflow oven using a halogen lamp. The author was able to build his own SMT reflow oven for ~$30 by using a halogen lamp, an AC dimmer, and a reflow oven controller.

I discovered the SMT reflow project through Dangerous Prototypes. Check out their website if you're looking for electronics projects to tackle in the future.

Website Updates

I've made a few updates to the website:

  • Updated the Development Kits page to have a much nicer presentation style. Each development kit has its own dedicated blog post, allowing me to provide more detailed information for each kit.
  • Added more terms to the Glossary

These were the most popular articles over the past month:

  1. An Overview of C++ STL Containers
  2. Installing LLVM/Clang on OSX
  3. Choosing the Right STL Container: General Rules of Thumb
  4. C++11 Fixed Point Arithmetic Library
  5. Circular Buffers in C/C++

Happy hacking!

-Phillip

August 2017: Bluetooth Edition

Welcome to the August 2017 edition of the Embedded Artistry Newsletter! This is a monthly newsletter of curated and original content to help you build better embedded systems. This newsletter is intended to supplement the website.

This month we'll be covering:

  • MAX17055 - a new fuel gauge chip from Maxim
  • The Bluetooth 5 standard and changes to the PHY
  • Processors and development kits that support Bluetooth 5
  • The new Bluetooth Mesh standard
  • SDKs that support Bluetooth Mesh
  • A handy trick for when you can't read the part numbers on a chip

MAX17055 - New Maxim Fuel Gauge

Those of us who have used TI's GasGauge line understand how frustrating their parts can be. Undocumented behavior in the gauge firmware, required battery characterization, and per-product configuration can lead to frustrating bugs that are hard to debug. Unless you're a large customer, you have very little chance of getting help from TI and problems often stay unresolved.

One of my clients is using a new fuel gauge chip - the MAX17055 Fuel Gauge. Maxim claims that their gauge "eliminates battery characterization requirements and simplifies the host software interaction" and requires only 7µA of operating current. Unlike TI, Maxim provided great direct support for integrating this part into the new system.

Maxim provides a software implementation guide which describes various methods of using the part. By following the software guide, I implemented a driver for my system in less than an hour. Once the new boards arrived, the driver was working perfectly in a matter of minutes and the initial calibration resulted in readings that were within 5% of the actual voltage value. Since the Maxim Fuel Gauge is a learning gauge, this initial accuracy should improve over a few charge/discharge cycles.
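
For the curious, talking to the gauge boils down to 16-bit register reads over I²C. The sketch below shows the general shape of such a driver; the 7-bit device address (0x36), the RepSOC register (0x06), and the 1/256% LSB scaling are my recollection of the datasheet, and i2c_read_word() is a stand-in for whatever read function your platform's I²C HAL provides, so verify all of it before use.

#define MAX17055_I2C_ADDR    0x36 // 7-bit address (verify against the datasheet)
#define MAX17055_REG_REPSOC  0x06 // reported state of charge, 1/256 % per LSB

// Hypothetical platform helper: read a 16-bit little-endian register.
// Replace with your I2C driver's transfer function.
extern int i2c_read_word(uint8_t dev_addr, uint8_t reg, uint16_t * value);

// Returns the state of charge as an integer percentage, or -1 on I2C error.
int max17055_read_soc(void)
{
    uint16_t raw;

    if (i2c_read_word(MAX17055_I2C_ADDR, MAX17055_REG_REPSOC, &raw) != 0)
    {
        return -1;
    }

    return (int)(raw >> 8); // high byte is whole percent, low byte is fractional
}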

If you're looking for a fuel gauge to use in your next battery-powered device, I recommend the MAX17055.


Bluetooth 5 & Bluetooth Mesh

We've seen two major Bluetooth specification releases in the past 12 months: Bluetooth 5 and Bluetooth Mesh.

It typically takes ~6 months for us to see devices once a new specification is released. We're well into that window for Bluetooth 5, so expect to start seeing new devices soon. Bluetooth Mesh is fresh off the press, so now is the time to start familiarizing yourself with the specification to stay ahead of the curve.

Next, I'll describe the changes involved with the Bluetooth 5 and Bluetooth Mesh specifications. I'll also provide some supplementary reading material and showcase some interesting Bluetooth 5 chips and development kits.


Bluetooth 5

The Bluetooth 5 specification was released in December 2016. The new specification claims a variety of improvements:

  • 4x range
  • 2.5x lower power (BLE)
  • 2x speed (supporting a new 2Mbit/s high-throughput mode)
  • Increased advertising data payload from 31B to 255B
  • Ability to chain advertising packets to create extended payloads
  • Improved coexistence with WiFi, Bluetooth, and other 2.4GHz devices via an improved channel hopping algorithm

The speed and range improvements are brought about by changing the physical layer (PHY) of the Bluetooth protocol stack. The Bluetooth PHY now supports two additional modes, which allow for increased speed or increased range. The three Bluetooth PHYs are:

  1. "LE 1M" - the PHY used in Bluetooth 4
  2. "LE 2M" - the 2Mb/s PHY, which doubles the speed of the LE1M PHY
  3. "LE Coded" - the new long-range PHY that adds error correction

The increased range does not involve any transmit power increases. Instead, the increase is provided by improving receiver sensitivity and utilizing Forward Error Correction (FEC). FEC adds redundant data bits to the transmitted packets. These redundant bits allow the receiver to correct errors and decode messages at a lower signal-to-noise ratio (SNR), providing a ~12dB improvement in receiver sensitivity. Since free-space path loss grows by roughly 20dB for every 10x increase in distance, a 12dB link budget improvement works out to about 10^(12/20) ≈ 4x the range. Of course, the added redundancy comes with a throughput penalty, since more bits must be transmitted for each payload.

You can switch between PHYs by using the new HCI command "LE Set PHY". You can independently select the PHY to use for both transmit (TX) and receive (RX). This means that we can switch between PHY modes depending on our operational situation. Additional HCI commands are defined to support setting the default PHY and for querying the PHY capabilities of remote devices.

Bluetooth 5 remains compatible with Bluetooth 4.x devices. However, because the major Bluetooth 5 changes were made at the PHY layer, a link with a Bluetooth 4.x device is tied to the LE 1M PHY and cannot take advantage of the new Bluetooth 5 benefits.

For more details on Bluetooth 5, read these:

Bluetooth 5 Chips & Development Kits

Let’s take a look at some development kits and chips from Nordic and Silicon Labs that already support the new standard.

Nordic nRF52

Nordic offers Bluetooth 5 support in its nRF52 line, consisting of three chips: nRF52810, nRF52832, and nRF52840. All of the nRF52 chips support the new channel selection algorithm (CSA #2), the LE 2M PHY, and the increased advertising packet size. However, only the nRF52840 supports the new long-range LE Coded PHY.

nRF52810 & nRF52 DK

The nRF52810 is the simplest of the three Bluetooth 5 chips, sporting a Cortex-M4 and a basic set of peripherals. Due to the reduced flash, RAM, and peripheral counts, this chip is useful as a dedicated Bluetooth processor in a multi-chip system.

The nRF52810 itself is not included on a development kit. Nordic recommends the nRF52 DK for exploring the low end of the nRF52 series. This starter board is compatible with Arduino shields, allowing for some interesting prototyping options. The nRF52 DK utilizes an nRF52832, so you'll want to hold off on using features the nRF52810 doesn't support if that's your target part.

nRF52810 Specifications:

  • 32-bit Cortex-M4 64MHz Processor
  • 1.7v to 3.6v operation
  • 192kB flash + 24kB RAM
  • Up to +4dBm output power
  • -96dBm sensitivity, Bluetooth low energy
  • 1 x Master/Slave SPI
  • 1 x Two-wire interface (I²C)
  • 1 x PWM (4 channels)
  • AES HW encryption
  • 8-channel 10/12-bit ADC
  • Quadrature decoder
  • 64-level analog comparator
  • Real Time Counter (RTC)
  • Digital microphone interface (PDM)

More on the nRF52810:

Nordic nRF52832 & Nordic Thingy:52

The nRF52832 is the mid-tier Bluetooth 5 chip. It is built on a Cortex-M4F processor and provides a significant increase in flash, RAM, and peripherals over the nRF52810. These improvements make the nRF52832 an attractive choice as a primary processor for your system or for exploring new BLE features like IPv6 support. The nRF52832 also includes an on-chip NFC tag to support out-of-band (OOB) pairing, which simplifies the process of exchanging authentication information between two Bluetooth devices.

The nRF52 DK supports the nRF52832, but Nordic also sells the Thingy:52 development kit. The Thingy:52 provides you with a variety of environmental sensors (temp, humidity, pressure, air quality, color, and light), a 9-axis IMU (accelerometer, gyro, and compass), a speaker, and a microphone. The range of components provided with this dev kit is impressive and useful for many Bluetooth prototyping scenarios. Nordic also supplies a Thingy:52 app and demo code to get you up and running as quickly as possible.

nRF52832 Specifications:

  • 32-bit ARM Cortex-M4F 64MHz Processor
  • 1.7v to 3.6v operation
  • 512kB flash + 64kB RAM
  • On-chip NFC tag for Out-of-Band (OOB) pairing
  • Up to +4dBm output power
  • -96dBm sensitivity, Bluetooth low energy
  • 3 x Master/Slave SPI
  • 2 x Two-wire interface (I²C)
  • UART (RTS/CTS)
  • 3 x PWM
  • AES HW encryption
  • 12-bit ADC
  • Real Time Counter (RTC)
  • Digital microphone interface (PDM)
  • On-chip balun

More on nRF52832:

nRF52840 & Preview DK

The nRF52840 is the king of the Bluetooth 5 chips and the only chip in the product line that supports 802.15.4 and the new Bluetooth 5 LE Coded PHY. The nRF52840 provides an impressive 1MB of flash and 256kB of RAM. The chip sports additional peripherals, such as the ARM CryptoCell cryptographic co-processor and a USB 2.0 controller. With an improved output power of up to +8dBm, the nRF52840 is definitely the chip to pick if you're looking at long-range Bluetooth communications.

Nordic has released a nRF52840 Preview Development Kit (PDK). This kit is more similar to the nRF52 DK than the Thingy:52. The PDK provides no external peripherals or sensors to play with, but like the nRF52 DK it is compatible with Arduino shields for easy prototyping.

nRF52840 Specifications:

  • 32-bit ARM Cortex-M4F 64MHz Processor
  • 1.7v to 5.5v operation
  • 1MB flash + 256kB RAM
  • Up to +8dBm output power
  • 802.15.4 radio support (ZigBee and Thread)
  • On-chip NFC
  • PPI –Programmable Peripheral Interconnect
  • 48 x GPIO
  • 1 x QSPI
  • 4 x Master/Slave SPI
  • 2 x Two-wire interface (I²C)
  • I²S interface
  • 2 x UART
  • 4 x PWM
  • USB 2.0 controller
  • ARM TrustZone CryptoCell-310 Cryptographic and security module
  • AES 128-bit ECB/CCM/AAR hardware accelerator
  • Digital microphone interface (PDM)
  • Quadrature decoder
  • 12-bit ADC
  • Low power comparator
  • On-chip balun

More on nRF52840:

Silicon Labs EFR32

Silicon Labs offers Bluetooth 5 support in the EFR32 Blue Gecko line of SoCs. Similar to the Nordic nRF52810, the EFR32 series is built upon a Cortex-M4 processor. The EFR32 line sports a whopping +19dBm of programmable output power in its beefiest configuration.

Silicon Labs provides a Blue Gecko Starter Kit to support EFR32 development. The starter kit is modularized to support a wide variety of radio daughter boards for easy prototyping and chip comparisons. The starter kit comes with two Bluetooth radio daughter boards. Only the provided EFR32BG13 radio board supports the LE Coded and LE 2M PHYs. The starter kit contains a few push buttons and a coin cell battery holder, but does not include other on-board peripherals. A wide variety of headers are supplied for your prototyping needs.

Unlike Nordic's nRF52 line, the EFR32 line has many different chip configurations. Also, not all EFR32 chips support the new 2M PHY and LE Coded PHY, so be sure to include those features in your search. Silicon Labs provides a full list of EFR32 SoCs, so you can find one that fits your needs exactly.

Sample EFR32 Specifications using maximum values:

  • ARM Cortex-M4 Processor (up to 40MHz)
  • Up to 1MB of flash
  • Up to 256kB SRAM
  • Up to +19dBm output power
  • AES256/128 hardware accelerator
  • 12-bit ADC
  • Current DAC (4-bit)
  • Up to 4x analog comparators
  • Low-energy UART
  • Up to 4x USART (SPI, UART, I2S, IrDA)
  • Up to 2x I2C
  • Up to 65 GPIOs
  • On-chip balun

EFR32BG12P632F512FM38 Specifications (Blue Gecko Starter Kit):

  • ARM Cortex-M4 40 MHz Processor
  • 512kB Flash + 64kB SRAM
  • +10dBm output power
  • -103.3dBm receiver sensitivity
  • AES-128/256 hardware accelerator
  • 12-bit ADC
  • Current DAC (4-bit)
  • Up to 4x analog comparators
  • 4x UART Ports
  • 3x USART ports (SPI, UART, I2S)
  • 2x I2C ports
  • 31 GPIOs

More on EFR32:


Bluetooth Mesh

Bluetooth Mesh is not included in the Bluetooth 5 standard. It was released in July 2017. Bluetooth Mesh provides the ability for Bluetooth devices to implement a many-to-many (m:m) network with a maximum size of 32,000 devices. Previously we were limited to a one-to-many (1:m) topology, where a central Bluetooth hub was responsible for broadcasting messages to the various nodes. In addition to m:m topology support, Mesh allows devices to relay data to other devices that are not in direct radio range. This re-broadcasting scheme allows the network to cover a larger area than with Bluetooth LE. Since Bluetooth Mesh is built upon Bluetooth LE, it can be utilized by both Bluetooth 4.x and Bluetooth 5 devices. Existing devices in the field can take advantage of Bluetooth Mesh as long as they are capable of firmware updates.

Bluetooth Mesh devices communicate using a publish/subscribe messaging system. Whenever a device publishes a message to a specific topic, all devices that are subscribed to that topic receive a copy. Mesh also introduces the concept of device "state", which can be adjusted through published messages. The new "model" concept defines a mesh node's messages, states, and behavior.

Bluetooth Mesh utilizes a "managed flooding" approach, allowing for a peer-to-peer multi-path communication network. Since there is no central hub or routing nodes, the network is more resilient to device failures. Messages are retransmitted by devices which are designated as "relays", allowing messages to reach nodes that are not in direct radio range. A message can make a maximum of 127 hops, allowing us to cover quite a large physical area. Devices contain a message cache which is used to determine whether a particular message has been seen before. If it has, the message is discarded and not processed by the stack.

Some of our mesh nodes are likely to be low-power devices which wake up periodically to relay data. Bluetooth Mesh allows us to designate "friend" nodes which are not power constrained. These friend nodes store messages intended for the low-power node. Once the low-power node wakes up, it can request the cached information from its friend. This concept of "friendship" allows us to implement an efficient wakeup schedule to conserve battery life.

Mesh nodes send out regular heartbeat messages to let us know that they are alive. These heartbeat messages allow the network to learn about its topology and help devices avoid unnecessary message retransmissions. There is also a mandatory "health" model which allows devices to send out fault information, such as in low battery or overheating conditions.

The Bluetooth SIG is targeting industrial applications, so Bluetooth Mesh is designed with security in mind. Every packet is encrypted and authenticated, asymmetric cryptography can be utilized, and security keys are refreshed periodically.

It's possible to utilize multiple mesh networks in the same location. Each mesh network has an identifier which indicates which network the packet belongs to. Also, thanks to the built-in security, devices cannot decrypt or authenticate mesh packets from another mesh network. Each network remains isolated from the other.

Large-scale sensor networks, asset tracking, building automation, and commercial lighting solutions are expected to be the first use cases of the new mesh networking protocol. Multiple projects that I've worked on recently will benefit from switching to Bluetooth Mesh.

More on Bluetooth Mesh:


Bluetooth Mesh SDKs

In order to build a mesh network, we need a compatible software stack. Bluetooth mesh networks require a Bluetooth LE 4.x or 5.0 stack that supports the GAP Broadcaster and Observer roles, which are used to advertise and scan for advertising packets. Luckily, both Nordic and Silicon Labs have made our lives easy and provide full-fledged SDKs to support Bluetooth Mesh development.

Nordic

Nordic supplies an nRF5 SDK for Mesh. The SDK is currently noted as "alpha" quality, but you can download the SDK and start prototyping immediately.

The Mesh SDK is compatible with both the nRF51 and nRF52 processor lines. The SDK comes with example applications and models for beaconing, lighting control, and provisioning devices (including provisioning through relay nodes). The SDK allows for node-to-node and node-to-group communications and supports configurable scanning and advertising intervals. It's also worth noting that Nordic's excellent OTA DFU support remains in place with the Bluetooth Mesh SDK.

More on the Nordic Mesh SDK:

Silicon Labs

Silicon Labs also supplies a Bluetooth Mesh SDK. The SDK is currently noted as "beta" quality. You must create an account and request access to the SDK before you are able to download it; my request was approved within 24 hours. Silicon Labs provides less detail about the functionality currently included in their SDK, but they claim compliance with the existing specification and support for the LE 2M and LE Coded PHYs.

More on Silicon Labs Mesh SDK:


Can't See IC Part Numbers? Try this

I learned an awesome trick from EDN's recent article "Simple Trick Lets You See Your Parts". If you need to read the part number on an IC but can't make out the details, simply apply a clear piece of cellophane tape to the part. I've used this trick multiple times in the past week, and it's extremely helpful when I'm away from a microscope.

EDN provides a great warning that you should heed:

Be careful though, as tape can generate considerable ESD, so avoid touching the actual pins of the package.

Website Updates

I've made a few updates to the website:

  • Created a new Development Kits page. Take a look if you need inspiration for your next project or want to experiment with more complex systems.
  • Added clang and Modern C++ references to Around the Web
  • Added language recommendations to Getting Started. I also removed an outdated C++ Idioms reference and added a book recommendation for developing your software career.
  • Expanded the Glossary

These were the most popular articles over the past month:

  1. Ditch Those Built-in Arrays for C++ Containers
  2. Migrating from C to C++: Take Advantage of RAII/SBRM
  3. Using A C++ Object's Member Function with C-style Callbacks
  4. Ditch Your C-style Pointers for Smart Pointers
  5. Installing LLVM/Clang on OSX

July 2017: First Edition!

Welcome to the first edition of the Embedded Artistry Newsletter! It’s a monthly newsletter of curated and original content to help you build better embedded systems. This newsletter is intended to supplement the website and covers topics not mentioned there.

This month we’ll be covering:

  • One of the best books I’ve found on debugging
  • A great introductory embedded programming book
  • cmocka, a C unit testing framework
  • Particle Development Kits
  • Nordic nRF5x SoCs
  • Marvell MW300 SoC
  • MAX17055 Fuel Gauge
  • Shenzhen in the 1980s: Any guesses?
  • Articles I recommend reading