Technique: Selecting Thread Priorities

Most RTOSes and OSes give us an option for setting thread priorities. Many embedded programmers use this feature to give each thread a unique priority, creating a hierarchy of threads in the system. This approach is rarely the most desirable: assigning each thread a unique priority can hurt latency, responsiveness, and CPU utilization, and can even introduce deadlocks.

Provided below is a framework for thinking about thread priorities and when we really need to modify them.

The key rules of thumb are:

  • Only use different priority levels when latency outweighs throughput and preemption is absolutely required – never in any other case.
    • Examples include meeting a critical real-time deadline or prioritizing event detection over event processing
  • Use as few distinct priorities as possible, reserving unique priorities for those instances where true preemption is required.

Table of Contents:

  1. How to Properly Think About Priority
    1. Priorities on Multi-processor Systems
  2. Default Behavior
  3. When to Adjust Priority
  4. Priority Selection Algorithms
  5. Further Reading

How to Properly Think About Priority

Glennan Carnie provides a helpful view of priority:

Task priority should be thought of as a measure of the ‘determinism of latency of response’ for that task. That is, the higher the priority of a task (relative to its peers) the more predictable (deterministic) its latency of response is.

By changing a task’s priority, we impact its worst-case latency. The higher a task’s priority, the more predictable its latency becomes. The highest-priority task in the system will have the most predictable latency – it’s almost constant, with some minor variations depending on when the next preemption point is hit.
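This intuition can be checked with standard worst-case response-time analysis for fixed-priority preemptive scheduling (a classic technique, not from the article). A minimal Python sketch, assuming independent periodic tasks and a schedulable task set:

```python
import math

def worst_case_response_times(tasks):
    """tasks: list of (execution_time, period) tuples, ordered from
    highest priority to lowest. Returns each task's worst-case response
    time under fixed-priority preemptive scheduling, using the standard
    iterative recurrence:
        R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j
    Assumes the task set is schedulable (the iteration converges)."""
    results = []
    for i, (c_i, _t_i) in enumerate(tasks):
        r = c_i
        while True:
            interference = sum(
                math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i]
            )
            if c_i + interference == r:
                break
            r = c_i + interference
        results.append(r)
    return results

print(worst_case_response_times([(1, 4), (2, 6), (3, 12)]))  # [1, 3, 10]
```

The highest-priority task's worst-case latency is just its own execution time; each step down in priority adds interference terms, so the response time both grows and becomes more sensitive to the other tasks' timing.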

So what does priority mean for low-priority tasks? Here’s Glennan again:

for the low priority task the latency is anywhere from the minimum up to some (possibly unpredictable) maximum. In fact, if we’re not careful, our highest priority task may be ready to access again before the lowest priority task has even had its first access – so-called ‘task starvation’.

Task starvation should not be ignored; it is a common error in multithreaded programs that use different priorities.
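Starvation is easy to reproduce in a toy model. The sketch below (illustrative only; the tick values are hypothetical) steps a single-core, fixed-priority preemptive schedule one tick at a time:

```python
def low_task_completion(high_c, high_t, low_c, horizon):
    """Single core, preemptive. A high-priority task is released every
    high_t ticks and needs high_c ticks of CPU; a low-priority task
    released at t=0 needs low_c ticks. Returns the tick at which the
    low task completes, or None if it starves within the horizon."""
    high_left = 0
    low_left = low_c
    for t in range(horizon):
        if t % high_t == 0:
            high_left = high_c        # new high-priority release
        if high_left > 0:
            high_left -= 1            # high priority always preempts
        elif low_left > 0:
            low_left -= 1
            if low_left == 0:
                return t + 1
    return None                       # starved

print(low_task_completion(3, 4, 5, 100))  # 20: one spare tick per period
print(low_task_completion(4, 4, 5, 100))  # None: high task uses 100% CPU
```

When the high-priority task's utilization reaches 100%, the low-priority task never runs at all – the extreme form of the starvation Glennan describes.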

Priorities on Multi-processor Systems

Glennan Carnie provides a clarifying note about priorities on multi-processor systems:

a huge caveat for multi-processor systems: Priority only has meaning if the number of tasks exceeds the number of processors. Consider the extreme case where each task has its own processor. Each task, being the only task waiting to execute, will execute all the time. Therefore it is always at the highest priority (on that processor). If your design assigns multiple tasks to multiple processors then you must appreciate (and account for) the fact that priorities only have meaning on each individual processor. Priority no longer becomes a system-wide determinant.

Default Behavior

The paper Proper Priority Assignment Can Have a Major Effect on Real-Time Performance provides excellent guidance on the default approach we should use for priority:

Developers can best deal with the somewhat uncertain context switch overhead caused by thread priority selection by keeping as many threads as possible at the same priority level. In other words, only use different priority levels when latency outweighs throughput and preemption is absolutely required – never in any other case.

And when multiple priority levels are used:

The developer is encouraged to use as few distinct priorities as possible and to reserve unique priorities for those instances where true preemption is required.

What does running multiple threads at the same priority indicate? Glennan Carnie clears this up for us:

If all your tasks run at the same priority you effectively have no priority. Most pre-emptive kernels will typically have algorithms such as time-slicing between equal-priority tasks to ensure every task gets a ‘fair share’.

There are many benefits to this default:

By running multiple threads at the same priority, rather than assigning them each a unique priority, the system designer can avoid unnecessary context switches and reduce RTOS overhead.
Assigning multiple threads the same priority also makes it possible for the RTOS to properly implement priority inheritance, round-robin scheduling, and time-slicing.
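Round-robin time-slicing among equal-priority threads, and its context-switch cost, can be modeled with a short sketch (illustrative only; the quantum and burst times are hypothetical ticks, not a real RTOS API):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Time-slice equal-priority tasks. Returns each task's completion
    time and the number of context switches the scheduler performed."""
    ready = deque(enumerate(burst_times))    # (task_id, ticks remaining)
    now = 0
    switches = 0
    completion = {}
    while ready:
        task, remaining = ready.popleft()
        run = min(quantum, remaining)
        now += run
        remaining -= run
        if remaining:
            ready.append((task, remaining))  # back of the line
        else:
            completion[task] = now
        if ready:
            switches += 1                    # hand the CPU to the next task
    return completion, switches

# Three equal-priority tasks, 3 ticks of work each.
print(round_robin([3, 3, 3], quantum=1))  # ({0: 7, 1: 8, 2: 9}, 8)
print(round_robin([3, 3, 3], quantum=3))  # ({0: 3, 1: 6, 2: 9}, 2)
```

All three tasks finish by tick 9 either way, but the one-tick quantum costs four times as many context switches – exactly the overhead the default advice tries to avoid.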

There are downsides to using unique priorities for every thread:

The use of unique priorities might also make system performance unpredictable. Loss of predictability occurs because the context switch overhead varies as a result of the sequence of thread activation, rather than in a prescribed fashion, as with the round-robin scheduling used with threads of equal priority.

When to Adjust Priority

Unique priorities are primarily useful when latency is more important than throughput, especially for critical tasks with real-time deadlines.

From Proper Priority Assignment Can Have a Major Effect on Real-Time Performance:

While use of unique priorities might result in more context switches and reduced throughput than running multiple threads at the same priority, in some instances it is the appropriate thing to do. For example, if latency is more important than throughput, in the previous example, we would want Thread A to run as soon as a message arrives in its queue, rather than waiting for its round-robin turn. To make sure that happened, we’d make Thread A higher in priority than Thread D.
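The trade-off in this example can be reduced to simple arithmetic (a back-of-the-envelope model, not from the paper; the thread count and quantum are hypothetical):

```python
def worst_case_wakeup_latency(n_threads, quantum, preempt):
    """Worst-case ticks between a message arriving and the receiving
    thread running. With a higher priority it preempts immediately;
    under equal-priority round-robin it may first wait out every other
    thread's full time slice."""
    return 0 if preempt else (n_threads - 1) * quantum

print(worst_case_wakeup_latency(4, 10, preempt=False))  # 30
print(worst_case_wakeup_latency(4, 10, preempt=True))   # 0
```

Raising one thread's priority buys it bounded wake-up latency at the price of extra context switches for everyone else, which is why the paper reserves the technique for latency-critical threads.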

Glennan’s article uses a pipe-and-filter architecture example, showing how priorities can be adjusted to optimize the architecture for either event detection or throughput. His primary point is that we adjust task priorities when we need to meet system performance requirements.

Also, we must keep in mind that even when individual thread priorities are perfectly tuned, introducing a new thread can throw the system off. Glennan again:

The introduction of (in this case) another medium-priority task may slew the latency predictability of our original medium-priority task. For example, what happens if the new task runs for a significant period of time? It cannot be pre-empted by our filter task. If we are unlucky (and we so often are!) this can cause our system to stop meeting its performance requirements – even though there is no change in the original code!

Priority Selection Algorithms

To ensure your priority selections can actually meet your real-time deadlines, consider using an algorithm such as:

  • Rate-monotonic scheduling (RMS), which assigns higher priorities to tasks with shorter periods
  • Deadline-monotonic scheduling (DMS), which assigns higher priorities to tasks with shorter relative deadlines
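One widely used option is rate-monotonic priority assignment, paired with the Liu & Layland utilization bound as a quick sufficient schedulability test. A minimal sketch (the task set is hypothetical):

```python
def rate_monotonic_order(tasks):
    """tasks: dict of name -> (execution_time, period). Returns task
    names ordered highest priority first: shorter period = higher
    priority (the rate-monotonic rule)."""
    return sorted(tasks, key=lambda name: tasks[name][1])

def rms_schedulable(tasks):
    """Liu & Layland sufficient test: the set is schedulable under RMS
    if total utilization <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks.values())
    return utilization <= n * (2 ** (1 / n) - 1)

# Hypothetical task set: (execution time, period) in ticks.
tasks = {"sensor": (1, 4), "control": (2, 10), "logger": (5, 30)}
print(rate_monotonic_order(tasks))  # ['sensor', 'control', 'logger']
print(rms_schedulable(tasks))       # True (U ~= 0.62, bound ~= 0.78)
```

The bound is sufficient but not necessary: a task set that fails it may still be schedulable, which exact response-time analysis can confirm.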

Further Reading

For more on selecting thread priorities:

Paper: Proper Priority Assignment Can Have a Major Effect on Real-Time Performance

2 April 2020 by Phillip Johnston • Last updated 21 May 2020

Express Logic, the creators of the ThreadX RTOS, published this white paper, titled “Proper Priority Assignment Can Have a Major Effect on Real-Time Performance”. The white paper looks at two common thread priority approaches with a multi-threaded message passing example: threads with unique priorities, and threads with common priority levels that take advantage of round-robin scheduling. The paper discusses how each design impacts throughput, responsiveness, and the number of context switches.

crect: A C++14 Library for Generating a Stack Resource Policy Scheduler at Compile Time

The crect project (pronounced correct) is a C++14 library for generating a scheduler for Cortex-M microcontrollers at compile-time. crect uses the Cortex-M's Nested Vector Interrupt Controller (NVIC) to implement a Stack Resource Policy (SRP) scheduler which guarantees deadlock-free and data-race-free execution. crect is built upon the Kvasir Meta-programming Library.

FreeRTOS Task Notifications: A Lightweight Method for Waking Threads

I was recently implementing a FreeRTOS-based system and needed a simple way to wake my thread from an ISR. I was poking around the FreeRTOS API manual looking at semaphores when I discovered a new feature: task notifications. FreeRTOS claims that waking up a task using the new notification system is ~45% faster and uses less RAM than using a binary semaphore.

Refactoring the ThreadX Dispatch Queue To Use std::mutex

Now that we've implemented std::mutex for an RTOS, let's refactor a library using RTOS-specific calls so that it uses std::mutex instead. Since we have a ThreadX implementation for std::mutex, let's update our ThreadX-based dispatch queue. Moving to std::mutex will result in a simpler code structure. We still need to port std::thread and std::condition_variable as well.

Implementing an Asynchronous Dispatch Queue with FreeRTOS

We previously provided an implementation of a dispatch queue using ThreadX RTOS primitives. In this article, I'll provide an example C++ dispatch queue implementation using the popular FreeRTOS. We'll start with a review of what dispatch queues are. If you're familiar with them, feel free to skip to the following section.

Implementing an Asynchronous Dispatch Queue with ThreadX

I previously introduced the concept of dispatch queues and walked through the creation of a simple C++ dispatch queue implementation. The original dispatch queue example is implemented using std::mutex, std::thread, and std::condition_variable. Today I’d like to demonstrate the creation of a dispatch queue using ThreadX RTOS primitives instead of the built-in C++ types.
