Many RTOSes and OSes give us an option for setting thread priorities. Many embedded programmers use this feature to provide each thread with a unique priority, creating a hierarchy of threads in the system. This approach is rarely the most desirable: assigning each thread a unique priority can increase latency, reduce responsiveness, invite deadlocks, and waste CPU cycles.
Provided below is a framework for thinking about thread priorities and when we really need to modify them.
The key rules of thumb are:
- Only use different priority levels when latency outweighs throughput and preemption is absolutely required – never in any other case.
- Examples include meeting a critical real-time deadline or prioritizing event detection over event processing
- Use as few distinct priorities as possible, reserving unique priorities for those instances where true preemption is required.
Table of Contents:
- How to Properly Think About Priority
- Default Behavior
- When to Adjust Priority
- Priority Selection Algorithms
- Further Reading
How to Properly Think About Priority
Glennan Carnie provides a helpful view of priority:
Task priority should be thought of as a measure of the ‘determinism of latency of response’ for that task. That is, the higher the priority of a task (relative to its peers) the more predictable (deterministic) its latency of response is.
By changing a task’s priority, we impact its worst-case latency. The higher a task’s priority, the more predictable its latency becomes. The highest priority task in the system will have the most predictable latency – it’s almost constant, with some minor variations depending on when the next preemption point is hit.
So what does priority mean for low-priority tasks? Here’s Glennan again:
for the low priority task the latency is anywhere from the minimum up to some (possibly unpredictable) maximum. In fact, if we’re not careful, our highest priority task may be ready to access again before the lowest priority task has even had its first access – so-called ‘task starvation’.
Task starvation must not be ignored. It is a common error in multithreaded programs that use different priorities.
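Starvation can be illustrated with a tiny discrete-time model (a sketch with hypothetical task names, not any particular RTOS): a fixed-priority scheduler always runs the highest-priority ready task, so a high-priority peer that is always ready completely locks out a lower-priority task.

```python
def run_fixed_priority(tasks, ticks):
    """Simulate a fixed-priority scheduler for a number of ticks.

    tasks: list of (name, priority, ready_fn) tuples, where a larger
    priority value wins and ready_fn(tick) says whether the task has work.
    Returns a dict of name -> number of ticks the task ran.
    """
    run_counts = {name: 0 for name, _, _ in tasks}
    for tick in range(ticks):
        ready = [(prio, name) for name, prio, ready_fn in tasks if ready_fn(tick)]
        if ready:
            _, winner = max(ready)  # highest-priority ready task gets the CPU
            run_counts[winner] += 1
    return run_counts

counts = run_fixed_priority(
    [
        ("high", 2, lambda t: True),  # always has work pending
        ("low", 1, lambda t: True),   # also always ready, but never wins
    ],
    ticks=100,
)
# 'high' runs on every tick; 'low' is starved and never runs at all.
```

The model ignores context-switch cost and blocking, but it captures the core failure mode: the low-priority task’s worst-case latency is unbounded.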
Priorities on Multi-processor Systems
Glennan Carnie provides a clarifying note about priorities on multi-processor systems:
a huge caveat for multi-processor systems: Priority only has meaning if the number of tasks exceeds the number of processors. Consider the extreme case where each task has its own processor. Each task, being the only task waiting to execute, will execute all the time. Therefore it is always at the highest priority (on that processor). If your design assigns multiple tasks to multiple processors then you must appreciate (and account for) the fact that priorities only have meaning on each individual processor. Priority no longer becomes a system-wide determinant.
Default Behavior
The paper Proper Priority Assignment Can Have a Major Effect on Real-Time Performance provides excellent guidance on the default approach we should use for priority:
Developers can best deal with the somewhat uncertain context switch overhead caused by thread priority selection by keeping as many threads as possible at the same priority level. In other words, only use different priority levels when latency outweighs throughput and preemption is absolutely required – never in any other case.
And when multiple priority levels are used:
The developer is encouraged to use as few distinct priorities as possible and to reserve unique priorities for those instances where true preemption is required.
What does running multiple threads at the same priority indicate? Glennan Carnie clears this up for us:
If all your tasks run at the same priority you effectively have no priority. Most pre-emptive kernels will typically have algorithms such as time-slicing between equal-priority tasks to ensure every task gets a ‘fair share’.
There are many benefits to this default:
By running multiple threads at the same priority, rather than assigning them each a unique priority, the system designer can avoid unnecessary context switches and reduce RTOS overhead.
Assigning multiple threads the same priority also makes it possible for the RTOS to properly implement priority inheritance, round-robin scheduling, and time-slicing.
There are downsides to using unique priorities for every thread:
The use of unique priorities might also make system performance unpredictable. Loss of predictability occurs because the context switch overhead varies as a result of the sequence of thread activation, rather than in a prescribed fashion, as with the round-robin scheduling used with threads of equal priority.
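The “fair share” behavior of equal-priority tasks can be sketched with a minimal round-robin model (an illustration, not a real kernel): each ready task gets one time slice in turn, so no task starves and worst-case latency is bounded by the number of peers.

```python
from collections import deque

def run_round_robin(task_names, ticks):
    """Time-slice equal-priority tasks: each task runs for one tick in
    turn, then goes to the back of the queue until its next slice."""
    queue = deque(task_names)
    run_counts = {name: 0 for name in task_names}
    for _ in range(ticks):
        current = queue.popleft()
        run_counts[current] += 1
        queue.append(current)  # back of the line until its next turn
    return run_counts

counts = run_round_robin(["A", "B", "C", "D"], ticks=100)
# Each of the four tasks runs for 25 ticks: a 'fair share', traded
# against a worst-case latency of N-1 slices before a task runs again.
```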
When to Adjust Priority
Unique priorities are primarily useful when latency is more important than throughput, especially for critical tasks with real-time deadlines.
From Proper Priority Assignment Can Have a Major Effect on Real-Time Performance:
While use of unique priorities might result in more context switches and reduced throughput than running multiple threads at the same priority, in some instances it is the appropriate thing to do. For example, if latency is more important than throughput, in the previous example, we would want Thread A to run as soon as a message arrives in its queue, rather than waiting for its round-robin turn. To make sure that happened, we’d make Thread A higher in priority than Thread D.
Glennan’s article uses a pipe-and-filter architecture example, showing how priorities can be adjusted to optimize the architecture for either event detection or throughput. His primary note is that we adjust task priorities when we need to achieve system performance requirements.
Also, we must keep in mind that even when individual thread priorities are perfectly tuned, introducing a new thread can throw the system off. Glennan again:
The introduction of (in this case) another medium-priority task may slew the latency predictability of our original medium-priority task. For example, what happens if the new task runs for a significant period of time? It cannot be pre-empted by our filter task. If we are unlucky (and we so often are!) this can cause our system to stop meeting its performance requirements – even though there is no change in the original code!
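This latency-slew effect can be sketched as a back-of-the-envelope bound (tick values are hypothetical): if equal-priority peers run to completion without preemption, the filter task’s worst-case latency includes the longest burst any peer may run, so adding one long-running peer stretches the bound even though the original code is untouched.

```python
def worst_case_latency(peer_burst_ticks):
    """The filter task cannot preempt an equal-priority peer, so its
    worst-case latency is bounded by the longest peer burst."""
    return max(peer_burst_ticks)

before = worst_case_latency([2, 3])      # original system: bound is 3 ticks
after = worst_case_latency([2, 3, 40])   # new medium-priority task added
# The latency bound jumps from 3 to 40 ticks with no change to the
# original code -- enough to break a previously-met deadline.
```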
Priority Selection Algorithms
To ensure your priority selections can actually meet your real-time deadlines, consider using a priority-assignment algorithm such as rate-monotonic or deadline-monotonic scheduling.
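As one example, here is a minimal sketch of rate-monotonic assignment (task names and numbers are hypothetical): shorter-period tasks receive higher priorities, and the classic utilization bound U ≤ n(2^(1/n) − 1) serves as a sufficient (not necessary) schedulability check.

```python
def rate_monotonic_priorities(tasks):
    """tasks: dict of name -> (period, worst_case_exec_time).
    Returns name -> priority, where a larger number means higher
    priority; the shortest-period task gets the highest priority."""
    by_period = sorted(tasks, key=lambda name: tasks[name][0], reverse=True)
    return {name: prio for prio, name in enumerate(by_period, start=1)}

def passes_utilization_bound(tasks):
    """Sufficient schedulability test: U <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / p for p, c in tasks.values())
    return utilization <= n * (2 ** (1 / n) - 1)

tasks = {
    "sensor": (10, 2),     # period 10 ticks, 2 ticks of work
    "control": (20, 4),
    "logger": (100, 10),
}
prios = rate_monotonic_priorities(tasks)  # sensor highest, logger lowest
ok = passes_utilization_bound(tasks)      # U = 0.5 <= ~0.78 for n = 3
```

Note that a task set failing the bound is not necessarily unschedulable; exact response-time analysis is needed in that case.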
Further Reading
For more on selecting thread priorities: