Now that we’ve implemented std::mutex for an RTOS, let’s refactor a library that uses RTOS-specific calls so that it uses std::mutex instead.
Since we have a ThreadX implementation for std::mutex, let’s update our ThreadX-based dispatch queue. Moving to std::mutex will result in a simpler code structure. We still need to port std::thread and std::condition_variable to achieve true portability, but utilizing std::mutex is still a step in the right direction.
For a quick refresher on dispatch queues, refer to the following articles:
- Dispatch Queues
- Implementing an Asynchronous Dispatch Queue
- Implementing an Asynchronous Dispatch Queue with ThreadX
Table of Contents
- How std::mutex Helps Us
- Refactoring the Asynchronous Dispatch Queue
- Putting It All Together
- Further Reading
How std::mutex Helps Us
Even though we can’t yet make our dispatch queue fully portable, we still benefit from using std::mutex in the following ways:
- We no longer have to worry about initializing or deleting our mutex, since the std::mutex constructor and destructor handle that for us
- We can take advantage of RAII to lock whenever we enter a scope, and to automatically unlock when we leave that scope
- We can utilize standard calls (with no arguments!), reducing the burden of remembering the exact ThreadX functions and arguments
If these arguments don’t seem impactful on their own, just compare the code. Here are the native ThreadX calls:
uint8_t status = tx_mutex_get(&mutex_, TX_WAIT_FOREVER);
// do some stuff
status = tx_mutex_put(&mutex_);
And here’s the std::mutex equivalent:
mutex_.lock();
// do some stuff
mutex_.unlock();
Don’t you prefer the std::mutex version?
C++ Mutex Wrappers
While we could manually call lock() and unlock() on our mutex object, we’ll utilize two helpful C++ mutex constructs: std::lock_guard and std::unique_lock.
The std::lock_guard wrapper provides an RAII mechanism for our mutex. When we construct a std::lock_guard, the mutex starts in a locked state (or waits to grab the lock). Whenever we leave that scope, the mutex is released automatically. A std::lock_guard is especially useful in functions that can return at multiple points. No longer do you have to worry about releasing the mutex at each exit point: the destructor has your back.
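Here’s a minimal sketch of that benefit. The queue, mutex, and try_pop() function below are illustrative stand-ins, not part of the dispatch queue itself:

```cpp
#include <mutex>
#include <queue>

std::mutex m;
std::queue<int> q;

// Returns the front item, or -1 if the queue is empty.
// The lock_guard's destructor releases the mutex on every
// return path, so neither exit point needs an explicit unlock.
int try_pop()
{
    std::lock_guard<std::mutex> lock(m);

    if(q.empty())
    {
        return -1; // mutex released here
    }

    int value = q.front();
    q.pop();
    return value; // and released here too
}
```

With the native calls, each return statement would need its own tx_mutex_put() before it; forgetting one would deadlock the queue.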
We’ll also take advantage of the std::unique_lock wrapper. Using std::unique_lock provides similar benefits to std::lock_guard: the mutex is locked when the std::unique_lock is constructed, and unlocked automatically during destruction. However, it provides much more flexibility than std::lock_guard: we can manually call lock() and unlock(), transfer ownership of the lock, and use it with condition variables.
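The extra flexibility can be sketched in a few lines. The mutex and locked_work() function here are hypothetical examples, assuming some shared state worth protecting:

```cpp
#include <mutex>
#include <utility>

std::mutex m;

// A unique_lock can be released and re-acquired by hand,
// and ownership of the lock can be moved between objects.
int locked_work()
{
    std::unique_lock<std::mutex> lock(m);
    int value = 21; // work done under the lock

    lock.unlock();  // release early for work that doesn't need the mutex
    value *= 2;     // slow work outside the critical section

    lock.lock();    // re-acquire before touching shared state again

    std::unique_lock<std::mutex> other(std::move(lock)); // transfer ownership
    return value;
} // 'other' now holds the mutex and unlocks it automatically
```

A std::lock_guard would reject every one of those operations at compile time, which is exactly why it is the better default when you don’t need them.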
Refactoring the Asynchronous Dispatch Queue
We will utilize both std::lock_guard and std::unique_lock to simplify our ThreadX dispatch queue.
Our starting point for this refactor will be the dispatch_threadx.cpp file in the embedded-resources repository.
Class Definition
In our dispatch class, we need to change the mutex definition from TX_MUTEX to std::mutex:
std::mutex mutex_;
Constructor
Mutex initialization is handled for us by the std::mutex constructor. We can remove the tx_mutex_create call from our dispatch queue constructor:
// Initialize the Mutex
uint8_t status = tx_mutex_create(&mutex_, "Dispatch Mutex", TX_INHERIT);
assert(status == TX_SUCCESS && "Failed to create mutex!");
Destructor
Mutex deletion is handled for us by the std::mutex destructor. We can remove the tx_mutex_delete call from the dispatch queue destructor:
status = tx_mutex_delete(&mutex_);
assert(status == TX_SUCCESS && "Failed to delete mutex!");
Dispatch
By using std::lock_guard, we can remove both the mutex get and put calls. RAII will ensure that the mutex is unlocked when we leave the function.
Here’s the dispatch() implementation using std::lock_guard:
void dispatch_queue::dispatch(const fp_t& op)
{
    std::lock_guard<std::mutex> lock(mutex_);
    q_.push(op);

    // Notify threads that new work has been added to the queue
    tx_event_flags_set(&notify_flags_, DISPATCH_WAKE_EVT, TX_OR);
}
If you still want to unlock before setting the event flag, use std::unique_lock instead of std::lock_guard. Using std::unique_lock allows you to call unlock():
void dispatch_queue::dispatch(const fp_t& op)
{
    std::unique_lock<std::mutex> lock(mutex_);
    q_.push(op);
    lock.unlock();

    // Notify threads that new work has been added to the queue
    tx_event_flags_set(&notify_flags_, DISPATCH_WAKE_EVT, TX_OR);
}
Either approach is acceptable and looks much cleaner than the native calls.
Why would you potentially care about calling unlock()? If you are using std::lock_guard, the event flag may wake a thread, which will then try to grab the mutex and block again until the dispatch() function exits. Once dispatch() releases the mutex, the waiting thread wakes up and resumes operation. Unlocking before setting the flag avoids that extra block-and-wake cycle.
Thread Handler
We need to manually lock and unlock around specific points in our thread handler. Instead of std::lock_guard, we will use std::unique_lock so we can call unlock().
Here’s our simplified thread handler:
void dispatch_queue::dispatch_thread_handler(void)
{
    ULONG flags;
    uint8_t status;

    std::unique_lock<std::mutex> lock(mutex_);

    do {
        // After the wait, we own the lock
        if(q_.size() && !quit_)
        {
            auto op = std::move(q_.front());
            q_.pop();

            // Unlock now that we're done messing with the queue
            lock.unlock();

            op();

            lock.lock();
        }
        else if(!quit_)
        {
            lock.unlock();

            // Wait for new work
            status = tx_event_flags_get(&notify_flags_,
                                        DISPATCH_WAKE_EVT,
                                        TX_OR_CLEAR,
                                        &flags,
                                        TX_WAIT_FOREVER);
            assert(status == TX_SUCCESS &&
                   "Failed to get event flags!");

            lock.lock();
        }
    } while(!quit_);

    // We were holding the mutex after we woke up
    lock.unlock();

    // Set a signal to indicate a thread exited
    status = tx_event_flags_set(&notify_flags_,
                                DISPATCH_EXIT_EVT, TX_OR);
    assert(status == TX_SUCCESS && "Failed to set event flags!");
}
Looks a bit saner already!
Putting It All Together
Example source code can be found in the embedded-resources GitHub repository. The original ThreadX dispatch queue implementation can also be found in embedded-resources.
To build the example, run make at the top level or inside of the examples/cpp folder.
The example is built as a static library. ThreadX headers are provided in the repository, but not binaries or source.
As we implement std::thread and std::condition_variable in the future, we will simplify our RTOS-based dispatch queue even further.
Further Reading
- Dispatch Queues
- Implementing an Asynchronous Dispatch Queue
- Implementing an Asynchronous Dispatch Queue with ThreadX
- Implementing std::mutex with ThreadX
- std::lock_guard
- std::unique_lock
- ThreadX