Dispatch Queues

Last week’s article discussed the problem with threads. But since threads are simply a fact of life with our current development tools, how can we reduce risk and complexity?

Apple’s solution: general-purpose dispatch queues. In Apple’s own words:

Dispatch queues let you execute arbitrary blocks of code either asynchronously or synchronously with respect to the caller. You can use dispatch queues to perform nearly all of the tasks that you used to perform on separate threads. The advantage of dispatch queues is that they are simpler to use and much more efficient at executing those tasks than the corresponding threaded code.

Dispatch queues have tremendous potential for improving embedded system design.

Often, you need to run tasks asynchronously without blocking the primary execution flow. With traditional threading approaches, each of these situations requires a dedicated helper thread that waits for events and processes the work.

In a dispatch model, the function can simply be added to a work queue. Thread management is simplified and localized to the dispatch library, reducing your overall complexity and thread management overhead.

Later this week, we will look at how to implement our own dispatch queues.

Read Apple’s “Dispatch Queues” Guide

My Highlights

A major benefit of using dispatch queues: simplicity

When it comes to adding concurrency to an application, dispatch queues provide several advantages over threads. The most direct advantage is the simplicity of the work-queue programming model. With threads, you have to write code both for the work you want to perform and for the creation and management of the threads themselves. Dispatch queues let you focus on the work you actually want to perform without having to worry about the thread creation and management. Instead, the system handles all of the thread creation and management for you. The advantage is that the system is able to manage threads much more efficiently than any single application ever could. The system can scale the number of threads dynamically based on the available resources and current system conditions. In addition, the system is usually able to start running your task more quickly than you could if you created the thread yourself.

Another advantage: Predictability.

However, where dispatch queues have an advantage is in predictability. If you have two tasks that access the same shared resource but run on different threads, either thread could modify the resource first and you would need to use a lock to ensure that both tasks did not modify that resource at the same time. With dispatch queues, you could add both tasks to a serial dispatch queue to ensure that only one task modified the resource at any given time. This type of queue-based synchronization is more efficient than locks because locks always require an expensive kernel trap in both the contested and uncontested cases, whereas a dispatch queue works primarily in your application’s process space and only calls down to the kernel when absolutely necessary.

What about resource usage?

More importantly, the threaded model requires the creation of two threads, which take up both kernel and user-space memory. Dispatch queues do not pay the same memory penalty for their threads, and the threads they do use are kept busy and not blocked.

My primary dispatch queue usage: concurrent queues

A concurrent dispatch queue is useful when you have multiple tasks that can run in parallel. A concurrent queue is still a queue in that it dequeues tasks in a first-in, first-out order; however, a concurrent queue may dequeue additional tasks before any previous tasks finish. The actual number of tasks executed by a concurrent queue at any given moment is variable and can change dynamically as conditions in your application change. Many factors affect the number of tasks executed by the concurrent queues, including the number of available cores, the amount of work being done by other processes, and the number and priority of tasks in other serial dispatch queues.

Serial queues

Serial queues are useful when you want your tasks to execute in a specific order. A serial queue executes only one task at a time and always pulls tasks from the head of the queue. You might use a serial queue instead of a lock to protect a shared resource or mutable data structure. Unlike a lock, a serial queue ensures that tasks are executed in a predictable order. And as long as you submit your tasks to a serial queue asynchronously, the queue can never deadlock.

Some design tips for using dispatch queues:

Dispatch queues themselves are thread safe. In other words, you can submit tasks to a dispatch queue from any thread on the system without first taking a lock or synchronizing access to the queue.

Avoid taking locks from the tasks you submit to a dispatch queue. Although it is safe to use locks from your tasks, when you acquire the lock, you risk blocking a serial queue entirely if that lock is unavailable. Similarly, for concurrent queues, waiting on a lock might prevent other tasks from executing instead. If you need to synchronize parts of your code, use a serial dispatch queue instead of a lock.

2 Replies to “Dispatch Queues”

  1. Very helpful, thank you. Just a few talking points:

    • What mechanism do "concurrent queues" employ for shared resources between threads on a single core?

    • "Dispatch queues do not pay the same memory penalty for their threads". How and why can’t we implement threads in that way?

    1. Hi NG,

      I’ll start with this one:

      "Dispatch queues do not pay the same memory penalty for their threads". How and why can’t we implement threads in that way?

      In retrospect, that quote seems to apply specifically to the Apple dispatch queue implementation. If you build a dispatch queue at a higher level using threads, I think you will probably pay the same memory penalty :). I’m not sure what Apple did under the hood.

      What mechanism do "concurrent queues" employ for shared resources between threads on a single core?

      A concurrent queue can be envisioned as a single dispatch queue (one queue which jobs are submitted to) and multiple threads that pop off the queue and execute those functions. There’s no special mechanism for shared resources. In general, I don’t think functions should be dispatched if they can block or need to access shared resources; you’re then locking up the general dispatch queue and preventing other functions from executing.

      I generally use concurrent dispatch queues to execute callbacks, process events, or run other standalone asynchronous functions.
