williams-2019

Anthony Graca

Citation #

  • Author: Anthony Williams
  • Date Published: 2019
  • Title: C++ Concurrency in Action, Second Edition
  • Source (url, publisher name, or doi): Manning

Ch 1 - Hello, world of concurrency in C++! #

1.1 What is concurrency? #

1.2 Why use concurrency? #

1.3 Concurrency and multithreading in C++ #

1.4 Getting started #

Ch 2 - Managing threads #

2.1 Basic thread management #

Launching a thread #

Waiting for a thread to complete #
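
A minimal sketch of launching a `std::thread` and joining it (`run_and_join` is a hypothetical helper, not from the book):

```cpp
#include <thread>

// std::thread starts executing its callable immediately on construction;
// join() blocks the calling thread until the new thread finishes.
int run_and_join(int value, int* out) {
    std::thread t([value, out] { *out = value * 2; });
    t.join();   // without join() (or detach()), ~thread() calls std::terminate
    return *out;
}
```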

Waiting in exceptional circumstances #

Running threads in the background #

2.2 Passing arguments to a thread function #
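
Sketch of the key gotcha here: the `std::thread` constructor copies its arguments into internal storage by default, so a true reference has to be wrapped in `std::ref` (the function names below are illustrative):

```cpp
#include <functional>
#include <string>
#include <thread>

// The thread operates on the caller's string only because of std::ref;
// a plain `text` argument would be copied (and would not even compile
// here, since the copy can't bind to std::string&).
void append_twice(std::string& s, const std::string& suffix) {
    s += suffix;
    s += suffix;
}

std::string demo_ref_passing() {
    std::string text = "a";
    std::thread t(append_twice, std::ref(text), std::string("b"));
    t.join();
    return text;   // "abb"
}
```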

2.3 Transferring ownership of a thread #

2.4 Choosing the number of threads at runtime #
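
A sketch of the usual pattern: `std::thread::hardware_concurrency()` is only a hint and may return 0, so clamp it against how much parallelism the work can actually use (`choose_thread_count` and its fallback value are my own assumptions):

```cpp
#include <algorithm>
#include <thread>

// Pick a worker count between 1 and max_useful (e.g. the number of
// data chunks), falling back to 2 when the hardware hint is unavailable.
unsigned choose_thread_count(unsigned max_useful) {
    unsigned hw = std::thread::hardware_concurrency();
    if (hw == 0) hw = 2;   // the standard allows a 0 "don't know" answer
    return std::max(1u, std::min(hw, max_useful));
}
```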

2.5 Identifying threads #

Ch 3 - Sharing data between threads #

3.1 Problems with sharing data between threads #

Race conditions #

Avoiding problematic race conditions #

3.2 Protecting shared data with mutexes #

Using mutexes in C++ #
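
A minimal sketch of the core idiom: keep the mutex next to the data it protects and lock it via RAII (`std::lock_guard`), so the lock is released on every exit path, including exceptions. The `counter` class is illustrative:

```cpp
#include <mutex>

class counter {
    std::mutex m;
    int value = 0;
public:
    void increment() {
        std::lock_guard<std::mutex> lock(m);  // locks here, unlocks in destructor
        ++value;
    }
    int get() {
        std::lock_guard<std::mutex> lock(m);
        return value;
    }
};
```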

Structuring code for protecting shared data #

Spotting race conditions inherent in interfaces #

Deadlock: the problem and a solution #

Further guidelines for avoiding deadlock #

Flexible locking with std::unique_lock #

Transferring mutex ownership between scopes #

Locking at an appropriate granularity #

3.3 Alternative facilities for protecting shared data #

Protecting shared data during initialization #
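
Sketch of once-only initialization with `std::call_once`: the initializer runs exactly once even when many threads race to call `get()` (the `lazy_resource` type is a made-up example):

```cpp
#include <memory>
#include <mutex>

class lazy_resource {
    std::once_flag flag;
    std::unique_ptr<int> data;
public:
    int& get() {
        // std::call_once blocks concurrent callers until the one
        // chosen initializer has completed.
        std::call_once(flag, [this] { data = std::make_unique<int>(42); });
        return *data;
    }
};
```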

Protecting rarely updated data structures #

Recursive locking #

Ch 4 - Synchronizing concurrent operations #

4.1 - Waiting for an event or other condition #

Waiting for a condition with condition variables #

Building a thread-safe queue with condition variables #
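
A heavily condensed sketch of the queue (the book's version has more operations); the point is that `wait_pop` blocks on a condition variable, and the predicate re-checks the queue after every wakeup, which guards against spurious wakeups:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class threadsafe_queue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<T> q;
public:
    void push(T value) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(value)); }
        cv.notify_one();   // wake one waiting consumer
    }
    T wait_pop() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty(); });  // predicate guards spurious wakeups
        T value = std::move(q.front());
        q.pop();
        return value;
    }
};
```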

4.2 - Waiting for one-off events with futures #

Returning values from background tasks #
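
Minimal sketch of the `std::async` pattern: launch a callable on a background thread and get its result via the returned `std::future`; `get()` blocks until the value is ready and rethrows any stored exception:

```cpp
#include <future>

int async_sum(int a, int b) {
    // std::launch::async forces a new thread rather than deferred execution.
    std::future<int> f = std::async(std::launch::async, [a, b] { return a + b; });
    return f.get();   // blocks until the background task completes
}
```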

Associating a task with a future #

Making (std::)promises #

Saving an exception for the future #

Waiting from multiple threads #

4.3 - Waiting with a time limit #

Clocks #

Durations #

Time points #

Functions that accept timeouts #
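
Sketch of the `_for`/`_until` convention: timed waits return a status instead of blocking forever. The helper below (my own illustration) polls a future with a 50 ms timeout and falls back to a default value:

```cpp
#include <chrono>
#include <future>

int get_or_default(std::future<int>& f, int fallback) {
    using namespace std::chrono_literals;
    // wait_for does not consume the result; only get() does.
    if (f.wait_for(50ms) == std::future_status::ready)
        return f.get();
    return fallback;   // timeout (or deferred) path
}
```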

4.4 - Using synchronization of operations to simplify code #

Functional programming with futures #

Synchronizing operations with message passing #

Continuation-style concurrency with the Concurrency TS #

Chaining continuations #

Waiting for more than one future #

Waiting for the first future in a set with when_any #

Latches and barriers in the Concurrency TS #

A basic latch type: std::experimental::latch #

std::experimental::barrier: a basic barrier #

std::experimental::flex_barrier - std::experimental::barrier’s flexible friend #

Ch 5 - The C++ memory model and operations on atomic types #

5.1 - Memory model basics #

Objects and memory locations #

Objects, memory locations, and concurrency #

Modification orders #

5.2 - Atomic operations and types in C++ #

The standard atomic types #

Operations on std::atomic_flag #

Operations on std::atomic<bool> #
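
Sketch of the interesting part of the `std::atomic<bool>` interface, compare-exchange: the write happens only if the current value matches `expected`, and on failure `expected` is updated with the value actually found (`try_claim` is an illustrative name):

```cpp
#include <atomic>

// Succeeds (returns true) for exactly one caller racing on `claimed`.
bool try_claim(std::atomic<bool>& claimed) {
    bool expected = false;
    return claimed.compare_exchange_strong(expected, true);
}
```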

Operations on std::atomic<T*>: pointer arithmetic #

Operations on standard atomic integral types #

The std::atomic<> primary class template #

Free functions for atomic operations #

5.3 - Synchronizing operations and enforcing ordering #

The synchronizes-with relationship #

The happens-before relationship #

Memory ordering for atomic operations #
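
The canonical release/acquire sketch: the store-release on `ready` synchronizes-with the load-acquire that reads `true`, so the plain write to `payload` happens-before the read, even though `payload` itself is non-atomic:

```cpp
#include <atomic>

int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // plain, non-atomic write
    ready.store(true, std::memory_order_release);  // publishes the write above
}

int consumer() {
    while (!ready.load(std::memory_order_acquire)) // spin until published
        ;
    return payload;                                // guaranteed to see 42
}
```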

Release sequences and synchronizes-with #

Fences #

Ordering non-atomic operations with atomics #

Ordering non-atomic operations #

Ch 6 - Designing lock-based concurrent data structures #

6.1 - What does it mean to design for concurrency? #

Guidelines for designing data structures for concurrency #

6.2 - Lock-based concurrent data structures #

A thread-safe stack using locks #

A thread-safe queue using locks and condition variables #

A thread-safe queue using fine-grained locks and condition variables #

6.3 - Designing more complex lock-based data structures #

Writing a thread-safe lookup table using locks #

Writing a thread-safe list using locks #

Ch 7 - Designing lock-free concurrent data structures #

7.1 - Definitions and consequences #

Types of non-blocking data structures #

Lock-free data structures #

Wait-free data structures #

The pros and cons of lock-free data structures #

7.2 - Examples of lock-free data structures #

Writing a thread-safe stack without locks #

Stopping those pesky leaks: managing memory in lock-free data structures #

Detecting nodes in use with reference counting #

Applying the memory model to the lock-free stack #

Writing a thread-safe queue without locks #

7.3 - Guidelines for writing lock-free data structures #

Guideline: use std::memory_order_seq_cst for prototyping #

Guideline: use a lock-free memory reclamation scheme #

Guideline: watch out for the ABA problem #

Guideline: identify busy-wait loops and help the other thread #

Ch 8 - Designing concurrent code #

8.1 - Techniques for dividing work between threads #

Dividing data between threads before processing begins #

Dividing data recursively #

Dividing work by task type #

8.2 - Factors affecting the performance of concurrent code #

How many processors? #

Data contention and cache ping-pong #

False sharing #
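
A common mitigation sketch: pad per-thread counters so each occupies its own cache line. The 64-byte line size is an assumption here; C++17's `std::hardware_destructive_interference_size` is the portable spelling, but not all standard libraries define it:

```cpp
#include <atomic>

// alignas(64) forces each counter onto its own (assumed 64-byte) cache
// line, so threads updating different counters don't invalidate each
// other's lines.
struct alignas(64) padded_counter {
    std::atomic<long> value{0};
};

padded_counter counters[4];   // e.g. one per worker thread

// sizeof is padded up to the alignment, so adjacent array elements
// never share a line.
static_assert(sizeof(padded_counter) == 64, "one counter per line");
```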

How close is your data? #

Over-subscription and excessive task switching #

8.3 - Designing data structures for multi-threaded performance #

Dividing array elements for complex operations #

Data access patterns in other data structures #

8.4 - Additional considerations when designing for concurrency #

Exception safety in parallel algorithms #

Scalability and Amdahl’s law #

Hiding latency with multiple threads #

Improving responsiveness with concurrency #

8.5 - Designing concurrent code in practice #

A parallel implementation of std::for_each #

A parallel implementation of std::find #

A parallel implementation of std::partial_sum #

Ch 9 - Advanced thread management #

9.1 - Thread pools #

The simplest possible thread pool #
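
A heavily condensed sketch of the idea (not the book's listing): worker threads loop, popping `std::function<void()>` tasks from one shared queue until told to stop. No result futures, no work stealing; the destructor drains remaining tasks and joins:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class thread_pool {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> tasks;
    std::vector<std::thread> workers;
    bool done = false;

    void worker_loop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return done || !tasks.empty(); });
                if (done && tasks.empty()) return;  // shut down only when drained
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();   // run outside the lock
        }
    }
public:
    explicit thread_pool(unsigned n = 2) {
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([this] { worker_loop(); });
    }
    void submit(std::function<void()> f) {
        { std::lock_guard<std::mutex> lk(m); tasks.push(std::move(f)); }
        cv.notify_one();
    }
    ~thread_pool() {
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
        for (auto& w : workers) w.join();
    }
};
```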

Waiting for tasks submitted to a thread pool #

Tasks that wait for other tasks #

Avoiding contention on the work queue #

Work stealing #

9.2 - Interrupting threads #

Launching and interrupting another thread #

Detecting that a thread has been interrupted #

Interrupting a condition variable wait #

Interrupting a wait on std::condition_variable_any #

Interrupting other blocking calls #

Handling interruptions #

Interrupting background tasks on application exit #

Ch 10 - Parallel algorithms #

10.1 - Parallelizing the standard library algorithms #

10.2 - Execution policies #

General effects of specifying an execution policy #

std::execution::sequenced_policy #

std::execution::parallel_policy #

std::execution::parallel_unsequenced_policy #

10.3 - The parallel algorithms from the C++ Standard Library #

Examples of using parallel algorithms #

Counting visits #

Ch 11 - Testing and debugging multi-threaded applications #

Unwanted blocking #

Race conditions #

Reviewing code to locate potential bugs #

Designing for testability #

Multi-threaded testing techniques #

Structuring multi-threaded test code #

Testing the performance of multi-threaded code #