Posts tagged with ‘Bibliography’ (8)

tate-2010

Citation #

@book{tate2010seven,
  title={Seven languages in seven weeks: a pragmatic guide to learning programming languages},
  author={Tate, Bruce},
  year={2010},
  publisher={The Pragmatic Bookshelf}
}

Summary. What are the statements being made? #

butcher-2014

Citation #

@book{butcher2014,
  title={Seven Concurrency Models in Seven Weeks: When Threads Unravel},
  author={Butcher, Paul},
  year={2014},
  publisher={The Pragmatic Bookshelf}
}

Summary. What are the statements being made? #

1. Introduction #

Concurrent or Parallel? #

  • Concurrent and parallel refer to two related but different things
  • “A concurrent program has multiple logical threads of control. These threads may or may not run in parallel (Butcher 2014, 1).”
    • Concurrency is an aspect of the problem domain, meaning your algorithm needs to handle simultaneous events
    • “Concurrency is about dealing with lots of things at once” (Rob Pike)
  • “A parallel program potentially runs more quickly than a sequential program by executing different parts of the computation simultaneously in parallel. It may or may not have more than one logical thread of control”
    • Parallelism is an aspect of the solution domain, meaning you want to make your program faster
    • “Parallelism is about doing lots of things at once” (Rob Pike)
  • Traditional threads and locks don’t provide any direct support for parallelism.
    • In order to exploit multiple cores with threads and locks, you need to create a concurrent program and then run it on parallel hardware.
    • This is problematic because concurrent programs are intrinsically nondeterministic, whereas purely parallel programs need not be

Parallel Architecture #

  • There are multiple levels of parallelism
    • Moving from 8 bits to 32 bits is a form of bit-level parallelism. Adding two 32-bit numbers on an 8-bit architecture takes multiple steps; a 32-bit system does it in a single step
    • CPU architectures use pipelining, out-of-order execution, and speculative execution to obtain instruction-level parallelism
    • Data parallelism is achieved by applying the same operation to a large amount of data in parallel. Imagine increasing the brightness of an image: each pixel can be processed independently, which a GPU handles easily
  • What we are interested in is Task-Level Parallelism
    • There are two models of multiprocessor architectures:
      1. Shared-memory :: where each processor can access any memory location and interprocess communication is done through memory
      2. Distributed memory :: where each processor has its own local memory and IPC is done via the network.
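The data-parallelism bullet above (the same operation applied to every pixel) can be sketched with Java's parallel streams; a minimal illustration, with invented pixel values:

```java
import java.util.Arrays;

public class Brighten {
    public static void main(String[] args) {
        // A tiny invented grayscale "image": one int per pixel, 0-255.
        int[] pixels = {10, 250, 100, 0};
        // Data parallelism: apply the same operation to every element;
        // parallel() lets the runtime split the work across cores.
        int[] brighter = Arrays.stream(pixels)
                .parallel()
                .map(p -> Math.min(p + 20, 255)) // brighten, clamping at white
                .toArray();
        System.out.println(Arrays.toString(brighter)); // [30, 255, 120, 20]
    }
}
```

Because each pixel is independent, the result is the same regardless of how the work is split; that independence is what makes the problem data-parallel.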

Concurrency: Beyond Multiple Cores #

  • Concurrency is key to software being responsive, fault tolerant, efficient, and simple

Responsive #

  • the world is concurrent, so software should be concurrent to interact with it properly
  • Examples:
    1. a mobile phone can play music, talk to the network, and pay attention to touch gestures all at the same time
    2. an IDE checks for syntax in the background while code is being typed
    3. A flight system simultaneously monitors sensors, displays information to the pilot, obeys commands, and moves control surfaces
  • Concurrency is key to responsive systems.
    • Doing things in the background avoids forcing users to wait and stare at a loading screen.

Fault Tolerant #

  • Distributed systems are fault tolerant: the shutdown of one data center doesn’t halt the entire system
  • Concurrency also enables fault detection.
    • A task that fails can notify a separate task to perform remedial action.
    • Sequential software can never be as resilient as concurrent software.

Simple #

  • Threading bugs are difficult to diagnose.
  • Concurrent solutions can be simpler and clearer than their sequential equivalents
    • Translating a concurrent real-world problem into a sequential solution hides detail and requires more work

The Seven Models #

  1. Threads and locks
  2. Functional programming
    • eliminates mutable state so functional programs are intrinsically thread-safe
  3. The Clojure Way - Separating identity and state
  4. Actors
    • Concurrent programming model with strong support for fault tolerance and resilience
  5. Communicating Sequential Processes
    • Emphasizes channels for communication
  6. Data Parallelism - Using GPUs
  7. The Lambda Architecture
    • Big data with MapReduce and stream processing to handle terabytes of data

2. Threads and Locks #

  • “Threads-and-locks programming is like a Ford Model T. It will get you from point A to point B, but it is primitive, difficult to drive, and both unreliable and dangerous compared to newer technology (Butcher 2014, 9).”

The Simplest Thing that Could Possibly Work #

  • Threads and locks are little more than a formalization of what the underlying hardware actually does.
    • similar idea to pointers and goto statements

Day 1: Mutual Exclusion and Memory Models #

Mutual exclusion
Using locks to ensure that only one thread can access data at a time.
  • used correctly, locks avoid race conditions; used carelessly, they invite deadlock

Creating a thread #

public class HelloWorld {
  public static void main(String[] args) throws InterruptedException {
    Thread myThread = new Thread() {
        public void run() {
          System.out.println("Hello from new thread");
        }
      };
    myThread.start();
    Thread.yield(); // give myThread a chance to run first; thread startup takes time
    System.out.println("Hello from main thread");
    myThread.join();
  }
}

Our First Lock #

  • We can create two threads that each count to 10,000, but the code below misbehaves
    • Instead of the expected output of 20,000, we see two behaviors
      1. the result is always some number below what we expect
      2. the result differs from run to run
    • This is caused by a race condition when two threads call increment() simultaneously
      • when both threads read count at the same time, both increment the same value and one update is lost
public class Counting {
  public static void main(String[] args) throws InterruptedException {
    class Counter {
      private int count = 0;
      public void increment() { ++count; }
      public int getCount() { return count; }
    }
    final Counter counter = new Counter();
    class CountingThread extends Thread {
      public void run() {
        for (int x = 0; x < 10000; ++x) {
           counter.increment();
        }
      }
    }

    CountingThread t1 = new CountingThread();
    CountingThread t2 = new CountingThread();
    t1.start(); t2.start();
    t1.join(); t2.join();
    System.out.println(counter.getCount());
  }
}
  • The solution is to synchronize access to count by using Java’s intrinsic lock
    • Whenever increment() is called, it acquires the Counter object’s lock, so any other thread calling it blocks until the lock is free.
    • this leads to the correct output of 20,000
public class Counting {
  public static void main(String[] args) throws InterruptedException {
    class Counter {
      private int count = 0;
      public synchronized void increment() { ++count; }
      public int getCount() { return count; }
    }
    final Counter counter = new Counter();
    class CountingThread extends Thread {
      public void run() {
        for (int x = 0; x < 10000; ++x) {
           counter.increment();
        }
      }
    }

    CountingThread t1 = new CountingThread();
    CountingThread t2 = new CountingThread();
    t1.start(); t2.start();
    t1.join(); t2.join();
    System.out.println(counter.getCount());
  }
}
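For comparison, the same race can be removed without an intrinsic lock; a sketch (not from the book) using the JDK's java.util.concurrent.atomic package:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounting {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger count = new AtomicInteger();
        Runnable work = () -> {
            for (int i = 0; i < 10000; ++i) {
                count.incrementAndGet(); // atomic read-modify-write, no lock
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count.get()); // always 20000
    }
}
```

incrementAndGet performs the read-increment-write as one indivisible hardware operation, so no interleaving can lose an update.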

Issue 1: Race Conditions #

Issue 2: Memory Visibility #

Issue 3: Deadlock #

Day 2: Beyond Intrinsic Locks #

Day 3: On the Shoulders of Giants #

3. Functional Programming #

  • p 49

If it Hurts, Stop Doing It #

Day 1: Programming Without Mutable State #

Day 2: Functional Parallelism #

Day 3: Functional Concurrency #

4. The Clojure Way - Separating Identity from State #

  • p 85

The Best of Both Worlds #

Day 1: Atoms and Persistent Data Structures #

Day 2: Agents and Software Transactional Memory #

Day 3: In Depth #

5. Actors #

  • p 115

More Object-Oriented than Objects #

Day 1: Messages and Mailboxes #

Day 2: Error Handling and Resilience #

Day 3: Distribution #

6. Communicating Sequential Processes #

  • p 153

Communication Is Everything #

Day 1: Channels and Go Blocks #

Day 2: Multiple Channels and IO #

Day 3: Client-Side CSP #

7. Data Parallelism #

  • p 189

The Supercomputer Hidden in Your Laptop #

Day 1: GPGPU Programming #

Day 2: Multiple Dimensions and Work-Groups #

Day 3: OpenCL and OpenGL - Keeping it on the GPU #

8. The Lambda Architecture #

  • p 223

Parallelism Enables Big Data #

Day 1: MapReduce #

Day 2: The Batch Layer #

Day 3: The Speed Layer #

9. Wrapping Up #

  • p 263

Next #

gourley-totty-2002

Citation #

@book{10.5555/555429,
author = {Totty, Brian and Gourley, David and Sayer, Marjorie and Aggarwal, Anshu and Reddy, Sailu},
title = {Http: The Definitive Guide},
year = {2002},
isbn = {1565925092},
publisher = {O'Reilly \& Associates, Inc.},
address = {USA},
abstract = {Web technology has become the foundation for all sorts of critical networked applications and far-reaching methods of data exchange, and beneath it all is a fundamental protocol: HyperText Transfer Protocol, or HTTP. HTTP: The Definitive Guide documents everything that technical people need for using HTTP efficiently. A reader can understand how web applications work, how the core Internet protocols and architectural building blocks interact, and how to correctly implement Internet clients and servers.}
}

Summary. What are the statements being made? #

Part I. HTTP: The Web’s Foundation #

1. Overview of HTTP #

  • overview:
    • How web clients and servers communicate
    • Where resources and web content come from
    • How web transactions work
    • The format of the messages used for HTTP communication
    • The underlying TCP network transport
    • The different variations of the HTTP protocol
    • Some of the many HTTP architectural components installed around the Internet

Web Clients and Servers #

  • web content lives on web servers.
  • Web servers speak the HTTP protocol
  • Clients send HTTP requests to servers, and the servers return the requested data in HTTP responses.
  • Example:
    • A web browser is an HTTP client
    • when you browse “http://www.oreilly.com/index.html”, the browser sends an HTTP request to the server www.oreilly.com
    • server tries to find the desired object, which is “/index.html”
    • if successful, the server sends the object to the client in an HTTP response

Resources #

  • Web servers host web resources
    • a resource can be a static file containing anything
    • text files, HTML files, MS Word files, Acrobat PDF files, JPEG image files, AVI movie files, or anything else
    • resources can also be dynamic, generated based on identity or the requested information (like YouTube or Facebook)
  • a resource is any kind of content source.

Media Types #

  • HTTP tags each object with a data format label called a MIME type
    • MIME == Multipurpose Internet Mail Extensions
    • was originally used for different email systems but worked well for HTTP to describe multimedia content
  • Web clients look at the MIME type to decide how to handle the requested object
    • Content-type: image/jpeg
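As an aside, the JDK ships a small extension-to-MIME-type table of its own; a sketch with an invented filename (this is a client-side guess, not the authoritative Content-type a server sends):

```java
import java.net.URLConnection;

public class MimeGuess {
    public static void main(String[] args) {
        // Map a file extension to a MIME type, much as a web server does
        // when it attaches a Content-type header to a response.
        String type = URLConnection.guessContentTypeFromName("photo.jpeg");
        System.out.println(type); // image/jpeg
    }
}
```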

URIs #

  • each web server resource has a name, called a uniform resource identifier (URI)
    • like the postal address of the internet
  • URIs come in two flavors, URLs and URNs

URLs #

  • the uniform resource locator (URL) is the most common form of resource identifier
  • URLs follow the 3 part format:
    1. first part is the scheme and it describes the protocol used
    2. second part gives the server address
    3. the rest names a resource on the web server
      • /specials/saw-blade.gif
  • Almost every URI is a URL

Transactions #

  • An HTTP transaction consists of a request command, sent from the client, and a response result, sent from the server
    • Communication happens with blocks of data called HTTP messages

Methods #

  • HTTP supports several different request commands, called HTTP methods
    • Every request message has a method that tells the server what action to perform
  • For example:
    • GET: Send named resource from the server to the client
    • PUT: Store data from client to a named server resource
    • DELETE: Delete the named resource from a server
    • POST: Send client data into a server gateway application
    • HEAD: Send just the HTTP headers from the response for the named resource

Status Codes #

  • Every HTTP response contains a status code.
    • 3 digit code
  • Status code classes
    • 100-199: Informational
    • 200-299: Successful
    • 300-399: Redirection
    • 400-499: Client error
    • 500-599: Server error
  • For example:
    • 200: OK. Document returned correctly
    • 302: Redirect. Go someplace else to get the resource
    • 404: Not found. Can’t find this resource

Web Pages can consist of multiple objects #

  • multiple HTTP transactions can be made to populate a web page.
    • like requesting images
  • HTTP requests can be made to different servers

2. URLs and Resources #

  • URLs are the standardized names of the Internet’s resources
  • This chapter is about
    • URL syntax and what the URL components mean and do
    • URL shortcuts
    • URL encoding
    • Common URL schemes

3. HTTP Messages #

  • This chapter is about how HTTP messages carry the data to be moved. It focuses on how to create and understand them
    • The three parts of HTTP messages (start line, headers, and entity body)
    • The differences between request and response messages
    • The various functions (methods) that request messages support
    • The various status codes that are returned with response messages
    • What the various HTTP headers do

Parts of a Message #

  • Each message consists of three parts
    1. start line :: describes the message
    2. headers :: contain attributes that describe the body
    3. body :: optionally contains data
  • Start line and headers are ASCII text, while the body can be binary
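The three parts are easy to pick out in a raw exchange; an illustrative example (host and body invented):

```
GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-type: text/plain
Content-length: 13

Hello, world!
```

The first two lines plus the blank line form a complete request: a start line, one header, and no body. Below it is the response: a start line, two headers describing the body, a blank line, then the 13-byte entity body.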

Message Syntax #

  • All HTTP messages are either request messages or response messages
    • Request messages ask a web server for an action
    • Response messages carry the results of a request back to the client.

Start Lines #

  • All HTTP messages begin with a start line
    • start line for a request message says what to do
    • start line for a response message says what happened
  • Request line
    • request messages ask servers to do something to a resource
    • contains a method describing the operation to perform and a request URL naming the resource to perform it on
    • also contains HTTP version
  • Response line
    • Response messages carry status and any resulting data from an operation
    • contains the HTTP version, a numeric status code, and a textual reason phrase

4. Connection Management #

  • This chapter is about:
    • how HTTP uses TCP connections
    • Delays, bottlenecks and clogs in TCP connections
    • HTTP optimizations
    • Dos and don’ts for managing connections

TCP Connections #

  • 7-step process:
    1. Browser extracts the hostname from the URL
    2. Browser looks up the IP address for this hostname via DNS
    3. Browser gets the port number (default 80 for HTTP, 443 for HTTPS)
    4. Browser makes a TCP connection to the IP address at that port
    5. Browser sends an HTTP GET request message to the server
    6. Browser reads the HTTP response message from the server
    7. Browser closes the connection
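Steps 1 through 3, and the message that step 5 would send, can be sketched in Java; a minimal illustration (the URL is invented, and a real Host header would usually omit the default port):

```java
import java.net.URI;

public class RequestBuilder {
    // Steps 1-3: extract the hostname, resolve the implied port, name the resource.
    static String requestFor(String url) {
        URI u = URI.create(url);
        int port = u.getPort() != -1 ? u.getPort()
                 : "https".equals(u.getScheme()) ? 443 : 80; // implied ports
        String path = u.getPath().isEmpty() ? "/" : u.getPath();
        // Step 5 would write this message to the TCP connection from step 4:
        return "GET " + path + " HTTP/1.1\r\n"
             + "Host: " + u.getHost() + ":" + port + "\r\n"
             + "\r\n";
    }

    public static void main(String[] args) {
        System.out.print(requestFor("http://www.example.com/index.html"));
    }
}
```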

Part II. HTTP Architecture #

5. Web Servers #

  • p 109

6. Proxies #

  • p 129

7. Caching #

  • p 161

8. Integration Points: Gateways, Tunnels, and Relays #

  • p 197

9. Web Robots #

  • p 215

10. HTTP-NG #

  • p 247

Part III. Identification, Authorization, and Security #

11. Client Identification and Cookies #

  • p 257

12. Basic Authentication #

  • p 277

13. Digest Authentication #

  • p 286

14. Secure HTTP #

  • p 306

Part IV. Entities, Encodings, and Internationalization #

15. Entities and Encodings #

  • p 341

16. Internationalization #

  • p 370

17. Content Negotiation and Transcoding #

  • p 395

Part V. Content Publishing and Distribution #

18. Web Hosting #

  • p 411

19. Publishing Systems #

  • p 424

20. Redirection and Load Balancing #

  • p 448

21. Logging and Usage Tracking #

  • p 483

Read after #

dean-ghemawat-2008

Citation #

@article{dean2008mapreduce,
  author={Dean, Jeffrey and Ghemawat, Sanjay},
  title={MapReduce: Simplified Data Processing on Large Clusters},
  journal={Communications of the ACM},
  address={New York, NY, USA},
  volume={51},
  number={1},
  year={2008},
  doi={10.1145/1327452.1327492}
}

Summary #

(Dean & Ghemawat, 2008)

References #

Dean, J., & Ghemawat, S. (2008). MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1). https://doi.org/10.1145/1327452.1327492

douglass-2003

Citation #

  • Author: Bruce Powel Douglass
  • Date Published: 2003
  • Title: Real-Time Design Patterns: Robust Scalable Architecture for Real-Time Systems
  • Source (url, publisher name, or doi): Addison-Wesley

Foreword #

  • by Doug Jensen, Prof of CS at Carnegie Mellon for 8 years.
    • 30 years working in military and industrial real-time computing
    • Works at MITRE corporation which does research on real-time computing systems for strategic national interest
  • References lea-1994
  • “Software patterns and UML enable potentially lower software costs in many systems (xiii)”
  • Real-time systems span the entire range of complexity and costs
    • first case: hardware costs so much more than software costs, like software in laser gyroscope
    • second case: software is so large and complex that it dwarfs hardware costs, like military or commercial aircraft
  • “Historically, developers of real-time software have lagged behind many other developers in using the most contemporary software engineering methodologies. There are several reasons for this.”
    • “One is that some real-time software is so simple that the most elementary methodologies are needed”
    • “A more common reason is that many real-time systems with non-trivial software suffer from hardware capacity constraints (due to size, weight, power, and so on). Software structured for purposes such as re-usability, modularity, or flexibility does tend to consume additional time or space resources”
    • “Yet another reason is that real-time software practitioners are frequently application experts who are not always educated enough in modern software engineering to understand and employ it properly (xiv)”
  • Knowing patterns + UML allows us to build larger scale projects, more dynamic and complex, and more distributed real-time computing systems

Preface #

  • “Real time and embedded (RTE) systems must execute in a much more constrained environment than desktop computers (xvii)”
    • must be highly efficient, optimized for limited processor and memory resources
    • yet must often outperform systems with significantly more compute power
    • RTE have safety-critical and high-reliability requirements
      • Avionics flight control, nuclear power plant control, life support and medical instrumentation.
  • The best developers with decades of experience encounter the same problems over and over.
    • These problems are abstracted and their solutions generalized into design patterns
  • This book focuses on practical development rather than theoretical

Part I: Design Pattern Basics #

  • UML is related to architecture.
  • Two types of architecture: logical and physical

Chapter 1: Introduction - review of UML #

Chapter 2: Architecture and the UML - defines ROPES #

Chapter 3: The Role of Design Patterns #

  • explains design patterns and their role in defining architecture
    • Introduces how design patterns could be effectively discussed in a software development process

Part II: Architectural Design Patterns #

Chapter 4: Subsystem and Component Architecture Patterns #

Chapter 5: Concurrency Patterns #

  • p 203

5.1 Introduction #

5.2 Concurrency Pattern #

5.3 Message Queuing Pattern #

5.4 Interrupt Pattern #

5.5 Guarded Call Pattern #

5.6 Rendezvous Pattern #

5.7 Cyclic Executive Pattern #

5.8 Round Robin Pattern #

5.9 Static Priority Pattern #

5.10 Dynamic Priority Pattern #

Chapter 6: Memory Patterns #

  • p 259

6.1 Memory Management Patterns #

  • p 260

6.2 Static Allocation Pattern: Allocate memory up front #

6.3 Pool Allocation Pattern: Preallocate pools of needed objects #

  • p 266
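The one-line summary above can be made concrete; a minimal sketch of the idea (not the book's implementation), in Java for consistency with the other examples here:

```java
import java.util.ArrayDeque;

public class MessagePool {
    static class Message { byte[] payload = new byte[64]; }

    private final ArrayDeque<Message> free = new ArrayDeque<>();

    MessagePool(int size) {
        // Preallocate every object up front; nothing is allocated after startup,
        // which is the point of the pattern on memory-constrained systems.
        for (int i = 0; i < size; ++i) free.push(new Message());
    }

    Message acquire() { return free.pop(); }   // throws if the pool is exhausted
    void release(Message m) { free.push(m); }

    public static void main(String[] args) {
        MessagePool pool = new MessagePool(4);
        Message m = pool.acquire();
        System.out.println(pool.free.size()); // 3 while one object is in use
        pool.release(m);
        System.out.println(pool.free.size()); // 4 again after release
    }
}
```

Acquire and release only move pointers between the pool and the caller, so allocation cost and fragmentation are paid once, at startup.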

6.4 Fixed Sized Buffer Pattern: Allocates memory in same-sized blocks #

  • p 273

6.5 Smart Pointer Pattern: Makes pointers reliable #

  • p 278

6.6 Garbage Collection Pattern: Automatically reclaims lost memory #

  • p 286

6.7 Garbage Compactor Pattern: Automatically defragments and reclaims memory #

  • p 293

Chapter 7: Resource Patterns #

  • p 301

7.1 Introduction #

7.2 Critical Section Pattern #

  • p 308

7.3 Priority Inheritance Pattern #

  • p 314

7.4 Highest Locker Pattern #

  • p 323

7.5 Priority Ceiling Pattern #

  • p 330

7.6 Simultaneous Locking Pattern #

  • p 338

7.7 Ordered Locking Pattern #

  • p 345

Chapter 8: Distribution Patterns - distributed computing #

8.1 Introduction #

  • p 354

8.2 Shared Memory Pattern #

  • p 356

8.3 Remote Method Call Pattern #

  • p 362

8.4 Observer Pattern #

  • p 370

8.5 Data Bus Pattern #

  • p 377

8.6 Proxy Pattern #

  • p 387

8.7 Broker Pattern #

  • p 395

Chapter 9: Safety and Reliability Patterns #

9.1 Introduction #

  • p 405

9.1.1 Handling Faults #

9.2 Protected Single Channel Pattern #

  • p 409

9.3 Homogeneous Redundancy Pattern #

  • p 415

9.4 Triple Modular Redundancy Pattern #

  • p 421

9.5 Heterogeneous Redundancy Pattern #

  • p 426

9.6 Monitor-Actuator Pattern #

  • p 432

9.7 Sanity Check Pattern #

  • p 438

9.8 Watchdog Pattern #

  • p 443

9.9 Safety Executive Pattern #

  • p 450

christensen-1997

Citation #

@book{christensen2015innovator,
  title={The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail},
  author={Christensen, Clayton M},
  year={1997},
  publisher={Harvard Business Review Press}
}

Examples of Disruption #

  • Blockbuster vs Netflix
  • Stairs vs Elevators/Escalators
  • Aircraft Autopilot + Human Pilots
  • Cars and Self-Driving Cars
  • Automation, e.g., automated kitchens

Takeaways #

Technology
“means the processes by which an organization transforms labor, capital, materials, and information into products and services of greater value (Christensen 1997)”

Apple is a UX company.

williams-2019

Citation #

  • Author: Anthony Williams
  • Date Published: 2019
  • Title: C++ Concurrency in Action, Second Edition
  • Source (url, publisher name, or doi): Manning

Ch 1 - Hello, world of concurrency in C++! #

1.1 What is concurrency? #

1.2 Why use concurrency? #

1.3 Concurrency and multithreading in C++ #

1.4 Getting started #

Ch 2 - Managing threads #

2.1 Basic thread management #

Launching a thread #

Waiting for a thread to complete #

Waiting in exceptional circumstances #

Running threads in the background #

2.2 Passing arguments to a thread function #

2.3 Transferring ownership of a thread #

2.4 Choosing the number of threads at runtime #

2.5 Identifying threads #

Ch 3 - Sharing data between threads #

3.1 Problems with sharing data between threads #

Race Conditions #

Avoiding problematic race conditions #

3.2 Protecting shared data with mutexes #

Using mutexes in C++ #

Structuring code for protecting shared data #

Spotting race conditions inherent in interfaces #

Deadlock: the problem and a solution #

Further guidelines for avoiding deadlock #

Flexible locking with std::unique_lock #

Transferring mutex ownership between scopes #

Locking at an appropriate granularity #

3.3 Alternative facilities for protecting shared data #

Protecting shared data during initialization #

Protecting rarely updated data structures #

Recursive locking #

Ch 4 - Synchronizing concurrent operations #

4.1 - Waiting for an event or other condition #

Waiting for a condition with condition variables #

Building a thread-safe queue with condition variables #

4.2 - Waiting for one-off events with futures #

Returning values from background tasks #

Associating a task with a future #

Making (std::)promises #

Saving an exception for the future #

Waiting from multiple threads #

4.3 - Waiting with a time limit #

Clocks #

Durations #

Time points #

Functions that accept timeouts #

4.4 - Using synchronization of operations to simplify code #

Functional programming with futures #

Synchronizing operations with message passing #

Continuation-style concurrency with the Concurrency TS #

Chaining continuations #

Waiting for more than one future #

Waiting for the first future with a set with when_any #

Latches and barriers in the Concurrency TS #

A basic latch type: std::experimental::latch #

std::experimental::barrier: a basic barrier #

std::experimental::flex_barrier - std::experimental::barrier’s flexible friend #

Ch 5 - The C++ memory model and operations on atomic types #

5.1 - Memory model basics #

Objects and memory locations #

Objects, memory locations, and concurrency #

Modification orders #

5.2 - Atomic operations and types in C++ #

The standard atomic types #

Operations on std::atomic_flag #

Operations on std::atomic<bool> #

Operations on std::atomic<T*>: pointer arithmetic #

Operations on standard atomic integral types #

The std::atomic<> primary class template #

Free functions for atomic operations #

5.3 - Synchronizing operations and enforcing ordering #

The synchronizes-with relationship #

The happens-before relationship #

Memory ordering for atomic operations #

Release sequences and synchronizes-with #

Fences #

Ordering non-atomic operations with atomics #

Ordering non-atomic operations #

Ch 6 - Designing lock-based concurrent data structures #

6.1 - What does it mean to design for concurrency? #

Guidelines for designing data structures for concurrency #

6.2 - Lock-based concurrent data structures #

A thread-safe stack using locks #

A thread-safe queue using locks and condition variables #

A thread-safe queue using fine-grained locks and condition variables #

6.3 - Designing more complex lock-based data structures #

Writing a thread-safe lookup table using locks #

Writing a thread-safe list using locks #

Ch 7 - Designing lock-free concurrent data structures #

7.1 - Definitions and consequences #

Types of non-blocking data structures #

Lock-free data structures #

Wait-free data structures #

The pros and cons of lock-free data structures #

7.2 - Examples of lock-free data structures #

Writing a thread-safe stack without locks #

Stopping those pesky leaks: managing memory in lock-free data structures #

Detecting nodes in use with reference counting #

Applying the memory model to the lock-free stack #

Writing a thread-safe queue without locks #

7.3 - Guidelines for writing lock-free data structures #

Guideline: use std::memory_order_seq_cst for prototyping #

Guideline: use a lock-free memory reclamation scheme #

Guideline: watch out for the ABA problem #

Guideline: identify busy-wait loops and help the other thread #

Ch 8 - Designing concurrent code #

8.1 - Techniques for dividing work between threads #

Dividing data between threads before processing begins #

Dividing data recursively #

Dividing work by task type #

8.2 - Factors affecting the performance for concurrent code #

How many processors? #

Data contention and cache ping-pong #

False sharing #

How close is your data? #

Over-subscription and excessive task switching #

8.3 - Designing data structures for multi-threaded performance #

Dividing array elements for complex operations #

Data access patterns in other data structures #

8.4 - Additional considerations when designing for concurrency #

Exception safety in parallel algorithms #

Scalability and Amdahl’s law #

Hiding latency with multiple threads #

Improving responsiveness with concurrency #

8.5 - Designing concurrent code in practice #

A parallel implementation of std::for_each #

A parallel implementation of std::find #

A parallel implementation of std::partial_sum #

Ch 9 - Advanced thread management #

9.1 - Thread pools #

The simplest possible thread pool #

Waiting for tasks submitted to a thread pool #

Tasks that wait for other tasks #

Avoid contention on the work queue #

Work stealing #

9.2 - Interrupting threads #

Launching and interrupting another thread #

Detecting that a thread has been interrupted #

Interrupting a condition variable wait #

Interrupting a wait on std::condition_variable_any #

Interrupting other blocking calls #

Handling interruptions #

Interrupting background tasks on application exit #

Ch 10 - Parallel algorithms #

10.1 - Parallelizing the standard library algorithms #

10.2 - Execution policies #

General effects of specifying an execution policy #

std::execution::sequenced_policy #

std::execution::parallel_policy #

std::execution::parallel_unsequenced_policy #

10.3 - The parallel algorithms from the C++ Standard Library #

Examples of using parallel algorithms #

Counting visits #

Ch 11 - Testing and debugging multi-threaded applications #

Unwanted blocking #

Race conditions #

Reviewing code to locate potential bugs #

Designing for testability #

Multi-threaded testing techniques #

Structuring multi-threaded test code #

Testing the performance of multi-threaded code #