
PLC Chapter X – Concurrency


  • Single-core processor speeds have plateaued, so the extra transistors Moore’s law still delivers are now spent on parallelism to increase speed.
  • Concurrent programming is necessary to exploit parallel hardware.
  • Sometimes it is simply natural to describe a problem with multiple threads, just as we divide a problem into several steps.


The original C++ Standard, published in 1998, supported only single-threaded programming.

The new C++ Standard (referred to as C++11, formerly C++0x) was published in 2011. It acknowledges the existence of multithreaded programs, and a memory model for concurrency is also introduced.


Dividing a problem into multiple executing threads is an important programming technique.

Multiple executing threads may be the best way to describe a problem.

With multiple executing threads, we can write highly efficient programs by taking advantage of any parallelism available in the computer system.


Thread creation – be able to create another thread of control.

Thread synchronization – be able to establish timing relationships among threads.

One thread waits until another thread has reached a certain point in its code.

One thread is ready to transmit information while another is ready to receive it, simultaneously.

Thread communication – be able to correctly transmit data among threads.


C++11 introduced a new thread library including utilities for starting and managing threads.

Creating an instance of std::thread will automatically start a new thread.

Two threads will exist: the main thread launches a new thread when it encounters the code std::thread th(threadFunction), which executes the function threadFunction().

The join function forces the current thread to wait for the thread th to finish. Otherwise the main function may exit before the thread th has finished.


Data are usually shared between threads. A problem arises when multiple threads attempt to operate on the same object simultaneously.

If the operation is atomic (not divisible), meaning no other thread can observe or modify any partial result during the operation on the object, then it is safe. Otherwise, we have a race condition. A critical section is a piece of code that accesses a shared resource (data structure or device) and must not be executed concurrently by more than one thread of execution.

Preventing simultaneous execution of a critical section by multiple threads is called mutual exclusion.


Sharing objects between threads can lead to synchronization issues. For example:

Five threads are created, each trying to increment a shared counter 5000 times. This program has a synchronization problem: on my computer the final count differs from run to run, and because increments are lost it is usually less than the expected 25000.


The C++11 concurrency library introduces atomic types as a template class: std::atomic. You can instantiate the template with the type you want, and operations on that variable will be atomic and therefore thread-safe.

std::atomic<Type> object.

A different locking technique is applied depending on the data type and size.

Lock-free technique: used for small scalar types such as int, long, and float. It is much faster than the mutex technique.

Mutex technique: used for big types (such as one needing 2 MB of storage). Here there is no performance advantage for atomic types over explicit mutexes.


Besides protecting shared data, we also need to synchronize actions on separate threads.

In the C++ Standard Library, condition variables and futures are provided to handle synchronization problems.

The condition_variable class is a synchronization primitive that can be used to block a thread, or multiple threads at the same time, until:

a notification is received from another thread

a timeout expires

Any thread that intends to wait on std::condition_variable has to acquire a std::unique_lock first. The wait operations atomically release the mutex and suspend the execution of the thread. When the condition variable is notified, the thread is awakened, and the mutex is reacquired.


Condition variables require std::unique_lock rather than std::lock_guard: the waiting thread must unlock the mutex while it is waiting and lock it again afterwards, and std::lock_guard does not provide such flexibility.

The flexibility to unlock a std::unique_lock is not just used for the call to wait(); it is also used once we have the data to process, but before processing it: processing data can potentially be a time-consuming operation, and it is a bad idea to hold a lock on a mutex for longer than necessary.


If a thread needs to wait for a specific one-off event, then it obtains a future representing this event. The thread can poll the future to see if the event has occurred while performing some other task.

There are two sorts of future templates in the C++ Standard Library.

std::future<> (named std::unique_future<> in early C++0x drafts): the instance is the only one that refers to its associated event.

std::shared_future<>: multiple instances may refer to the same event. All the instances become ready at the same time, and they may all access any data associated with the event.


Two aspects to the memory model:

the basic structural aspects — how things are laid out in memory

Every variable is an object, including those that are members of other objects.

Every object occupies at least one memory location.

Variables of fundamental types such as int or char occupy exactly one memory location, whatever their size, even if they’re adjacent or part of an array.

Adjacent bit fields are part of the same memory location.

The concurrency aspects

If there is no enforced ordering between two accesses to a single memory location from separate threads, one or both of those accesses is not atomic,

and one or both is a write, then this is a data race and causes undefined behaviour.


Each of the operations on atomic types has an optional memory-ordering argument that can be used to specify the required memory-ordering semantics. These operations can be divided into three categories:

Store operations, which can have memory_order_relaxed, memory_order_release, or memory_order_seq_cst ordering

Load operations, which can have memory_order_relaxed, memory_order_consume, memory_order_acquire, or memory_order_seq_cst ordering

Read-modify-write operations, which can have memory_order_relaxed, memory_order_consume, memory_order_acquire, memory_order_release, memory_order_acq_rel, or memory_order_seq_cst ordering


The C++ committee introduced concurrency into C++0x/C++11, making the language support multiple threads and keeping C++ adapted to the current style of programming.

To make concurrency possible, a language should support thread launching, thread synchronization, and thread communication.

The basic problems in concurrency are how to protect shared data and how to synchronize threads. Mutexes and the atomic template are solutions to race conditions.

Lower-level control based on the memory model allows us to make the semantics of concurrent operations precise.

