Multi-threading concepts are important: atomics, locks, the pitfalls of different designs, and how to make code thread-safe. Cache locality is another huge topic these days. Asynchronous architectures and callbacks are what you will be dealing with every day.

• What is cache locality?
• How do multicore systems ensure their caches are in sync?
• How do you get around this problem?
• Why are signals slow and why is context switching bad?
• What exactly happens during a context switch?

“A computer program or subroutine is called reentrant if multiple invocations can safely run concurrently on a single processor system.”

“Race condition: an unfortunate order of execution causes undesirable behaviour.”

Starting in C++11, scoped static initialization is thread-safe, but it comes with a cost: if the initializer recursively re-enters the declaration while initialization is in progress, the behaviour is undefined.

# Issues

• Ensure critical sections are as small as possible
• Only a problem for multiple writers – multiple readers OK
• Too few threads: algorithm is sub-optimal
• Too many threads: overhead of creating/managing and partitioning the data is greater than processing advantage; software threads outnumber the available hardware threads and the OS must intervene
• Data races, deadlocks and livelocks – unsynchronised access to shared memory can introduce race conditions and undefined behaviour (program results depend non-deterministically on the relative timings of two or more threads)

## DCLP (double-checked locking pattern)

• Livelocks

### Prevention

• Try to avoid calling out to external code while holding a lock.
• Try to avoid holding locks for longer than you need to.
• If you ever need to acquire two locks at once, document the ordering thoroughly and make sure you always use the same order.
• Immutability is great for multi-threading: immutable data is inherently thread-safe. Functional programming works well concurrently partly because of its emphasis on immutability.

# Mutex

See std::mutex or std::atomic in C++.