Macros are preprocessor directives: they are processed before the actual compilation phase. One of the most common preprocessor directives is #define, which is used to define macros.
If you want to change a macro definition at compile time, there are several ways to do it:
Using Compiler Flags: You can use the -D flag (for most compilers like GCC and Clang) to define macros.
```bash
g++ your_file.cpp -o output -DMY_MACRO='"Compile Time Value"'
```
When you run the output, it will print “Compile Time Value”.
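A minimal sketch of what your_file.cpp might contain (the #ifndef fallback here is an illustrative assumption, not from the original):

```cpp
#include <iostream>

// Fallback in case the macro is not supplied on the command line (assumed default)
#ifndef MY_MACRO
#define MY_MACRO "Default Value"
#endif

int main() {
    std::cout << MY_MACRO << std::endl;
    return 0;
}
```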
Using Conditional Compilation: This is where you use #ifdef, #ifndef, #else, and #endif directives to conditionally compile parts of your code based on whether a certain macro is defined or not.
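A minimal sketch using a hypothetical DEBUG_MODE macro (compile with `-DDEBUG_MODE` to select the first branch):

```cpp
#include <iostream>

int main() {
#ifdef DEBUG_MODE
    std::cout << "Debug build" << std::endl;
#else
    std::cout << "Release build" << std::endl;
#endif
    return 0;
}
```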
```cpp
int main() {
    type = HashMapVariantType::_int;
    dispatch();
    type = HashMapVariantType::_long;
    dispatch();
    type = HashMapVariantType::_double;
    dispatch();
    return 0;
}
```
```cpp
using Container = std::vector<int32_t>;
using ContainerPtr = std::shared_ptr<Container>;

void append_by_const_reference_shared_ptr(const ContainerPtr& container, const int num) {
    // can call a non-const member function: only the shared_ptr is const, not the pointee
    container->push_back(num);
}

void append_by_const_reference(const Container& container, const int num) {
    // cannot call a non-const member function
    // container.push_back(num);
}

void append_by_bottom_const_pointer(const Container* container, const int num) {
    // cannot call a non-const member function
    // container->push_back(num);
}

void append_by_top_const_pointer(Container* const container, const int num) {
    // can call a non-const member function: only the pointer itself is const
    container->push_back(num);
}
```
```cpp
// Compile error
// Requested alignment is less than minimum int alignment of 4 for type 'Foo2'
// struct alignas(1) Foo2 {
//     char c;
//     int32_t i32;
// };

// Compile error
// Requested alignment is less than minimum int alignment of 4 for type 'Foo3'
// struct alignas(2) Foo3 {
//     char c;
//     int32_t i32;
// };
```
In C++, storage classes determine the scope, visibility, and lifetime of variables. There are four storage classes in C++:
Automatic Storage Class (default): Variables declared within a block or function without specifying a storage class have automatic storage class. These variables are created when the block or function is entered and destroyed when it is exited. The keyword "auto" could historically be used explicitly for this purpose, but since C++11 auto is repurposed for type deduction and is no longer a storage-class specifier.
Static Storage Class: Variables with static storage class are created and initialized only once, and their values persist across function calls. They are initialized to zero by default. Static variables can be declared within a block or function, but their scope is limited to that block or function. The keyword “static” is used to specify static storage class.
Register Storage Class (deprecated): The register storage class is used to suggest that a variable be stored in a register instead of memory. The keyword “register” is used to specify register storage class. However, the compiler is free to ignore this suggestion.
Extern Storage Class: The extern storage class is used to declare a variable that is defined in another translation unit (source file). It is often used to provide a global variable declaration that can be accessed from multiple files. When using extern, the variable is not allocated any storage, as it is assumed to be defined elsewhere. The keyword “extern” is used to specify extern storage class.
Here’s an example illustrating the usage of different storage classes:
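(The sketch below is illustrative; `g_counter` and `count_calls` are hypothetical names.)

```cpp
#include <iostream>

extern int g_counter;  // extern: declaration, normally defined in another translation unit
int g_counter = 0;     // the definition (shown here for a self-contained example)

void count_calls() {
    int automatic = 0;     // automatic: re-created on every call
    static int calls = 0;  // static: initialized once, persists across calls
    // register int fast = 0;  // register: a deprecated hint, removed in C++17
    automatic++;
    calls++;
    std::cout << "automatic=" << automatic << " calls=" << calls << std::endl;
}

int main() {
    count_calls();  // prints: automatic=1 calls=1
    count_calls();  // prints: automatic=1 calls=2
    return 0;
}
```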
Function Templates: These are templates that produce templated functions that can operate on a variety of data types.
```cpp
template <typename T>
T max(T a, T b) {
    return (a > b) ? a : b;
}
```
Class Templates: These produce templated classes. The Standard Template Library (STL) makes heavy use of this type of template for classes like std::vector, std::map, etc.
```cpp
template <typename T>
class Stack {
    // ... class definition ...
};
```
Variable Templates: Introduced in C++14, these are templates that produce templated variables.
```cpp
template <typename T>
constexpr T pi = T(3.1415926535897932385);
```
Alias Templates: These define a templated typedef, providing a way to simplify complex type names.
```cpp
template <typename T>
using Vec = std::vector<T, std::allocator<T>>;
```
Member Function Templates: These are member functions within classes that are templated. The containing class itself may or may not be templated.
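A brief sketch of a member function template inside a non-templated class (names are illustrative):

```cpp
#include <iostream>

class Printer {  // the containing class itself is not templated
public:
    template <typename T>
    void print(const T& value) {  // member function template
        std::cout << value << std::endl;
    }
};

// Usage: Printer{}.print(42); Printer{}.print("hello");
```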
Template Template Parameters: This advanced feature allows a template to have another template as a parameter.
```cpp
template <template <typename> class ContainerType>
class MyClass {
    // ... class definition ...
};
```
Non-type Template Parameters: These are templates that take values (like integers, pointers, etc.) as parameters rather than types.
```cpp
template <int size>
class Array {
    int elems[size];
    // ... class definition ...
};
```
Nested Templates: This refers to templates defined within another template. It’s not a different kind of template per se, but rather a feature where one template can be nested inside another.
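A minimal sketch of one template nested inside another (names are illustrative):

```cpp
template <typename T>
class Outer {
public:
    template <typename U>
    class Inner {  // a template defined within another template
        T t;
        U u;
    };
};

// Usage: Outer<int>::Inner<double> obj;
```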
Function and Class Templates: When you define a function template or a class template in a header, you're not defining an actual function or class; you're defining a blueprint from which actual functions or classes can be instantiated. Only when these templates are instantiated do they become tangible entities in the object file. If multiple translation units include the same template and instantiate it in the same way, they all produce the same instantiation, so this does not violate the One Definition Rule (ODR).
Variable Templates: A variable template is still a blueprint, like function and class templates. But the key difference lies in how the compiler treats template instantiations for variables versus functions/classes. For variables, the instantiation actually defines a variable. If this template is instantiated in multiple translation units, it results in multiple definitions of the same variable across those translation units, violating the ODR. Thus, for variable templates, the inline keyword is used to ensure that all instances of a variable template across multiple translation units are treated as a single entity, avoiding ODR violations.
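For instance, mirroring the pi example from earlier (a sketch of the inline usage described above):

```cpp
// 'inline' lets every translation unit share a single definition
// of each instantiation of this variable template
template <typename T>
inline constexpr T pi = T(3.1415926535897932385);
```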
```cpp
// Base class has a pure virtual function for cloning
class AbstractShape {
public:
    virtual ~AbstractShape() = default;
    virtual std::unique_ptr<AbstractShape> clone() const = 0;
};

// This CRTP class implements clone() for Derived
template <typename Derived>
class Shape : public AbstractShape {
public:
    std::unique_ptr<AbstractShape> clone() const override {
        return std::make_unique<Derived>(static_cast<Derived const&>(*this));
    }

protected:
    // We make clear the Shape class needs to be inherited
    Shape() = default;
    Shape(const Shape&) = default;
    Shape(Shape&&) = default;
};

// Every derived class inherits from the CRTP class instead of the abstract class
class Square : public Shape<Square> {};

class Circle : public Shape<Circle> {};

int main() {
    Square s;
    auto clone = s.clone();
    return 0;
}
```
5.16 PIMPL
In C++, the term pimpl is short for pointer to implementation or private implementation. It’s an idiom used to separate the public interface of a class from its implementation details. This helps improve code modularity, encapsulation, and reduces compile-time dependencies.
Here’s how the pimpl idiom works:
Public Interface: You define a class in your header file (.h or .hpp) that contains only the public interface members (public functions, typedefs, etc.). This header file should include minimal implementation details to keep the interface clean and focused.
Private Implementation: In the implementation file (.cpp), you declare a private class that holds the actual implementation details of your class. This private class is typically defined within an anonymous namespace or as a private nested class of the original class. The private class contains private data members, private functions, and any other implementation-specific details.
Pointer to Implementation: Within the main class, you include a pointer to the private implementation class. The public functions in the main class forward calls to the corresponding functions in the private implementation class.
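A minimal sketch of the idiom (Widget and its members are illustrative names):

```cpp
// widget.h -- public interface only
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();  // must be defined in the .cpp, where Impl is a complete type
    void do_something();

private:
    struct Impl;                  // forward declaration only
    std::unique_ptr<Impl> _impl;  // pointer to implementation
};

// widget.cpp -- private implementation
struct Widget::Impl {
    int state = 0;
    void do_something_impl() { state++; }
};

Widget::Widget() : _impl(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::do_something() { _impl->do_something_impl(); }
```

Because the header only mentions Impl through a pointer, clients recompile only when the public interface changes, not when the implementation does.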
By using the pimpl idiom, you achieve several benefits:
Reduces compile-time dependencies: Changes to the private implementation do not require recompilation of the public interface, reducing compilation times.
Enhances encapsulation: Clients of the class only need to know about the public interface, shielding them from implementation details.
Minimizes header dependencies: Since the private implementation is not exposed in the header, you avoid leaking implementation details to client code.
Eases binary compatibility: Changing the private implementation does not require recompiling or re-linking client code, as long as the public interface remains unchanged.
Cache coherence and memory consistency are two fundamental concepts in parallel computing systems, but they address different issues:
Cache Coherence:
This concept is primarily concerned with the values of copies of a single memory location that are cached at several caches (typically, in a multiprocessor system). When multiple processors with separate caches are in a system, it’s possible for those caches to hold copies of the same memory location. Cache coherence ensures that all processors in the system observe a single, consistent value for the memory location. It focuses on maintaining a global order in which writes to each individual memory location occur.
For example, suppose we have two processors P1 and P2, each with its own cache. If P1 changes the value of a memory location X that’s also stored in P2’s cache, the cache coherence protocols will ensure that P2 sees the updated value if it tries to read X.
Memory Consistency:
While cache coherence is concerned with the view of a single memory location, memory consistency is concerned with the ordering of multiple updates to different memory locations (or to a single memory location) from different processors. It determines when a write by one processor to a shared memory location becomes visible to all other processors.
A memory consistency model defines the architecturally visible behavior of a memory system. Different consistency models make different guarantees about the order and visibility of memory operations across different threads or processors. For example, sequential consistency, a strict type of memory consistency model, says that all memory operations must appear to execute in some sequential order that’s consistent with the program order of each individual processor.
In summary, while both are essential for correctness in multiprocessor systems, cache coherence deals with maintaining a consistent view of a single memory location, while memory consistency is concerned with the order and visibility of updates to different memory locations.
6.1.2 Happens-before
If an operation A "happens-before" another operation B, the effects of A are guaranteed to be visible to B. In other words, any data or side effects produced by A will be visible to B when it executes.
6.2 Memory consistency model
6.2.1 Sequential consistency model
the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program
The sequential consistency model (SC) essentially stipulates two things:
Each thread’s instructions are executed in the order specified by the program (from the perspective of a single thread)
The interleaving order of thread execution can be arbitrary, but the overall execution order of the entire program, as observed by all threads, must be the same (from the perspective of the entire program)
That is, there must not be a situation where, for write operations W1 and W2, processor 1 sees the order W1 -> W2 while processor 2 sees the order W2 -> W1.
6.2.2 Relaxed consistency model
The relaxed consistency model, also known as the loose memory consistency model, is characterized by:
Within the same thread, access to the same atomic variable cannot be reordered (from the perspective of a single thread)
Apart from ensuring the atomicity of operations, there is no stipulation on the order of preceding and subsequent instructions, and the order in which other threads observe data changes may also be different (from the perspective of the entire program)
That is, different threads may observe the relaxed operations on a single atomic value in different orders.
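The classic use case that needs only these guarantees is a statistics counter; a minimal sketch (TIMES-style constants replaced by literals):

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> counter{0};

void work() {
    for (int i = 0; i < 1000; i++) {
        // atomicity and per-variable coherence are enough here;
        // no ordering with respect to other memory locations is required
        counter.fetch_add(1, std::memory_order_relaxed);
    }
}

int main() {
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    assert(counter.load(std::memory_order_relaxed) == 2000);
    return 0;
}
```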
Looseness can be measured along the following two dimensions:
How the requirements of program order are relaxed. Typically, this refers to read and write operations on different variables; for the same variable, read and write operations cannot be reordered. Program order requirements include:
read-read
read-write
write-read
write-write
How the requirements for write atomicity are relaxed. Models are differentiated based on whether they allow a read operation to return the value written by another processor before all cache copies have received the invalidation or update message produced by the write; in other words, whether a processor may read the written value before the write is visible to all other processors.
Through these two dimensions, the following relaxed strategies have been introduced:
Relaxing the write-read program order. Supported by TSO (Total Store Order)
Relaxing the write-write program order
Relaxing the read-read and read-write program order
Allowing early reads of values written by other processors
Allowing early reads of values written by the current processor
6.2.3 Total Store Order
Total Store Order (TSO) is a type of memory consistency model used in computer architecture to manage how memory operations (reads and writes) are ordered and observed by different parts of the system.
In a Total Store Order model:
Writes are not immediately visible to all processors: When a processor writes to memory, that write is not instantly visible to all other processors. There’s a delay because writes are first written to a store buffer unique to each processor.
Writes are seen in order: Even though there’s a delay in visibility, writes to the memory are seen by all processors in the same order. This is the “total order” part of TSO, which means that if Processor A sees Write X followed by Write Y, Processor B will also see Write X before Write Y.
Reads may bypass writes: If a processor reads a location that it has just written to, it may get the value from its store buffer (the most recent write) rather than the value that is currently in memory. This means a processor can see its writes immediately but may not see writes from other processors that happened after its own write.
Writes from a single processor are seen in the order issued: Writes by a single processor are observed in the order they were issued by that processor. If Processor A writes to memory location X and then to memory location Y, all processors will see the write to X happen before the write to Y.
This model is a compromise between strict ordering and performance. In a system that enforces strict ordering (like Sequential Consistency), every operation appears to happen in a strict sequence, which can be quite slow. TSO allows some operations to be reordered (like reads happening before a write is visible to all) for better performance while still maintaining a predictable order for writes, which is critical for correctness in many concurrent algorithms.
TSO is commonly used in x86 processors; it strikes a balance between the predictable behavior needed for ease of programming and the relaxed rules that allow high performance in practice.
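The classic store-buffering litmus test demonstrates the one reordering TSO permits (a write followed by a read of a different location); a sketch:

```cpp
#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

void thread1() {
    x.store(1, std::memory_order_relaxed);   // write x
    r1 = y.load(std::memory_order_relaxed);  // then read y
}

void thread2() {
    y.store(1, std::memory_order_relaxed);   // write y
    r2 = x.load(std::memory_order_relaxed);  // then read x
}

int main() {
    std::thread t1(thread1), t2(thread2);
    t1.join();
    t2.join();
    // Under TSO, each read may be served while the other thread's write
    // still sits in its store buffer, so r1 == 0 && r2 == 0 is possible.
    return 0;
}
```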
6.3 std::memory_order
std::memory_order_seq_cst: Provides a happens-before relationship.
std::memory_order_relaxed: Does NOT provide a happens-before relationship. Which specific relaxation strategies are observable depends on the hardware platform.
When you use std::memory_order_relaxed, it guarantees the following:
Sequential consistency for atomic operations on a single variable: If you perform multiple atomic operations on the same atomic variable using std::memory_order_relaxed, the result will be as if those operations were executed in some sequential order. This means that the final value observed by any thread will be a valid result based on the ordering of the operations.
Coherence: All threads will eventually observe the most recent value written to an atomic variable. However, the timing of when each thread observes the value may differ due to the relaxed ordering.
Atomicity: Atomic operations performed with std::memory_order_relaxed are indivisible. They are guaranteed to be performed without interruption or interference from other threads.
std::memory_order_acquire and std::memory_order_release: Provide a happens-before relationship.
When used together, std::memory_order_acquire and std::memory_order_release can establish a happens-before relationship between threads, allowing for proper synchronization and communication between them.
std::memory_order_acquire is a memory ordering constraint that provides acquire semantics. It ensures that no reads or writes in the current thread can be reordered before the acquire load, and that all writes the releasing thread performed before its release store are visible after the acquire.
std::memory_order_release is a memory ordering constraint that provides release semantics. It ensures that no reads or writes in the current thread can be reordered after the release store, so everything written before the release becomes visible to threads that perform a subsequent acquire on the same atomic variable.
```cpp
template <std::memory_order read_order, std::memory_order write_order>
void test_atomic_happens_before() {
    auto reader_thread = []() {
        for (auto i = 0; i < TIMES; i++) {
            // atomic read
            while (!atomic_data_ready.load(read_order))
                ;

            // normal read: atomic read happens-before normal read
            assert(data == EXPECTED_VALUE);

            data = INVALID_VALUE;
            atomic_data_ready.store(false, write_order);
        }
    };
    auto writer_thread = []() {
        for (auto i = 0; i < TIMES; i++) {
            while (atomic_data_ready.load(read_order))
                ;

            // normal write happens-before atomic write (reconstructed body)
            data = EXPECTED_VALUE;
            atomic_data_ready.store(true, write_order);
        }
    };
    // ... (thread setup and teardown elided)
}
```
```cpp
template <std::memory_order read_order, std::memory_order write_order>
bool test_reorder() {
    // control vars
    std::atomic<bool> control(false);
    std::atomic<bool> stop(false);
    std::atomic<bool> success(true);
    std::atomic<int32_t> finished_num = 0;

    auto round_process = [&control, &stop, &finished_num](auto&& process) {
        while (!stop) {
            // make t1 and t2 go through synchronously
            finished_num++;
            while (!stop && !control)
                ;

            process();

            // wait for next round
            finished_num++;
            while (!stop && control)
                ;
        }
    };

    auto control_process = [&control, &success, &finished_num](auto&& clean_process, auto&& check_process) {
        for (size_t i = 0; i < TIMES; i++) {
            // wait t1 and t2 at the top of the loop
            while (finished_num != 2)
                ;

            // clean up data
            finished_num = 0;
            clean_process();

            // let t1 and t2 go start
            control = true;

            // wait t1 and t2 finishing write operation
            while (finished_num != 2)
                ;

            // check assumption
            if (!check_process()) {
                success = false;
            }

            finished_num = 0;
            control = false;
        }
    };

    // main vars
    std::atomic<int32_t> flag1, flag2;
    std::atomic<int32_t> critical_num;

    // ... (thread setup and teardown elided)
}
```
```
test std::memory_order_seq_cst, std::memory_order_seq_cst, res=true
test std::memory_order_acquire, std::memory_order_release, res=false
test std::memory_order_relaxed, std::memory_order_relaxed, res=false
```
```cpp
template <std::memory_order read_order, std::memory_order write_order>
bool test_reorder() {
    // control vars and round_process/control_process scaffolding
    // identical to the previous test_reorder (omitted here)

    // main vars
    std::atomic<int32_t> data;
    std::atomic<int32_t> head;
    std::atomic<int32_t> read_val;

    auto process_1 = [&data, &head]() {
        data.store(2000, write_order);
        head.store(1, write_order);
    };
    auto process_2 = [&data, &head, &read_val]() {
        while (head.load(read_order) == 0)
            ;
        read_val = data.load(read_order);
    };
    auto clean_process = [&data, &head, &read_val]() {
        data = 0;
        head = 0;
        read_val = 0;
    };
    auto check_process = [&read_val]() { return read_val == 2000; };

    // ... (thread setup and teardown elided)
}
```
```
test std::memory_order_seq_cst, std::memory_order_seq_cst, res=true
test std::memory_order_acquire, std::memory_order_release, res=true
test std::memory_order_relaxed, std::memory_order_relaxed, res=true
```
```cpp
template <std::memory_order read_order, std::memory_order write_order>
bool test_reorder() {
    // control vars and round_process scaffolding identical to the previous
    // test_reorder; control_process now waits for three threads
    // (finished_num != 3) instead of two

    // main vars
    std::atomic<int32_t> a;
    std::atomic<int32_t> b;
    std::atomic<int32_t> reg;

    auto process_1 = [&a]() { a.store(1, write_order); };
    auto process_2 = [&a, &b]() {
        if (a.load(read_order) == 1) {
            b.store(1, write_order);
        }
    };
    auto process_3 = [&a, &b, &reg]() {
        if (b.load(read_order) == 1) {
            reg.store(a.load(read_order), write_order);
        }
    };
    auto clean_process = [&a, &b, &reg]() {
        a = 0;
        b = 0;
        reg = -1;
    };
    auto check_process = [&reg]() { return reg != 0; };

    // ... (thread setup and teardown elided)
}
```
```
test std::memory_order_seq_cst, std::memory_order_seq_cst, res=true
test std::memory_order_acquire, std::memory_order_release, res=true
test std::memory_order_relaxed, std::memory_order_relaxed, res=true
```
The lambda expression is a prvalue expression of a unique unnamed non-union non-aggregate class type, known as the closure type, which is declared (for the purposes of ADL) in the smallest block scope, class scope, or namespace scope that contains the lambda expression. The closure type has the following members; they cannot be explicitly instantiated, explicitly specialized, or (since C++14) named in a friend declaration.
```cpp
// recursiveLambda must be declared beforehand, e.g. std::function<void(int)> recursiveLambda;
// Must use a reference to capture itself
recursiveLambda = [&recursiveLambda](int x) {
    std::cout << x << std::endl;
    if (x > 0) recursiveLambda(x - 1);
};
```
A coroutine is a generalization of a function that can be exited and later resumed at specific points. The key difference from functions is that coroutines can maintain state between suspensions.
co_yield: Produces a value and suspends the coroutine. The coroutine can be later resumed from this point.
co_return: Ends the coroutine, potentially returning a final value.
co_await: Suspends the coroutine until the awaited expression is ready, at which point the coroutine is resumed.
A coroutine consists of:
A wrapper type
A type with the exact name promise_type inside the return type of the coroutine (the wrapper type); this type can be:
Type alias
A typedef
Directly declare an inner class
An awaitable type that comes into play once we use co_await
An iterator
Key Observation: A coroutine in C++ is a finite state machine (FSM) that can be controlled and customized by the promise_type.
Coroutine Classifications:
Task: A coroutine that does a job without returning a value.
Generator: A coroutine that does a job and returns a value (either by co_return or co_yield).
8.1 Overview of promise_type
The promise_type for coroutines in C++20 can have several member functions which the coroutine machinery recognizes and calls at specific times or events. Here’s a general overview of the structure and potential member functions:
Stored Values or State: These are member variables to hold state, intermediate results, or final values. The nature of these depends on the intended use of your coroutine.
Coroutine Creation:
auto get_return_object() -> CoroutineReturnObject: Defines how to obtain the return object of the coroutine (what the caller of the coroutine gets when invoking the coroutine).
Coroutine Lifecycle:
std::suspend_always/std::suspend_never initial_suspend() noexcept: Dictates if the coroutine should start executing immediately or be suspended right after its creation.
std::suspend_always/std::suspend_never final_suspend() noexcept: Dictates if the coroutine should be suspended after running to completion. If std::suspend_never is used, the coroutine ends immediately after execution.
void return_void() noexcept: Used for coroutines with a void return type. Indicates the end of the coroutine.
void return_value(ReturnType value): For coroutines that produce a result, this function specifies how to handle the value provided with co_return.
void unhandled_exception(): Invoked if there’s an unhandled exception inside the coroutine. Typically, you’d capture or rethrow the exception here.
Yielding Values:
std::suspend_always/std::suspend_never yield_value(YieldType value): Specifies what to do when the coroutine uses co_yield. You dictate here how the value should be handled or stored.
Awaiting Values:
auto await_transform(AwaitableType value) -> Awaiter: Transforms the expression after co_await. This is useful for custom awaitable types. For instance, it’s used to make this a valid awaitable in member functions.
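A minimal generator-style promise_type tying these pieces together (a sketch, not a production generator; Generator and count_to are illustrative names):

```cpp
#include <coroutine>
#include <exception>

struct Generator {
    struct promise_type {
        int current_value = 0;

        Generator get_return_object() {
            return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }  // start suspended
        std::suspend_always final_suspend() noexcept { return {}; }    // keep frame alive until destroy()
        std::suspend_always yield_value(int value) noexcept {
            current_value = value;  // store the co_yield-ed value for the caller
            return {};
        }
        void return_void() noexcept {}
        void unhandled_exception() { std::terminate(); }
    };

    std::coroutine_handle<promise_type> handle;
    ~Generator() {
        if (handle) handle.destroy();
    }
    // move-only semantics omitted for brevity
};

Generator count_to(int n) {
    for (int i = 0; i < n; i++) {
        co_yield i;
    }
}

// Usage: auto g = count_to(3); g.handle.resume();
//        g.handle.promise().current_value is now 0
```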
8.1.1 Awaiter
The awaiter in the C++ coroutine framework is a mechanism that allows fine-tuned control over how asynchronous operations are managed and how results are produced once those operations are complete.
Here’s an overview of the awaiter:
Role of the Awaiter:
The awaiter is responsible for defining the behavior of a co_await expression. It determines if the coroutine should suspend, what should be done upon suspension, and what value (if any) should be produced when the coroutine resumes.
Required Methods: The awaiter must provide the following three methods:
await_ready
Purpose: Determines if the coroutine needs to suspend at all.
Signature: bool await_ready() const noexcept
Return:
true: The awaited operation is already complete, and the coroutine shouldn’t suspend.
false: The coroutine should suspend.
await_suspend
Purpose: Dictates the actions that should be taken when the coroutine suspends.
Signature: commonly void await_suspend(std::coroutine_handle<> handle); it may also return bool or another std::coroutine_handle<> to control resumption.
handle: A handle to the currently executing coroutine. It can be used to later resume the coroutine.
await_resume
Purpose: Produces a value once the awaited operation completes and the coroutine resumes.
Signature: ReturnType await_resume() noexcept
Return: The result of the co_await expression. The type can be void if no value needs to be produced.
Workflow of the Awaiter:
Obtain the Awaiter: When a coroutine encounters co_await someExpression, it first needs to get an awaiter. The awaiter can be:
Directly from someExpression if it has an operator co_await.
Through an ADL (Argument Dependent Lookup) free function named operator co_await that takes someExpression as a parameter.
From the coroutine's promise_type via await_transform if neither of the above methods produces an awaiter.
Call await_ready: The coroutine calls the awaiter’s await_ready() method.
If it returns true, the coroutine continues without suspending.
If it returns false, the coroutine prepares to suspend.
Call await_suspend (if needed): If await_ready indicated the coroutine should suspend (by returning false), the await_suspend method is called with a handle to the current coroutine. This method typically arranges for the coroutine to be resumed later, often by setting up callbacks or handlers associated with the asynchronous operation.
Operation Completion and Coroutine Resumption: Once the awaited operation is complete and the coroutine is resumed, the awaiter’s await_resume method is called. The value it produces becomes the result of the co_await expression.
Built-in Awaiters:
std::suspend_always: The method await_ready always returns false, indicating that an await expression always suspends as it waits for its value
std::suspend_never: The method await_ready always returns true, indicating that an await expression never suspends
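A sketch of a trivial custom awaiter that always suspends and then produces a value (FortyTwoAwaiter is an illustrative name; a real awaiter would stash the handle and resume it when an asynchronous operation completes):

```cpp
#include <coroutine>

struct FortyTwoAwaiter {
    // Never ready: the coroutine always suspends first
    bool await_ready() const noexcept { return false; }

    // For illustration, resume immediately instead of scheduling a callback
    void await_suspend(std::coroutine_handle<> handle) noexcept {
        handle.resume();
    }

    // The result of the co_await expression
    int await_resume() noexcept { return 42; }
};

// Inside a coroutine: int value = co_await FortyTwoAwaiter{};  // value == 42
```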
8.2 Example
The Chat struct acts as a wrapper around the coroutine handle. It allows the main code to interact with the coroutine - by resuming it, or by sending/receiving data to/from it.
The promise_type nested within Chat is what gives behavior to our coroutine. It defines:
What happens when you start the coroutine (initial_suspend).
What happens when you co_yield a value (yield_value).
What happens when you co_await a value (await_transform).
What happens when you co_return a value (return_value).
What happens at the end of the coroutine (final_suspend).
Functionality:
Creating the Coroutine:
When Fun() is called, a new coroutine is started. Due to initial_suspend, it is suspended immediately before executing any code.
The coroutine handle (with the promise) is wrapped inside the Chat object, which is then returned to the caller (main function in this case).
Interacting with the Coroutine:
chat.listen(): Resumes the coroutine until the next suspension point. If co_yield is used inside the coroutine, the yielded value will be returned.
chat.answer(msg): Sends a message to the coroutine. If the coroutine is waiting for input using co_await, this will provide the awaited value and resume the coroutine.
Coroutine Flow:
The coroutine starts and immediately hits co_yield "Hello!\n";. This suspends the coroutine and the string "Hello!\n" is made available to the caller.
In main, after chat.listen(), it prints this message.
Then, chat.answer("Where are you?\n"); is called. Inside the coroutine, the message "Where are you?\n" is captured and printed because of the line std::cout << co_await std::string{};.
Finally, co_return "Here!\n"; ends the coroutine, and the string "Here!\n" is made available to the caller. This message is printed after the second chat.listen() in main.
```cpp
struct Chat {
    // promise_type as described above (definition elided here)

    // A: Shortcut for the handle type
    using Handle = std::coroutine_handle<promise_type>;
    // B
    Handle _handle;

    // C: Get the handle from the promise
    explicit Chat(promise_type* p) : _handle(Handle::from_promise(*p)) {}

    // D: Move only
    Chat(Chat&& rhs) : _handle(std::exchange(rhs._handle, nullptr)) {}

    // E: Caretaking, destroying the handle if needed
    ~Chat() {
        if (_handle) {
            _handle.destroy();
        }
    }

    // F: Activate the coroutine and wait for data
    std::string listen() {
        std::cout << " -- Chat::listen" << std::endl;
        if (!_handle.done()) {
            _handle.resume();
        }
        return std::move(_handle.promise()._msg_out);
    }

    // G: Send data to the coroutine and activate it
    void answer(std::string msg) {
        std::cout << " -- Chat::answer" << std::endl;
        _handle.promise()._msg_in = msg;
        if (!_handle.done()) {
            _handle.resume();
        }
    }
};
```
```cpp
int main() {
    std::vector<Foo> v;
    // Reserve to avoid reallocation
    v.reserve(3);

    std::cout << "\npush_back without std::move" << std::endl;
    // This move operation is possible because the object returned by getFoo()
    // is an rvalue, which is eligible for move semantics.
    v.push_back(getFoo());

    std::cout << "\npush_back with std::move (1)" << std::endl;
    v.push_back(std::move(getFoo()));

    // ... (remaining cases elided)
}
```
```
push_back without std::move
Foo::Foo()
Foo::Foo(Foo&&)

push_back with std::move (1)
Foo::Foo()
Foo::Foo(Foo&&)

push_back with std::move (2)
Foo::Foo()
Foo::Foo(Foo&&)

assign without std::move
Foo::Foo()
Foo::Foo()
Foo::operator=&&

assign with std::move
Foo::Foo()
Foo::operator=&&
```
11.2 Structured Bindings
Structured bindings were introduced in C++17 and provide a convenient way to destructure the elements of a tuple-like object or aggregate into individual variables.
Tuple-like objects in C++ include:
std::tuple: The standard tuple class provided by the C++ Standard Library.
std::pair: A specialized tuple with exactly two elements, also provided by the C++ Standard Library.
Custom user-defined types that mimic the behavior of tuples, such as structs with a fixed number of members.
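A brief sketch covering all three cases:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <tuple>

int main() {
    std::tuple<int, std::string> t{1, "one"};
    auto [num, name] = t;  // destructure a std::tuple

    struct Point { int x; int y; };
    auto [x, y] = Point{3, 4};  // destructure an aggregate

    std::map<int, std::string> m{{1, "one"}};
    for (const auto& [key, value] : m) {  // destructure the std::pair elements
        std::cout << key << " -> " << value << std::endl;
    }
    return 0;
}
```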
RAII (Resource Acquisition Is Initialization) ties resource acquisition to object initialization. Typical examples include std::lock_guard and defer. Simply put, resources are acquired in the object's constructor and released in its destructor; since the compiler automatically inserts the constructor and destructor calls, this lifts the bookkeeping burden from the developer.
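A minimal sketch of the pattern (File is an illustrative name):

```cpp
#include <cstdio>

// RAII wrapper around a FILE*: the resource is acquired in the
// constructor and released in the destructor
class File {
public:
    explicit File(const char* path) : _fp(std::fopen(path, "r")) {}
    ~File() {
        if (_fp) std::fclose(_fp);
    }
    // non-copyable to avoid double-close
    File(const File&) = delete;
    File& operator=(const File&) = delete;

    FILE* get() const { return _fp; }

private:
    FILE* _fp;
};
```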
```cpp
private:
    inline static Delegate _s_delegate{Foo::do_something};

    // Use of class template 'Delegate' requires template arguments
    // Argument deduction not allowed in non-static class member (clang: auto_not_allowed)
    // Delegate _delegate;
};
```
```cpp
// Using a pointer to a 2D array
void yourFunction1(bool (*rows)[9]) {
    // Access elements of the 2D array
    for (int i = 0; i < 9; i++) {
        for (int j = 0; j < 9; j++) {
            std::cout << rows[i][j] << " ";
        }
        std::cout << std::endl;
    }
}

// Using a reference to a 2D array
void yourFunction2(bool (&rows)[9][9]) {
    // Access elements of the 2D array
    for (int i = 0; i < 9; i++) {
        for (int j = 0; j < 9; j++) {
            std::cout << rows[i][j] << " ";
        }
        std::cout << std::endl;
    }
}

int main() {
    bool rows[9][9] = {
        // Initialize the array as needed
    };

    // Pass the local variable to the functions
    yourFunction1(rows);
    yourFunction2(rows);

    return 0;
}
```
```cpp
#include <cstdarg>

int sum(int count, ...) {
    int result = 0;
    va_list args;
    va_start(args, count);
    for (int i = 0; i < count; i++) {
        result += va_arg(args, int);
    }
    va_end(args);
    return result;
}

// Usage: sum(3, 1, 2, 3) returns 6
```
A variable-length array (VLA) is a feature not supported by standard C++. However, some compilers do support VLAs, natively in C and as an extension in C++.
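A sketch of a VLA as a compiler extension (GCC and Clang accept this in C++ mode, typically with a -Wvla warning; MSVC rejects it):

```cpp
#include <cstddef>

void use_vla(std::size_t n) {
    int buffer[n];  // VLA: not standard C++, accepted as a GCC/Clang extension
    for (std::size_t i = 0; i < n; i++) {
        buffer[i] = static_cast<int>(i);
    }
}
```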
```
  ____ The allocated block ____
 /                             \
+--------+--------------------+
| Header | Your data area ... |
+--------+--------------------+
          ^
          |
          +-- The address you are given
```
14.2 Do parameter types require lvalue or rvalue references
14.3 Does the return type require lvalue or rvalue references