// Function to print the current stack trace
void print_stack_trace() {
    struct backtrace_state* state =
        backtrace_create_state(NULL, BACKTRACE_SUPPORTS_THREADS, error_callback, NULL);
    backtrace_full(state, 0, callback, error_callback, NULL);
}

// Sample function that calls another function to generate a stack trace
void my_function() {
    print_stack_trace();
}

int main() {
    my_function();
    return 0;
}
gcc -o main main.cpp -lstdc++ -std=gnu++17 -lbacktrace -g
./main
// Initialize context to the current machine state.
unw_getcontext(&context);
unw_init_local(&cursor, &context);

// Walk the stack up, one frame at a time.
while (unw_step(&cursor) > 0) {
    unw_word_t offset, pc;
    char sym[256];

    if (unw_get_reg(&cursor, UNW_REG_IP, &pc)) {
        std::cout << "Error: cannot read program counter" << std::endl;
        break;
    }

    if (unw_get_proc_name(&cursor, sym, sizeof(sym), &offset) == 0) {
        int status;
        // Attempt to demangle the symbol
        char* demangled_name = abi::__cxa_demangle(sym, nullptr, nullptr, &status);

        std::cout << "0x" << std::hex << pc << ": ";

        if (status == 0 && demangled_name) {
            std::cout << demangled_name << " (+0x" << std::hex << offset << ")" << std::endl;
            free(demangled_name);  // Free the demangled name
        } else {
            // If demangling failed, print the mangled name
            std::cout << sym << " (+0x" << std::hex << offset << ")" << std::endl;
        }
    } else {
        std::cout << " -- error: unable to obtain symbol name for this frame" << std::endl;
    }
}
}
# -DUNW_LOCAL_ONLY is mandatory, otherwise link errors may occur, like:
#   undefined reference to `_Ux86_64_init_local'
#   undefined reference to `_Ux86_64_get_reg'
#   undefined reference to `_Ux86_64_get_proc_name'
#   undefined reference to `_Ux86_64_step'
gcc -o main main.cpp -lstdc++ -std=gnu++17 -lunwind -DUNW_LOCAL_ONLY
./main
Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables. As an experimental feature, Bison can also generate IELR(1) or canonical LR(1) parser tables. Once you are proficient with Bison, you can use it to develop a wide range of language parsers, from those used in simple desk calculators to complex programming languages.
Boost.Hana is a library for metaprogramming in C++ that provides a modern, powerful, and easy-to-use set of tools for developers. It is part of the Boost libraries, which are known for their high-quality, peer-reviewed, and portable C++ libraries. Here are some key points about Boost.Hana:
Purpose: Boost.Hana aims to provide a comprehensive metaprogramming framework for C++, allowing developers to perform computations at compile-time with an expressive and efficient interface.
Features:
Compile-time Algorithms: Includes a wide range of algorithms for manipulating types and values at compile time.
Heterogeneous Containers: Supports containers that can hold elements of different types, which is useful for various advanced C++ programming techniques.
Integrations: Works seamlessly with other parts of the C++ standard library and other Boost libraries.
Usage: It is used for tasks such as type introspection, compile-time computations, and advanced type manipulations, making it a valuable tool for developers dealing with complex C++ codebases.
Performance: Boost.Hana is designed with performance in mind, leveraging modern C++ features to minimize compile-time overhead and runtime inefficiencies.
Modern C++: Embraces the latest standards of C++ (C++11 and beyond), making use of features such as constexpr, variadic templates, and template metaprogramming to provide a robust and future-proof library.
std::ostream& operator<<(std::ostream& os, Color color) {
    switch (color) {
        case RED:   os << "RED";   break;
        case BLACK: os << "BLACK"; break;
        case WHITE: os << "WHITE"; break;
    }
    return os;
}
Boost.Stacktrace provides several options for printing stack traces, depending on the underlying technology used to capture the stack information:
BOOST_STACKTRACE_USE_BACKTRACE: uses the backtrace function from the GNU C Library, which is available on most UNIX-like systems including Linux.
BOOST_STACKTRACE_USE_ADDR2LINE: uses the addr2line utility from GNU binutils to convert addresses into file names and line numbers, providing more detailed information.
BOOST_STACKTRACE_USE_NOOP: doesn’t capture the stack trace at all. This can be used when you want to disable stack tracing completely.
BOOST_STACKTRACE_USE_WINDBG: utilizes the Windows Debug Help Library when compiling for Windows.
This approach works fine with gcc-10.3.0, but fails with newer versions such as gcc-11.3.0 and gcc-12.3.0; the cause is unknown so far.
Compile:
# -ldl: link libdl
# -g: generate debug information
gcc -o main main.cpp -DBOOST_STACKTRACE_USE_ADDR2LINE -lstdc++ -std=gnu++17 -Wl,-wrap=__cxa_throw -ldl -g
./main
Output:
Boost version: 1.84.0
 0# boost::stacktrace::basic_stacktrace<std::allocator<boost::stacktrace::frame> >::basic_stacktrace() at /usr/local/include/boost/stacktrace/stacktrace.hpp:129
 1# foo(int) at /root/main.cpp:9
 2# foo(int) at /root/main.cpp:10
 3# foo(int) at /root/main.cpp:10
 4# foo(int) at /root/main.cpp:10
 5# foo(int) at /root/main.cpp:10
 6# foo(int) at /root/main.cpp:10
 7# main at /root/main.cpp:36
 8# 0x00007F14FEF4E24A in /lib/x86_64-linux-gnu/libc.so.6
 9# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
10# _start in ./main
2.4.2 With libbacktrace
Compile:
# -ldl: link libdl
# -g: generate debug information
# -lbacktrace: link libbacktrace
gcc -o main main.cpp -DBOOST_STACKTRACE_USE_BACKTRACE -lstdc++ -std=gnu++17 -Wl,-wrap=__cxa_throw -ldl -lbacktrace -g
./main
Output:
Boost version: 1.84.0
 0# __wrap___cxa_throw at /root/main.cpp:21
 1# foo(int) at /root/main.cpp:9
 2# foo(int) at /root/main.cpp:10
 3# foo(int) at /root/main.cpp:10
 4# foo(int) at /root/main.cpp:10
 5# foo(int) at /root/main.cpp:10
 6# foo(int) at /root/main.cpp:10
 7# main at /root/main.cpp:36
 8# __libc_start_call_main at ../sysdeps/nptl/libc_start_call_main.h:74
 9# __libc_start_main at ../csu/libc-start.c:347
10# _start in ./main
static void BM_StringCreation(benchmark::State& state) {
    for (auto _ : state)
        std::string empty_string;
}
// Register the function as a benchmark
BENCHMARK(BM_StringCreation);

// Define another benchmark
static void BM_StringCopy(benchmark::State& state) {
    std::string x = "hello";
    for (auto _ : state)
        std::string copy(x);
}
BENCHMARK(BM_StringCopy);
Breakpad is a library and tool suite that allows you to distribute an application to users with compiler-provided debugging information removed, record crashes in compact “minidump” files, send them back to your server, and produce C and C++ stack traces from these minidumps. Breakpad can also write minidumps on request for programs that have not crashed.
It includes the following tools:
minidump_stackwalk: This tool processes minidump files to produce a human-readable stack trace. It uses symbol files to translate memory addresses into function names, file names, and line numbers.
minidump_stackwalk <minidump_file> <symbol_path>
microdump_stackwalk: Similar to minidump_stackwalk, but specifically designed to process microdump files, which are smaller and contain less information than full minidumps.
// Write the table to a Parquet file
std::string file_path = "data.parquet";
std::shared_ptr<arrow::io::FileOutputStream> outfile;
ARROW_RETURN_NOT_OK(arrow::io::FileOutputStream::Open(file_path).Value(&outfile));
ARROW_RETURN_NOT_OK(parquet::arrow::WriteTable(*table, arrow::default_memory_pool(), outfile, 3));
// Read the Parquet file back into a table
std::shared_ptr<arrow::io::ReadableFile> infile;
ARROW_RETURN_NOT_OK(arrow::io::ReadableFile::Open(file_path, arrow::default_memory_pool()).Value(&infile));
parquet::FileDecryptionProperties::Builder file_decryption_props_builder;
// Why is the footer key required if set_plaintext_footer is called?
file_decryption_props_builder.footer_key(footer_key);
if (!column_use_footer_key) {
    parquet::ColumnPathToDecryptionPropertiesMap decrypted_columns;
    {
        parquet::ColumnDecryptionProperties::Builder column_decryption_props_builder("int_column");
        column_decryption_props_builder.key(int_column_key);
        decrypted_columns["int_column"] = column_decryption_props_builder.build();
    }
    {
        parquet::ColumnDecryptionProperties::Builder column_decryption_props_builder("double_column");
        column_decryption_props_builder.key(double_column_key);
        decrypted_columns["double_column"] = column_decryption_props_builder.build();
    }
    {
        parquet::ColumnDecryptionProperties::Builder column_decryption_props_builder("str_column");
        column_decryption_props_builder.key(str_column_key);
        decrypted_columns["str_column"] = column_decryption_props_builder.build();
    }
    file_decryption_props_builder.column_keys(decrypted_columns);
}
std::shared_ptr<parquet::FileDecryptionProperties> file_decryption_props = file_decryption_props_builder.build();
git clone -b v0.16.0 https://github.com/apache/thrift.git
cd thrift
./bootstrap.sh
# you can build a specific lib by using --with-xxx or --without-xxx
./configure --with-cpp=yes --with-java=no --with-python=no --with-py3=no --with-nodejs=no
make -j $(( (cores=$(nproc))>1?cores/2:1 ))
sudo make install
echo '/usr/local/lib' | sudo tee /etc/ld.so.conf.d/thrift.conf && sudo ldconfig
cat > example.thrift << 'EOF'
namespace cpp example
mkdir -p jni_demo/build
cd jni_demo
cat > HelloWorld.java << 'EOF'
public class HelloWorld {
    public void greet() {
        System.out.println("Hello from Java!");
    }

    public static void main(String[] args) {
        new HelloWorld().greet();
    }
}
EOF
JNI cannot work smoothly with a fat jar built by the spring-boot-maven-plugin: the class paths inside the jar start with BOOT-INF/ or BOOT-INF/lib/, so the default classloader cannot find them.
The following code works with org.springframework.boot:spring-boot-maven-plugin:2.1.4.RELEASE; there is no guarantee that it works with other versions, because the Java API may vary.
/**
 * getJNIEnv: A helper function to get the JNIEnv* for the given thread.
 * If no JVM exists, then one will be created. JVM command line arguments
 * are obtained from the LIBHDFS_OPTS environment variable.
 *
 * Implementation note: we rely on POSIX thread-local storage (TLS).
 * This allows us to associate a destructor function with each thread, that
 * will detach the thread from the Java VM when the thread terminates. If we
 * fail to do this, it will cause a memory leak.
 *
 * However, POSIX TLS is not the most efficient way to do things. It requires a
 * key to be initialized before it can be used. Since we don't know if this key
 * is initialized at the start of this function, we have to lock a mutex first
 * and check. Luckily, most operating systems support the more efficient
 * __thread construct, which is initialized by the linker.
 *
 * @param: None.
 * @return The JNIEnv* corresponding to the thread.
 */
JNIEnv* getJNIEnv(void)
Build libhdfs: download Hadoop (tag rel/release-3.4.0 works well) and point an environment variable at the project path: export HADOOP_PATH=/path/to/hadoop. If HDFS itself is not needed, we also need to comment out some of the HDFS class-initialization code:
#define ASSERT(expr)                        \
    if (!(expr)) {                          \
        if (env->ExceptionOccurred()) {     \
            env->ExceptionDescribe();       \
            exit(1);                        \
        }                                   \
    }
int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 10; ++i) {
        threads.emplace_back([i]() {
            auto* env = getJNIEnv();

            // Get java.lang.System
            jclass cls_system = env->FindClass("java/lang/System");
            ASSERT(cls_system != nullptr);

            // Get out field
            jfieldID f_out = env->GetStaticFieldID(cls_system, "out", "Ljava/io/PrintStream;");
            jobject obj_out = env->GetStaticObjectField(cls_system, f_out);

            // Get java.io.PrintStream class and its println method
            jclass cls_stream = env->FindClass("java/io/PrintStream");
            ASSERT(cls_stream != nullptr);
            jmethodID m_println = env->GetMethodID(cls_stream, "println", "(Ljava/lang/String;)V");
            ASSERT(m_println != nullptr);

            // Invoke
            std::string content = "Hello world, this is thread: " + std::to_string(i);
            jstring jcontent = env->NewStringUTF(content.c_str());
            env->CallVoidMethod(obj_out, m_println, jcontent);
            env->DeleteLocalRef(jcontent);
        });
    }
    for (int i = 0; i < 10; ++i) {
        threads[i].join();
    }

    return 0;
}
EOF
# libjvm.so may be in ${JAVA_HOME}/lib/server or ${JAVA_HOME}/jre/lib/amd64/server
JVM_SO_PATH=$(find $(readlink -f ${JAVA_HOME}) -name "libjvm.so")
JVM_SO_PATH=${JVM_SO_PATH%/*}
Here is an assumption: the JVM uses the signal SIGSEGV internally to indicate that the GC process should run. You can check this with gdb or lldb attached to a JNI program, during which you may be interrupted frequently by SIGSEGV.
JNI doesn’t support the wildcard *, so you need to generate all jar paths, join them with :, and pass the result to the -Djava.class.path= option.
7.5 FAQ
7.5.1 java.lang.IllegalMonitorStateException
JNI cannot work well with coroutines like brpc's bthread, for several reasons (in pthread mode):
Threading Model Mismatch: JNI expects a specific threading model that aligns with Java’s managed threads. Coroutines, especially those implemented with bthread, may have different threading and execution models, leading to mismatches and unexpected behavior. JNI methods need to be called from specific threads, and coroutine libraries might not guarantee that.
Native Resource Management: Coroutines can suspend and resume execution, which complicates resource management in native code. Native code called via JNI might expect resources to be available for the duration of a method call, but coroutines can pause execution, potentially leading to resource leaks or other issues if the native code does not handle this correctly.
Synchronization Issues: Coroutines introduce their own scheduling and synchronization mechanisms, which can interfere with the synchronization primitives used in native code. This can lead to race conditions, deadlocks, or inconsistent state between the Java and native parts of an application.
Stack Management: Coroutines often manipulate the call stack in ways that traditional threading models do not. This can create problems for JNI, which relies on the standard call stack for invoking methods and managing local references. If a coroutine library like bthread changes the stack layout, it can disrupt JNI’s operation.
Callback Handling: JNI often involves callbacks from native code to Java, which can be problematic when using coroutines. The coroutine may not be in a state to handle a callback if it is suspended, leading to missed callbacks or crashes.
Context Switching Overhead: Coroutines are designed to minimize context switching overhead, but integrating them with JNI can reintroduce this overhead, negating the benefits of using coroutines in the first place.
Complexity in Debugging: The combination of Java, JNI, and coroutine libraries like bthread can make debugging very difficult. The interaction between the different layers can create complex bugs that are hard to reproduce and fix.
find_package(Poco REQUIRED COMPONENTS Foundation Net XML JSON)
target_link_libraries(${PROJECT_NAME} Poco::Foundation Poco::Net Poco::XML Poco::JSON)
EOF
2024.05.30 08:23:56.061053 <Information> main_1: Hello, World!
2024.05.30 08:23:56.061093 <Information> main_2: Hello, World!
2024-05-30 08:23:56 MultiChannelLogger: This is an informational message.
2024-05-30 08:23:56 MultiChannelLogger: This is a warning message.
find_package(Poco REQUIRED COMPONENTS Foundation Net XML JSON)
target_link_libraries(${PROJECT_NAME} Poco::Foundation Poco::Net Poco::XML Poco::JSON)
EOF
int main() {
    // JSON string to parse
    std::string jsonString = R"({"name":"John Doe","age":30,"isDeveloper":true})";

    // Parse the JSON string
    Poco::JSON::Parser parser;
    Poco::Dynamic::Var result = parser.parse(jsonString);
    Poco::JSON::Object::Ptr jsonObject = result.extract<Poco::JSON::Object::Ptr>();

    // Extract values
    std::string name = jsonObject->getValue<std::string>("name");
    int age = jsonObject->getValue<int>("age");
    bool isDeveloper = jsonObject->getValue<bool>("isDeveloper");
        std::ostream& ostr = response.send();
        ostr << "<html><head><title>Hello</title></head>";
        ostr << "<body><h1>Hello from Poco HTTP Server</h1></body></html>";
        ostr.flush();
    }
};

class HelloRequestHandlerFactory : public Poco::Net::HTTPRequestHandlerFactory {
public:
    Poco::Net::HTTPRequestHandler* createRequestHandler(const Poco::Net::HTTPServerRequest& request) override {
        return new HelloRequestHandler();
    }
};

class HTTPServerApp : public Poco::Util::ServerApplication {
protected:
    int main(const std::vector<std::string>& args) {
        Poco::Net::ServerSocket svs({"0.0.0.0", 9080});  // set the server port here
        /// Sets the following default values:
        ///   - timeout:              60 seconds
        ///   - keepAlive:            true
        ///   - maxKeepAliveRequests: 0
        ///   - keepAliveTimeout:     10 seconds
        Poco::Net::HTTPServer server(new HelloRequestHandlerFactory(), svs, new Poco::Net::HTTPServerParams());

        server.start();
        std::cout << "HTTP Server started on port 9080." << std::endl;
        // Wait for CTRL-C or kill
        waitForTerminationRequest();
        server.stop();
        return Application::EXIT_OK;
    }
};

int main(int argc, char** argv) {
    std::thread server_thread([argc, argv]() {
        HTTPServerApp app;
        app.run(argc, argv);
    });
int main() {
    // Create an MD5 engine
    Poco::MD5Engine md5;

    // Create a DigestOutputStream that writes to the MD5 engine
    Poco::DigestOutputStream dos(md5);

    // Input string to hash
    std::string input = "Hello, World!";

    // Write the input string to the DigestOutputStream
    dos << input;
    dos.close();

    // Get the digest as a string of hexadecimal numbers
    const Poco::DigestEngine::Digest& digest = md5.digest();
    std::string hash(Poco::DigestEngine::digestToHex(digest));

    // Print the hash
    std::cout << "MD5 hash of '" << input << "' is: " << hash << std::endl;

    return 0;
}
EOF
gcc -o main main.cpp -lstdc++ -std=gnu++17 -lPocoFoundation
./main
8.2 sqlpp11
How to integrate:
target_link_libraries(xxx sqlpp11)
How to create cpp header files:
cat > /tmp/foo.sql << 'EOF'
CREATE TABLE foo (
    id bigint,
    name varchar(50),
    hasFun bool
);
EOF
# Include subdirectories
add_subdirectory(contrib/sqlpp11)
add_subdirectory(contrib/SQLiteCpp)

# Link against libraries
target_link_libraries(${PROJECT_NAME} sqlpp11)
target_link_libraries(${PROJECT_NAME} SQLiteCpp sqlite3 pthread dl)
EOF
# ddl
cat > users.ddl << 'EOF'
CREATE TABLE users (
    id INTEGER NOT NULL,
    first_name TEXT NOT NULL,
    last_name TEXT NOT NULL,
    age INTEGER NOT NULL,
    PRIMARY KEY(id)
);
EOF
# Create headers
contrib/sqlpp11/scripts/ddl2cpp users.ddl users Test
Sqlite3 debug: Preparing: 'CREATE TABLE users ( id INTEGER NOT NULL, first_name TEXT NOT NULL, last_name TEXT NOT NULL, age INTEGER NOT NULL, PRIMARY KEY(id))'
INSERT INTO users (id,first_name,last_name,age) VALUES(10000001,'Emma','Watson',15)
Sqlite3 debug: Preparing: 'INSERT INTO users (id,first_name,last_name,age) VALUES(10000001,'Emma','Watson',15)'
INSERT INTO users (id,first_name,last_name,age) VALUES(10000002,'Leo','Grant',18)
Sqlite3 debug: Preparing: 'INSERT INTO users (id,first_name,last_name,age) VALUES(10000002,'Leo','Grant',18)'
SELECT users.id,users.first_name,users.last_name,users.age FROM users
Sqlite3 debug: Preparing: 'SELECT users.id,users.first_name,users.last_name,users.age FROM users'
Sqlite3 debug: Constructing bind result, using handle at 0x1f7de20
Sqlite3 debug: Accessing next row of handle at 0x1f7de20
Sqlite3 debug: binding integral result 0 at index: 0
Sqlite3 debug: binding text result at index: 1
Sqlite3 debug: binding text result at index: 2
Sqlite3 debug: binding integral result 0 at index: 3
-> id=10000001, firstName=Emma, lastName=Watson, age=15
Sqlite3 debug: Accessing next row of handle at 0x1f7de20
Sqlite3 debug: binding integral result 10000001 at index: 0
Sqlite3 debug: binding text result at index: 1
Sqlite3 debug: binding text result at index: 2
Sqlite3 debug: binding integral result 15 at index: 3
-> id=10000002, firstName=Leo, lastName=Grant, age=18
Sqlite3 debug: Accessing next row of handle at 0x1f7de20
SELECT users.id,users.first_name,users.last_name,users.age FROM users WHERE (users.age<=20)
Sqlite3 debug: Preparing: 'SELECT users.id,users.first_name,users.last_name,users.age FROM users WHERE (users.age<=20)'
Sqlite3 debug: Constructing bind result, using handle at 0x1f7de40
Sqlite3 debug: Accessing next row of handle at 0x1f7de40
Sqlite3 debug: binding integral result 0 at index: 0
Sqlite3 debug: binding text result at index: 1
Sqlite3 debug: binding text result at index: 2
Sqlite3 debug: binding integral result 0 at index: 3
-> id=10000001, firstName=Emma, lastName=Watson, age=15
Sqlite3 debug: Accessing next row of handle at 0x1f7de40
Sqlite3 debug: binding integral result 10000001 at index: 0
Sqlite3 debug: binding text result at index: 1
Sqlite3 debug: binding text result at index: 2
Sqlite3 debug: binding integral result 15 at index: 3
-> id=10000002, firstName=Leo, lastName=Grant, age=18
Sqlite3 debug: Accessing next row of handle at 0x1f7de40
DELETE FROM users WHERE (users.id=10000001)
Sqlite3 debug: Preparing: 'DELETE FROM users WHERE (users.id=10000001)'
# link_directories must be placed before add_executable or add_library
include_directories(contrib/mysql-connector-c-6.1.11-linux-glibc2.12-x86_64/include)
link_directories(contrib/mysql-connector-c-6.1.11-linux-glibc2.12-x86_64/lib)
# Include subdirectories
add_subdirectory(contrib/sqlpp11)
add_subdirectory(contrib/mariadb-connector-c)

# Include header files
target_include_directories(${PROJECT_NAME} PUBLIC "${CMAKE_SOURCE_DIR}/contrib/mariadb-connector-c/include")
target_include_directories(${PROJECT_NAME} PUBLIC "${CMAKE_BINARY_DIR}/contrib/mariadb-connector-c/include")

# Link against libraries
target_link_libraries(${PROJECT_NAME} sqlpp11 mariadbclient)
EOF
# ddl
cat > users.ddl << 'EOF'
CREATE TABLE users (
    id BIGINT NOT NULL,
    first_name VARCHAR(16) NOT NULL,
    last_name VARCHAR(16) NOT NULL,
    age SMALLINT NOT NULL,
    PRIMARY KEY(id)
);
EOF
# Create headers
contrib/sqlpp11/scripts/ddl2cpp users.ddl users Test
# Download the source code
mkdir -p contrib
wget -O contrib/curl-8.8.0.tar.gz https://curl.se/download/curl-8.8.0.tar.gz
tar -zxf contrib/curl-8.8.0.tar.gz -C contrib
# Include subdirectories
add_subdirectory(contrib/curl-8.8.0)

# Link against libraries
target_link_libraries(${PROJECT_NAME} CURL::libcurl)
EOF
cat > main.cpp << 'EOF'
#include <curl/curl.h>

#include <iostream>
#include <string>

// This callback function gets called by libcurl as soon as there is data received that needs to be saved.
// The size of the data pointed to by *ptr is size multiplied by nmemb; it will not be zero terminated.
// Return the number of bytes actually taken care of. If that amount differs from the amount passed to
// your function, it'll signal an error to the library.
size_t WriteCallback(void* contents, size_t size, size_t nmemb, void* userp) {
    ((std::string*)userp)->append((char*)contents, size * nmemb);
    return size * nmemb;
}

int main() {
    CURL* curl;
    CURLcode res;
    std::string readBuffer;

    // Initialize a CURL session
    curl = curl_easy_init();
    if (!curl) {
        std::cerr << "Failed to initialize CURL session" << std::endl;
        return 1;
    }

    struct curl_slist* headers = NULL;  // Initialize header list
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, "Custom-Header: CustomValue");

    // Set the URL for the request
    curl_easy_setopt(curl, CURLOPT_URL, "http://jsonplaceholder.typicode.com/posts");
    // Set the custom headers
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    // Set the callback function to save the data
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
    // Set the data pointer to save the response
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);

    // Perform the HTTP request
    res = curl_easy_perform(curl);
    if (res != CURLE_OK) {
        std::cerr << "curl_easy_perform() failed: " << curl_easy_strerror(res) << std::endl;
    } else {
        std::cout << readBuffer << std::endl;
    }

    // Cleanup header list
    curl_slist_free_all(headers);
    // Cleanup CURL session
    curl_easy_cleanup(curl);
    return 0;
}
EOF

# compile and run
cmake -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
cmake --build build -j $(( (cores=$(nproc))>1?cores/2:1 ))
build/curl_demo