I am comparing different implementations of a threadpool. One approach, which uses locks and condition variables, is substantially (~100 microseconds) slower than another, lock-free implementation.
Of course, I can rationalize why the lock-free implementation is faster than the one with locks and condition variables, but I want to get to the heart of the matter and obtain data that compares the two approaches (beyond the total runtime).
A snippet of the slower implementation is:
void ThreadMain(std::stop_token stoken, ThreadPool* pool,
                std::atomic_flag& start,
                std::vector<ThreadPool::timestamp>& stamps) {
  while (!stoken.stop_requested()) {
    std::unique_lock<std::mutex> lock(pool->mutex);
    // Sleep until the main thread sets this worker's start flag.
    pool->mutex_condition.wait(lock, [&] { return start.test(); });
    // Record the moment this worker becomes useful again.
    stamps.push_back(high_resolution_clock::now());
    if (stoken.stop_requested()) {
      return;
    }
    // Claim task indices until none are left.
    int i{0};
    while ((i = pool->idx.fetch_sub(1) - 1) > -1) {
      (*(pool->task))(pool->ctx, i);
    }
    start.clear();
    // The last thread to finish wakes the main thread.
    if ((pool->completed_tasks.fetch_add(1) + 1) == pool->threads.size()) {
      pool->completed.notify_one();
    }
  }
}
void ThreadPool::QueueTask(ThreadPool::function* function, void* context,
                           int r) {
  {
    std::unique_lock<std::mutex> lock(mutex);
    task = function;
    ctx = context;
    completed_tasks = 0;
    idx.store(r, order);  // ensure that every thread knows the number of tasks
    for (std::atomic_flag& start : starts) {
      start.test_and_set();
    }
  }
  mutex_condition.notify_all();  // wake the workers
  // Reference point: the workers have just been woken.
  timestamps[0].push_back(high_resolution_clock::now());
  // The main thread helps with the work.
  int i{0};
  while ((i = idx.fetch_sub(1, order) - 1) > -1) {
    (*(function))(context, i);
  }
  {
    std::unique_lock<std::mutex> lock(mutex);
    // Wait until every worker has finished its share.
    completed.wait(
        lock, [this] { return completed_tasks.load() == threads.size(); });
    for (std::atomic_flag& flag : starts) {
      flag.clear();
    }
    completed_tasks = 0;
  }
}
The main thread accepts a new workload in ThreadPool::QueueTask: it stores the function pointer, copies the context pointer, sets the start flag for each worker, and finally wakes them all with mutex_condition.notify_all(). After all this is done, I take the current time with timestamps[0].push_back(high_resolution_clock::now());. The worker threads cycle through void ThreadMain(std::stop_token stoken, ThreadPool* pool, std::atomic_flag& start, std::vector<ThreadPool::timestamp>& stamps). A worker has to wake from its slumber when it passes pool->mutex_condition.wait(lock, [&] { return start.test(); });, and I define the time when it becomes useful again with stamps.push_back(high_resolution_clock::now());. The duration between the main thread waking the workers and the workers starting work is about 100 microseconds, and these 100 microseconds account for the runtime difference between the fast and the slow implementation of the threadpool.
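To put a number on that per worker and per invocation, the timestamps that are already being recorded can be reduced to wake-up latencies. Below is a minimal sketch of what I mean; it assumes that timestamps[0] holds the main thread's reference points (taken right after notify_all), that timestamps[1..N] hold the workers' stamps with one entry per QueueTask call, and that ThreadPool::timestamp is a high_resolution_clock::time_point. The helper name ReportWakeLatencies is made up for this example.

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

using timestamp = std::chrono::high_resolution_clock::time_point;

// Prints, for every worker and every QueueTask call, the time between the
// main thread's reference stamp and the worker's own stamp.
void ReportWakeLatencies(const std::vector<std::vector<timestamp>>& timestamps) {
  const std::vector<timestamp>& reference = timestamps[0];
  for (std::size_t worker = 1; worker < timestamps.size(); ++worker) {
    const std::vector<timestamp>& stamps = timestamps[worker];
    for (std::size_t call = 0; call < reference.size() && call < stamps.size(); ++call) {
      const auto latency = std::chrono::duration_cast<std::chrono::nanoseconds>(
          stamps[call] - reference[call]);
      std::printf("worker %zu, call %zu: %lld ns\n", worker, call,
                  static_cast<long long>(latency.count()));
    }
  }
}

A distribution of these values (rather than a single total) would already show whether the 100 microseconds are paid by every worker on every wake-up or only by a few stragglers.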
I would like to know exactly what happens in these 100 microseconds. I can think of two options: using perf or tracing. My guess is that looking at the stack traces of the worker threads between the main thread calling notify_all and the workers passing pool->mutex_condition.wait(lock, [&] { return start.test(); }); would be the most informative, but that is just a guess, and I am not sure how to implement it.
The relevant part of the application runs for about one millisecond, and I work on a Linux test system. What can I do to document why one approach is slower than the other?
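One option I am considering is ftrace with user-space markers, so that my own events appear interleaved with the kernel's scheduler events. A minimal sketch of what I have in mind (the helper name EmitTraceMarker is made up, and I assume tracefs is mounted at /sys/kernel/tracing and writable by the process):

#include <fcntl.h>
#include <unistd.h>

#include <cstring>

// Writes a message into the kernel trace buffer; each write shows up as a
// tracing_mark_write line, interleaved with sched_waking/sched_wakeup/
// sched_switch events when those are enabled.
void EmitTraceMarker(const char* message) {
  static const int fd = open("/sys/kernel/tracing/trace_marker", O_WRONLY);
  if (fd >= 0) {
    const ssize_t written = write(fd, message, std::strlen(message));
    (void)written;  // best effort, tracing only
  }
}

The idea would be to call EmitTraceMarker("queue_task: notify_all") right after mutex_condition.notify_all() in QueueTask and EmitTraceMarker("worker: awake") right after the wait in ThreadMain, record the run with something like trace-cmd record -e sched_switch -e sched_wakeup -e sched_waking ./benchmark (with ./benchmark standing in for the test binary), and read the result with trace-cmd report or KernelShark. Alternatively, perf sched record followed by perf sched timehist reports per-wakeup scheduling delays without touching the code. Would either of these be the right way to document the difference?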