I’m writing a real-time audio application in C++ that does a beefy amount of processing on 10ms chunks of audio. I’m using miniaudio to handle the audio I/O. To benchmark how long the processing takes, I record timings from the following snippet inside the audio callback:
const auto begin = std::chrono::high_resolution_clock::now();
process_audio(input, output);
const auto end = std::chrono::high_resolution_clock::now();
const float elapsed_microseconds = static_cast<float>(std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count());
A single call to process_audio usually takes around 2.5ms, but it occasionally dips to 1.7ms or spikes up to 4ms.
However, if I run the exact same measurement in a non-real-time scenario (loading an audio file and processing it block by block in a plain for loop, with no interrupt/callback involved), I get timings solidly around 1.8ms, with very little variance.
At first I thought it might be an OS thing, but I’m seeing the same behavior on Windows, macOS, and Linux. I also thought it could be something specific to miniaudio, but the same thing happens with RtAudio as well.
What is causing the processing to be slower and less predictable inside the callback function, and is there anything I can do to improve it?