Using Python and the sounddevice library, I am trying to play a signal on an output and record on an input simultaneously, and then plot the two signals on top of each other in a way that represents what is happening in the real world.
I’m using the `Stream` class, and my callback function currently looks like this:
```python
def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    if not hasattr(callback, "lastframe"):
        callback.lastframe = 0  # initialize on the first call
    lastframe = callback.lastframe
    end = lastframe + frames
    size = len(out[lastframe:end])
    outdata[:size, 0] = out[lastframe:end]
    outdata[size:] = 0  # zero-fill the tail of the last (partial) block
    outsig['signal'] = np.append(outsig['signal'], outdata[:, 0])
    outsig['time'] = np.append(outsig['time'], np.linspace(
        time.outputBufferDacTime,
        time.outputBufferDacTime + (frames - 1) / 48e3, frames))
    callback.lastframe = end
    insig['signal'] = np.append(insig['signal'], indata[:, 0])
    insig['time'] = np.append(insig['time'], np.linspace(
        time.inputBufferAdcTime,
        time.inputBufferAdcTime + (frames - 1) / 48e3, frames))
    if size < frames:
        raise sd.CallbackStop
```
I know this callback isn’t perfect, but from my point of view it should be a good proof of concept of what I’m trying to do.
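For reference, here is roughly how I drive the stream (a minimal sketch; the 48 kHz sampling rate, the placeholder test signal, and the `out`, `outsig`, and `insig` globals are assumptions matching the callback above):

```python
import threading

import numpy as np
import sounddevice as sd

fs = 48000  # sampling rate assumed by the time vectors in the callback

# Placeholder test signal to play: one second of a 440 Hz tone
t = np.arange(fs) / fs
out = 0.5 * np.sin(2 * np.pi * 440 * t)

# Buffers the callback appends to
outsig = {'signal': np.array([]), 'time': np.array([])}
insig = {'signal': np.array([]), 'time': np.array([])}

finished = threading.Event()
with sd.Stream(samplerate=fs, channels=1, callback=callback,
               finished_callback=finished.set):
    finished.wait()  # block until the callback raises CallbackStop
```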
I read `time.outputBufferDacTime` and `time.inputBufferAdcTime` to know when the first sample of each block will be played (or was recorded); then, knowing the sampling frequency and the block size, I can build a time vector for the output and input signals.
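After the stream finishes, I overlay the two signals against their respective time vectors (a minimal sketch of what I mean by plotting them on top of each other):

```python
import matplotlib.pyplot as plt

plt.plot(outsig['time'], outsig['signal'], label='output (DAC time)')
plt.plot(insig['time'], insig['signal'], label='input (ADC time)')
plt.xlabel('stream time [s]')
plt.ylabel('amplitude')
plt.legend()
plt.show()
```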
I’m close to the results I’m expecting, but sometimes the next block doesn’t start where the previous block ended (see plot below).
These artefacts seem to come from my code: I can’t hear them on the output, and they aren’t synchronized between the input and output, so I’m assuming they are not present in the “real world”.
Any idea where this could come from?