I’m getting PCM_16BIT audio data from a microphone and want to present a live spectrum graph from it. As I understand it, each sample is 2 bytes of signed data, so the sample values range from -2^15 to 2^15 - 1 (i.e. -32768 to 32767).
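For context, this is roughly how I’m unpacking the raw bytes into signed 16-bit samples (just a sketch assuming little-endian data read from Android’s AudioRecord; the names are placeholders):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: convert a little-endian PCM_16BIT byte buffer into signed 16-bit samples.
// `rawBytes` is assumed to come from AudioRecord.read(byte[], ...).
short[] toSamples(byte[] rawBytes) {
    short[] samples = new short[rawBytes.length / 2];
    ByteBuffer.wrap(rawBytes)
            .order(ByteOrder.LITTLE_ENDIAN)
            .asShortBuffer()
            .get(samples);
    return samples;
}
```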
That will be my input data to an FFT similar to this one, which transforms the data in place.
Do I need to normalize my input data to [-1, 1] by dividing each sample by 2^15, or can I just feed the raw values in as-is? I suppose this could be purely a math question, but maybe it depends on the algorithm. I’m hoping someone has experience with this.
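In other words, what I have in mind by "normalize" is something like this (a sketch, assuming the FFT accepts a double[]; the method name is just a placeholder):

```java
// Sketch: scale signed 16-bit samples into [-1, 1) before handing them to the FFT.
double[] normalize(short[] samples) {
    double[] out = new double[samples.length];
    for (int i = 0; i < samples.length; i++) {
        // Divide by 2^15, so -32768..32767 maps into [-1, 1).
        out[i] = samples[i] / 32768.0;
    }
    return out;
}
```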