Is there a more general form of convolution for Entropy-Encoding?
I have some one-dimensional waveform data, and I wish to exploit the correlation between nearby points before entropy-encoding it. The most basic operation that does this is delta encoding (the first-order finite difference), which is simply a [1, -1] filter (convolution). Since I'll be performing Golomb-Rice encoding afterwards, I need the operation to minimize both the mean and the standard deviation of the resulting residual distribution. I also need it to be reversible and not too computationally intensive.
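For concreteness, here is a minimal sketch of the delta step described above, written with plain Python lists (the function names are mine, not from any library). Keeping the first sample verbatim makes the transform exactly invertible via a running sum:

```python
def delta_encode(x):
    # [1, -1] filter: keep the first sample verbatim so the
    # transform is exactly reversible.
    return [x[0]] + [x[i] - x[i - 1] for i in range(1, len(x))]

def delta_decode(d):
    # Inverse: a running (cumulative) sum undoes the finite difference.
    out = [d[0]]
    for v in d[1:]:
        out.append(out[-1] + v)
    return out

# On smooth data the residuals cluster near zero, which is what
# Golomb-Rice wants:
wave = [10, 12, 13, 13, 12, 10, 9, 9]
residuals = delta_encode(wave)          # [10, 2, 1, 0, -1, -2, -1, 0]
assert delta_decode(residuals) == wave  # lossless round trip
```

Note that for true 8-bit storage the differences can exceed the signed 8-bit range, so in practice the residuals are usually stored modulo 256 or in a wider type.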
How should I go about arithmetic encoding a sequence of signed 8-bit integers?
So I’m self-studying information theory, and I think I somewhat grasp the theory behind Huffman coding, entropy, etc. However, I’m wondering how I should practically apply arithmetic encoding to, for example, an 8-bit integer waveform in Python. I understand that arithmetic coding has had innumerable optimizations over the years, so is there a good library I could use? Should I even be using Python in the first place?
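To make the question concrete, here is a toy arithmetic coder I sketched using exact rational arithmetic (`fractions.Fraction`), which sidesteps all the finite-precision renormalization tricks that real implementations need. The function names and the empirical-count model are my own illustration, not a standard API; a production coder would use fixed-width integers and bit-level output instead:

```python
from fractions import Fraction
from collections import Counter

def build_model(data):
    # Assign each symbol a subinterval of [0, 1) proportional
    # to its empirical frequency in the data.
    counts = Counter(data)
    total = len(data)
    model, low = {}, Fraction(0)
    for sym in sorted(counts):
        p = Fraction(counts[sym], total)
        model[sym] = (low, low + p)
        low += p
    return model

def encode(data, model):
    # Successively narrow [low, low + width) by each symbol's subinterval.
    low, width = Fraction(0), Fraction(1)
    for sym in data:
        a, b = model[sym]
        low, width = low + a * width, (b - a) * width
    return low  # any value in the final interval identifies the sequence

def decode(code, model, length):
    out = []
    low, width = Fraction(0), Fraction(1)
    for _ in range(length):
        # Rescale the code into the current interval, find which
        # symbol's subinterval it falls in, then narrow as in encode().
        x = (code - low) / width
        for sym, (a, b) in model.items():
            if a <= x < b:
                out.append(sym)
                low, width = low + a * width, (b - a) * width
                break
    return out

samples = [-3, 0, 0, 1, 0, -3, 1, 0]   # signed 8-bit values
model = build_model(samples)
code = encode(samples, model)
assert decode(code, model, len(samples)) == samples
```

This is exact but impractically slow for long inputs (the fractions grow without bound), which is exactly why real coders renormalize with integer arithmetic; it is meant only to show the interval-narrowing idea.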