I am trying to implement Golomb coding, but I don’t understand how it’s tuned to obtain an optimal code.
It is said that
Golomb coding uses a tunable parameter M to divide an input value into two parts: q, the result of a division by M, and r, the remainder. The quotient is sent in unary coding, followed by the remainder in truncated binary encoding.
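To check my understanding, here is a minimal sketch of that scheme in Python (golomb_encode is just a name I made up; the truncated-binary part follows the description above):

```python
def golomb_encode(x, m):
    """Golomb-code a nonnegative integer x with parameter m, returning
    the bits as a string: unary quotient, then truncated binary remainder."""
    q, r = divmod(x, m)
    bits = "1" * q + "0"        # quotient in unary, 0-terminated
    b = m.bit_length() - 1      # floor(log2(m))
    u = (1 << (b + 1)) - m      # the first u remainders get only b bits
    if r < u:
        bits += format(r, "b").zfill(b) if b else ""
    else:
        bits += format(r + u, "b").zfill(b + 1)
    return bits

# m = 7: remainder 0 takes 2 bits, remainders 1..6 take 3 bits
print(golomb_encode(9, 7))   # q=1, r=2 -> "10" + "011"
```

With m a power of 2 this reduces to Rice coding (every remainder gets exactly b bits), and with m = 1 it degenerates to pure unary.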
I don’t understand how I should choose the parameter M – I can’t see how the explanation on Wikipedia relates to actual data. I believe it should be related to statistical moments; is that true?
For example, take this dataset:
{3,4,4,4,3,1,2,2,3,1,2,1,4,1,2,2,2,2,1,1,2,2,1}
I believe M should be very small for this kind of data. I bet it’s either 1 or 2. Its mean is ~2.2 and its standard deviation is ~1.1. My intuition would tell me to choose 2.
Another dataset here:
{2,7,11,19,6,2,6,13,11,1,5,2,19,7,6,9,6,7,2,4,5,12,3}
This time the mean is ~7.2 and standard deviation is ~5.0.
Is 7 the right value in this case? And should I prefer a Rice code (use 8, as it is a power of 2) if I get a value like 7?
I understand that division will be easier if I use Rice coding, but are there any benefits in NOT using it? I mean, 3 bits will be used for the remainder in either case, so how could a pure Golomb code be more optimal?
One more nuance: Golomb coding is defined for nonnegative integers. If I have positive integers instead, should I store x−1 instead of x? It would change a lot for the first of the datasets above.
The Golomb-Rice algorithm doesn’t specify how to find the optimal parameter, and in the general case you will have to infer the posterior probability of symbol occurrences in the dataset to estimate the optimal value of M. Note that it is a common choice to have M = 2^k, a power of 2, as coding in that case is simple, and then to discuss k instead. The search for the optimal k is usually done exhaustively over the dataset.
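For instance, here is a minimal sketch of such an exhaustive search for the Rice case M = 2^k (the names rice_length and best_rice_k are mine, not from any library):

```python
def rice_length(x, k):
    """Bits needed for x >= 0 under Rice coding with M = 2**k:
    a unary quotient (q ones plus a terminating zero) and k remainder bits."""
    return (x >> k) + 1 + k

def best_rice_k(data, max_k=16):
    """Return (k, total_bits) minimising the total code length over the sample."""
    return min(
        ((k, sum(rice_length(x, k) for x in data)) for k in range(max_k + 1)),
        key=lambda pair: pair[1],
    )

# e.g. the first dataset from the question, shifted by -1 so it starts at 0;
# prints the winning k and the total bit cost
sample = [3,4,4,4,3,1,2,2,3,1,2,1,4,1,2,2,2,2,1,1,2,2,1]
print(best_rice_k([x - 1 for x in sample]))
```

For general M you would swap in the truncated-binary remainder length and scan M itself; the search stays linear in the number of candidate parameters.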
Following the above, you can now see why the Wikipedia article you link to doesn’t refer to any actual data, but says the following by way of example:
Given an alphabet of two symbols, or a set of two events, P and Q, with probabilities p and (1 − p) respectively, where p ≥ 1/2, Golomb coding can be used to encode runs of zero or more P’s separated by single Q’s. In this application, the best setting of the parameter M is the nearest integer to −1/log₂(p).
In the case where the probability p is known, i.e. where we have a prior equivalent to the posterior distribution, it can be shown that there is a best value for M, but not otherwise.
Practically, what this means is that you have to have a fairly good idea of what your eventual dataset will look like, possibly further increasing confidence with re-sampling methods (e.g. bootstrapping), and then search exhaustively¹ for the optimal parameter value in the sample datasets – the k that minimises the expected code length. You then use an average of the k values you’ve settled on for future datasets. Some implementations store a table of input dataset characteristics and adaptively select (i.e. change) the code parameter when they detect that the input pattern shifts. For example, it is common to evaluate the running characteristics of the input sequence (mean, variance, etc.) and to reselect the parameter when thresholds are breached, as in the sketch below.
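As a sketch of that adaptive flavour, here is roughly the parameter rule used in LOCO-I/JPEG-LS, where k is recomputed from a running sum A and count N before each symbol (this is my paraphrase, not the exact standardised procedure):

```python
def adaptive_rice_ks(values):
    """Yield (x, k) pairs where k is chosen, before coding each x, as the
    smallest k with N * 2**k >= A (A = running sum, N = running count).
    k thus tracks roughly log2 of the running mean, so the parameter
    follows shifts in the input statistics."""
    a, n = 0, 0
    for x in values:
        k = 0
        while (n << k) < a:      # smallest k such that N * 2**k >= A
            k += 1
        yield x, k               # encode x with parameter k, then update
        a += x
        n += 1
```

A decoder can maintain the same A and N from its decoded output, so no side information about the parameter needs to be transmitted.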
¹ There are precise bounds that can be established for the value of k for input integers that follow certain distributions, saving on the fullness of the exhaustive search. However, for many real datasets the distribution of the integers cannot be easily bundled into a convenient approximation of a uniform random variable – for example, have you heard of Benford’s law? Notwithstanding, in practical implementations a close-to-optimal selection of the parameter would rarely differ significantly in outcome from the optimal selection.