I am receiving messages that carry latency information (LI, {date, latency}).

For statistics (approximate approaches are fine; the stats monitor will be read every 6-60 seconds), I want to monitor:
- (A = Average) the average latency of incoming messages over the last 60 seconds
- (R = Rate) the count of messages exceeding a threshold T, divided by the total count of messages, over the last 60 seconds

I also need to generate an alert when the messages exceeding the threshold T within the last I seconds (0 <= I < 2048) reach a count greater than C.
- I = 0 means "from the beginning" (not a sliding window)
- When an alert is generated, I reset the counters and the interval.

What data structures and algorithms do you recommend for this problem? (My message receiving rate is 400 messages per second.)
Since you say that approximations will work for you, I would pick a time interval such as 1 second and treat it as a quantum. I would record the information of interest from every event during the quantum, and at the end of the quantum I would summarize everything that happened and discard the details, in preparation for recording the next quantum.

The summaries generated for each quantum would be stored in a queue for the running-average calculations. A time quantum of one second and a history requirement of 2048 seconds mean that the queue never needs to be more than 2048 entries long.
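A minimal sketch of this idea in C++ (the field choice, the QuantumSummary name, and the fixed 2048-entry ring are my assumptions, sized to the 2048-second bound from the question):

```cpp
#include <cstddef>
#include <cstdint>

// Everything worth keeping about one one-second quantum after the
// per-event details are discarded (field choice is an assumption).
struct QuantumSummary {
    std::uint32_t messageCount  = 0;    // messages seen in this second
    double        latencySum    = 0.0;  // sum of their latencies
    std::uint32_t overThreshold = 0;    // messages with latency > T
};

// Ring of the last 2048 one-second summaries: at 400 msg/s this
// replaces ~800k raw events with 2048 small records.
constexpr std::size_t kHistory = 2048;
QuantumSummary history[kHistory];
std::size_t head = 0;                   // slot for the current second

void onMessage(double latency, double thresholdT) {
    QuantumSummary& q = history[head];
    q.messageCount  += 1;
    q.latencySum    += latency;
    if (latency > thresholdT) q.overThreshold += 1;
}

void onQuantumEnd() {                   // call once per wall-clock second
    head = (head + 1) % kHistory;
    history[head] = QuantumSummary{};   // fresh, empty quantum
}
```

All window statistics (averages, rates, the alert count) can then be computed from these summaries instead of from the raw events.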
You appear to be willing to afford ten times the computing resources necessary to tackle the problem, so you can certainly compute averages from scratch if you wish (advice: do that for testing anyway). But if you want a highly efficient algorithm for sliding-window average calculations, it is as follows:

Let there be a queue Q of length N which can grow up to M. For the first M insertions, N grows from zero until it reaches M; after that, each insertion causes an item to be tossed from the other end, so that N never exceeds M.

Let there be a running total T initialized to 0.

Each time an item I is inserted (possibly causing an item X to be tossed):
- Add the value of I to T.
- If N == M (meaning an item X is being tossed), subtract the value of X from T.
- The new average is T/N.
So, the running average can be computed without revisiting all values in the queue, that is, in O(1) instead of O(N).
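A minimal sketch of that algorithm in C++ (RunningAverage is my own name; here M = 60, as if one per-second summary value were inserted per second for the 60-second monitors):

```cpp
#include <cstddef>
#include <deque>
#include <initializer_list>
#include <iostream>

// Sliding-window average in O(1) per insertion: keep a running total T
// and, once the window is full, subtract the value that falls out.
class RunningAverage {
public:
    explicit RunningAverage(std::size_t m) : m_(m) {}

    // Insert one value; returns the average over the last N <= M values.
    double insert(double value) {
        total_ += value;
        q_.push_back(value);
        if (q_.size() > m_) {        // N == M: an item X is tossed
            total_ -= q_.front();
            q_.pop_front();
        }
        return total_ / q_.size();   // T / N
    }

private:
    std::size_t m_;                  // maximum window length M
    std::deque<double> q_;           // the queue Q
    double total_ = 0.0;             // the running total T
};

int main() {
    RunningAverage avg(60);
    for (double v : {5.0, 7.0, 6.0})
        std::cout << avg.insert(v) << '\n';  // prints 5, 6, 6
}
```

Note that during the first M insertions the average is taken over the N values seen so far, exactly as described above.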
For the case I=0 (from the beginning), the fastest way to compute the average is as follows. Quantize your latency data into discrete interval indices, for example:
- 0ms-1ms = 1
- 1ms-2ms = 2
- …
- 40ms-50ms = k
- 50ms-70ms = k+1
- …
- >5000ms = n (also used for messages that never got a response)
The less likely a measured latency value is in the real world, the wider its interval should be. Inserting a latency value into this map looks like this:
- discreteLatencyCount[indexForDiscreteLatency(measuredLatency)]++;
The message count is calculated by:
- count = 0;
- for (i = 1 to n) { count += discreteLatencyCount[i]; }

The average latency is then the weighted sum over the buckets, divided by that count:
- avg = 0.0;
- for (i = 1 to n) { avg += weightOfDiscreteLatencyFor[i] * discreteLatencyCount[i]; }
- avg /= count;

For a simple solution, weightOfDiscreteLatencyFor[i] is the midpoint of interval i; otherwise it depends on your statistical distribution. Try to record a fairly large dataset and tune the weights to minimize the real error on that dataset.
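Putting the pieces together, here is a compilable sketch of the I=0 case (the bucket edges, the midpoint weights, and the linear-scan indexForDiscreteLatency are illustrative placeholders; tune them against recorded data as suggested above):

```cpp
#include <cstddef>
#include <cstdint>
#include <initializer_list>
#include <iostream>

// Illustrative bucket upper edges in ms; widen the intervals where
// values are rare. Bucket i (1-based, as above) covers
// [edges[i-1], edges[i]); the last bucket catches ">5000ms".
const double edges[] = {0, 1, 2, 5, 10, 20, 50, 100, 500, 5000, 1e18};
const std::size_t n  = sizeof(edges) / sizeof(edges[0]) - 1;  // bucket count

std::uint64_t discreteLatencyCount[n + 1] = {};  // index 0 unused
double        weightOfDiscreteLatencyFor[n + 1];

std::size_t indexForDiscreteLatency(double ms) {  // linear scan; n is tiny
    for (std::size_t i = 1; i <= n; ++i)
        if (ms < edges[i]) return i;
    return n;                        // >5000ms, also for "never responded"
}

int main() {
    for (std::size_t i = 1; i <= n; ++i)          // simple choice: midpoints
        weightOfDiscreteLatencyFor[i] = (edges[i - 1] + edges[i]) / 2.0;
    weightOfDiscreteLatencyFor[n] = edges[n - 1]; // open-ended last bucket

    for (double ms : {0.4, 3.0, 3.5, 42.0})       // sample insertions
        discreteLatencyCount[indexForDiscreteLatency(ms)]++;

    std::uint64_t count = 0;
    double        sum   = 0.0;
    for (std::size_t i = 1; i <= n; ++i) {
        count += discreteLatencyCount[i];
        sum   += weightOfDiscreteLatencyFor[i] * discreteLatencyCount[i];
    }
    std::cout << "count=" << count << " avg=" << sum / count << "ms\n";
}
```

With 10-20 buckets these scans stay trivially cheap even at 400 messages per second.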
Working with fixed sliding windows works like this too, but you have to subtract the outgoing events (those leaving the sliding window) from your counters. This implementation is a good solution if you need fast (effectively constant-time) computation of avg and count.
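That subtraction can reuse the one-second quantum idea from the first answer; a sketch of this construction (my own, not spelled out in the answer): keep one small histogram per second plus running totals for the whole window, and when a second leaves the window, subtract its histogram.

```cpp
#include <cstdint>
#include <cstring>

constexpr int kBuckets = 10;   // same bucket layout as the histogram above
constexpr int kWindow  = 60;   // 60-second sliding window

std::uint64_t perSecond[kWindow][kBuckets] = {};  // ring of per-second histograms
std::uint64_t window60[kBuckets]           = {};  // running totals for the window
int           cur = 0;                            // slot of the current second

void onMessage(int bucket) {           // bucket = discrete index of the latency
    perSecond[cur][bucket] += 1;
    window60[bucket]       += 1;
}

void onSecondBoundary() {              // call once per second
    cur = (cur + 1) % kWindow;         // the slot we reuse now holds the
    for (int b = 0; b < kBuckets; ++b) // counts leaving the window:
        window60[b] -= perSecond[cur][b];         // subtract them
    std::memset(perSecond[cur], 0, sizeof(perSecond[cur]));
}
```

avg and count for the window are then computed from window60[] exactly as in the I=0 case.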
Whether this solution is good enough depends on your case; alternatively, you could use a ready-made solution such as a round-robin database. Please also consider locking issues in multithreaded environments.

I came to this solution after a colleague in the telco industry teased me with this question a few years ago; after I presented it, he told me they had implemented it the same way.