First of all, apologies if this is posted on the wrong site. I thought it was more suitable here than on SO because the question is quite generic, more an algorithm question than a specific programming question.
Now, let me describe the scenario:
I have a cloud server that, due to some requirements, has to connect to another server in Head Office via a VPN. There are two public IP addresses the server can connect to, let’s say IP_A and IP_B, served by two different ISPs.
Now, ISP A and ISP B are not epitomes of stability; their connections often become flaky, though rarely at the same time. So I’ve written a daemon that periodically tests connection quality using ping. If 4 pings in a row fail, the daemon disconnects the currently connected VPN and switches over to the other IP address.
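For concreteness, the current logic is roughly equivalent to this Python sketch (the target host, interval, and `switch_vpn` are placeholders rather than my actual daemon, and the ping flags assume Linux iputils):

```python
import subprocess
import time

PING_TARGET = "10.0.0.1"   # placeholder: the Head Office endpoint to probe
CHECK_INTERVAL = 5         # seconds between probes (placeholder)
FAIL_LIMIT = 4             # consecutive failures before switching

def ping_ok(host: str) -> bool:
    """Send one ping (Linux iputils flags); True if it got a reply."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def switch_vpn() -> None:
    """Placeholder: tear down the current tunnel and dial the other IP."""
    ...

consecutive_fails = 0
while True:
    if ping_ok(PING_TARGET):
        consecutive_fails = 0          # any success resets the streak
    else:
        consecutive_fails += 1
        if consecutive_fails >= FAIL_LIMIT:
            switch_vpn()
            consecutive_fails = 0
    time.sleep(CHECK_INTERVAL)
```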
However, looking through the log, I sometimes see a series of ping failures that aren’t necessarily 4 in a row. Example: fail – success – success – fail – success – success (repeating for some time). That looks like a sure sign of instability, but my simple 4-in-a-row rule will never catch it.
I’m thinking of calculating a ‘decaying instability score’, so that when instability reaches a certain threshold, the VPN switch happens.
My early thought would be to do it like this (a sketch follows the list):
- For every failed ping, increase the score by E1^(M*N), where E1 and M are tuning constants and N is the current number of fails-in-a-row
- For every successful ping, decrease the score by a constant E2
- If a connection switch happens, reset the score to 0
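In code, the update rule would look something like this (a minimal sketch; all the constants are placeholder values I would still have to tune):

```python
E1 = 2.0          # base of the failure penalty (placeholder)
M = 1.0           # how fast the penalty grows with the streak (placeholder)
E2 = 0.5          # recovery credit per successful ping (placeholder)
THRESHOLD = 10.0  # score that triggers a switch (placeholder)

score = 0.0
fail_streak = 0

def on_ping_result(success: bool) -> bool:
    """Update the instability score; return True if the VPN should switch."""
    global score, fail_streak
    if success:
        fail_streak = 0
        score = max(0.0, score - E2)        # recover, but never below 0
    else:
        fail_streak += 1
        score += E1 ** (M * fail_streak)    # penalty grows with the streak
    if score >= THRESHOLD:
        score = 0.0                         # reset after the switch
        fail_streak = 0
        return True
    return False
```

With these placeholder numbers, the fail–success–success pattern above nets +1 per cycle (a +2 penalty minus two 0.5 recoveries) and trips the threshold after roughly ten cycles, while four straight failures contribute 2 + 4 + 8 + 16 and cross it by the third ping. The key constraint is that E1^M must exceed 2*E2, otherwise the alternating pattern never accumulates.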
Do you think this would be a good idea? Or perhaps you have a better strategy for calculating the “instability score”?