I am transferring a file from one server to another. To estimate how long it would take to move several GB of data over the network, I am pinging the destination IP and taking the average round-trip time.

For example, I ping 172.26.26.36 and get an average round-trip time of x ms. Since ping sends 32 bytes of data each time, I estimate the network speed to be 2*32*8 (bits) / x = y Mbps (the multiplication by 2 is because x is the round-trip time, so the one-way time is x/2).

So transferring 5 GB of data should take roughly 5000/y seconds.
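Here is the calculation I'm doing, as a rough sketch with the unit conversions written out (the RTT and sizes are example numbers, not measurements):

```python
# My estimate: treat the one-way time of ping's 32-byte payload as a
# measure of network speed, then scale up to the full transfer.
rtt_ms = 20.0                    # x: average round-trip time from ping (example value)
payload_bits = 32 * 8            # ping sends 32 bytes each time
file_bits = 5 * 1000**3 * 8      # the 5 GB I want to transfer

one_way_seconds = (rtt_ms / 2) / 1000
speed_bps = payload_bits / one_way_seconds     # i.e. 2*32*8 / x, converted to bits per second
estimated_seconds = file_bits / speed_bps

print(f"Estimated speed: {speed_bps / 1e6:.3f} Mbit/s")
print(f"Estimated transfer time: {estimated_seconds:.0f} s")
```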
Is my method of estimating the transfer time correct?
If you spot a mistake, or know a better method, please share.
Short answer: your method is wrong.
There is a large difference between bandwidth and latency.
Latency only tells you how “responsive” a server is. It doesn’t tell you how much data you can send through a connection.
A connection can have high latency and still transfer data at high speed and in high volume.
Consider the bandwidth of a Boeing 747 filled with DVDs: it has horrible latency, but the amount of data it can transport is far greater than any internet connection you have.
Latency and bandwidth are largely orthogonal things.
They do relate in that mechanisms like TCP window sizes, combined with latency, put an upper ceiling on the effective bandwidth of long-haul traffic.
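To illustrate that ceiling: a single TCP connection can have at most one window of unacknowledged data in flight per round trip, so its throughput is bounded by roughly window size / RTT. A minimal sketch with made-up numbers:

```python
# Rough upper bound that TCP window size and latency impose on one
# connection's throughput. The numbers below are illustrative, not measured.
window_bytes = 64 * 1024      # e.g. a 64 KiB receive window
rtt_seconds = 0.100           # e.g. 100 ms round-trip time

max_throughput_bps = window_bytes * 8 / rtt_seconds
print(f"Upper bound: {max_throughput_bps / 1e6:.1f} Mbit/s")   # ~5.2 Mbit/s
```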
The only way to effectively determine the throughput of a connection is to actually transfer data over it and measure. From that measurement you can estimate how long the full transfer would take, assuming conditions stay the same.
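For example, you could time the copy of a representative sample file and extrapolate. A minimal sketch, assuming scp is how you will move the data; the user, host, and file names are placeholders:

```python
import os
import subprocess
import time

SAMPLE_FILE = "sample_100mb.bin"          # a representative test file (placeholder)
DEST = "user@172.26.26.36:/tmp/"          # destination (placeholder)
FULL_TRANSFER_BYTES = 5 * 1000**3         # the 5 GB you actually want to move

sample_bytes = os.path.getsize(SAMPLE_FILE)

# Time the sample transfer.
start = time.monotonic()
subprocess.run(["scp", SAMPLE_FILE, DEST], check=True)
elapsed = time.monotonic() - start

# Extrapolate, assuming conditions stay the same.
throughput = sample_bytes / elapsed                 # bytes per second
estimate = FULL_TRANSFER_BYTES / throughput         # seconds

print(f"Measured throughput: {throughput * 8 / 1e6:.1f} Mbit/s")
print(f"Estimated time for 5 GB: {estimate / 60:.1f} minutes")
```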