I’m currently studying the Reno congestion control algorithm in the Linux kernel and I’m a bit confused about how it handles consecutive packet losses.
From my understanding, Reno uses an Additive Increase/Multiplicative Decrease (AIMD) approach. When packet loss is detected, the congestion window (cwnd) is halved. However, packet losses often occur consecutively. If the cwnd is halved for each packet loss, it would keep reducing continuously until it becomes very small, potentially as low as 2.
Consider the following scenario:
At time t1, a data packet that exceeds the current network capacity is sent.
At time t2, the sender detects the loss of this packet (for example, via duplicate ACKs or a retransmission timeout).
During the interval from t1 to t2, the cwnd keeps increasing. Since the network capacity is already exceeded, the packets sent during this period will also be dropped. If the cwnd were halved for each of these losses, it would shrink repeatedly, yet in practice halving it once should be enough to relieve the congestion.

I have looked through the Linux kernel source code but haven’t found a clear explanation of how Reno handles consecutive packet losses. Could someone explain how Reno deals with this issue? Does it really halve the cwnd for each packet loss, or is there a mechanism to prevent it from reducing too aggressively?
Thanks in advance for your help!