I have TCP data being transferred from a Vagrant VM to the host. The interface on the host shows up as vboxnet0. On the host, I am trying to reproduce the effect of being connected to a slow link. To do that, I created an HTB qdisc on the host interface (vboxnet0) and attached netem to it to simulate delays. I quickly realized that these qdiscs act on egress traffic rather than ingress traffic, so this does not produce the intended effect on the queue (I am measuring the build-up and release of packets in the queue under certain link conditions).
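For context, this is roughly the egress-only setup I started with on vboxnet0 (the handles, the 1mbit rate, and the 100ms delay are just the example values I have been working with):

tc qdisc add dev vboxnet0 root handle 1: htb default 1                      # HTB at the root of the host-side interface
tc class add dev vboxnet0 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit   # bandwidth-limiting class
tc qdisc add dev vboxnet0 parent 1:1 handle 2: netem delay 100ms limit 64   # netem for the delay and queue limit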
To get around this issue, I have tried using IFB and configuring the qdiscs so that ingress traffic on a particular interface is treated as egress traffic on the IFB interface. However, I can’t seem to understand the flow of the packets here. So far, this is what I have tried:
tc qdisc add dev vboxnet0 ingress                                                                    # Adding the ingress qdisc on the VM's interface on the host
tc filter add dev vboxnet0 parent ffff: protocol ip prio 1 u32 match u32 0 0 action mirred egress redirect dev ifb0   # Redirecting all ingress traffic to ifb0
tc qdisc add dev ifb0 root handle 1: htb default 10                                                  # Adding an HTB qdisc to the IFB interface
tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit                                # Creating a class for bandwidth limiting
tc qdisc add dev ifb0 parent 1:1 handle 2: netem delay 100ms limit 64                                # Attaching a netem qdisc to the class to introduce delay and a queue limit
The problem is that I don’t see any traffic going through the netem qdisc. I’m sure my understanding of how the traffic flows here is incorrect. How should I approach this problem?
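By "no traffic" I mean that the packet and byte counters reported by tc stay at zero for the netem qdisc. I am checking them with something like:

tc -s qdisc show dev ifb0    # per-qdisc sent packets/bytes and drop counters
tc -s class show dev ifb0    # per-class counters for the HTB class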