I am trying to set up a filtering bridge that decides what to forward, and what to allow in (filter chain), based on which physical interface a packet came in on. My mental model was that the bridge code in the kernel would attach the input interface (physical, but I think it could also be something like vlan1234@eth0) as packet metadata, and that I could then use ‘meta iifname’ to write rules like this:
iifname "br0" meta iifname { "eno1" , "enp2s0f0" } counter goto wan_input_rules
iifname "br0" meta iifname { "eno2", "enp2s0f1" } counter goto ot_input_rules
iifname "br0" counter goto everything_else_input_rules
Evidently it doesn’t actually work that way: packets arriving on any interface wind up in the third rule, even when the physical input interface was eno1. I wanted to debug this by logging the physical interface name, but I couldn’t figure out how, so I tried nftrace, being uncertain whether ‘nft monitor’ would include any of the kernel’s packet metadata. Either it does not, or else the metadata I think should be there is not there.
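For reference, this is roughly how I enabled tracing — a temporary rule at the top of my input chain (the chain and table names here are from my own ruleset, so adjust accordingly):

```
# temporary debug rule, inserted first in the chain
iifname "br0" meta nftrace set 1 counter
```

and then watching for trace events in another terminal with ‘nft monitor trace’. The trace output shows the rules a packet traverses, but I don’t see the physical ingress interface anywhere in it.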
I do have ‘stp: false’ set on the bridge interface because I need to make sure I’m not sending spanning tree packets upstream. I don’t think that’s blocking the metadata, though.
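In case the bridge definition matters, it looks roughly like this (netplan syntax; this is a sketch of my config, with the same interface names as above):

```
network:
  version: 2
  bridges:
    br0:
      interfaces: [eno1, eno2, enp2s0f0, enp2s0f1]
      parameters:
        stp: false
```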
I’m not exactly sure what to do next. What I really need is to filter both traffic forwarded across the bridge (‘forward’) and traffic destined for the host itself (‘input’), based on which interface or VLAN it came in on.
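Schematically, what I’m after is something shaped like this (statement of intent, not working syntax — the angle-bracket parts are the match I don’t know how to express):

```
# traffic crossing the bridge: classify by ingress port
chain bridge_forward {
    <came in via eno1 or enp2s0f0> goto wan_forward_rules
    <came in via eno2 or enp2s0f1> goto ot_forward_rules
}

# the same classification for packets addressed to the host
chain bridge_input {
    <came in via eno1 or enp2s0f0> goto wan_input_rules
    ...
}
```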
Is there some way I can ‘log’ from a rule and have ‘meta iifname’ appear in the output, or see it via ‘nft monitor’, so I can get a better view of the packet nftables is actually operating on?
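To be concrete: as far as I can tell the log statement only takes a fixed prefix, along the lines of

```
iifname "br0" log prefix "bridge-in: " counter
```

and I can’t interpolate a meta expression into it. What I’m hoping for is the physical in-interface showing up in the resulting kernel log line (the way PHYSIN= did in iptables LOG output for bridged packets, if I remember correctly) or in the ‘nft monitor trace’ output.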
–C