Context
I was researching memcpy bandwidth on different platforms and buffer sizes, and some of the runs showed much worse results, even though I was doing what seemed like an appropriate amount of reads/writes and there was virtually no background activity in the OS.
Further research revealed that the noise was coming from suboptimal L2 utilization due to uneven L2 set usage. Here is how I came to that conclusion: for one particular platform, I obtained the Virtual->Physical mappings and extracted bits from the physical addresses of the copied cachelines to get the L2 set index. The number of full L2 sets correlated nicely with the bandwidth drops.
The L2 in question is 8-way associative, with 2 banks and a total size of 256 KB. This means there are 256K / 2 / 64 / 8 = 256 sets in each bank. Indexing 256 sets requires 8 bits. Since 6 bits are used as the cacheline offset and 1 bit selects the L2 bank, 8 - (12 - (6 + 1)) = 3 of those 8 set-index bits come from above the 12-bit page offset, i.e. from the part of the physical address that actually differs from the virtual address. And while the virtual addresses in memcpy are obviously sequential, the physical addresses do not have to be (and, judging by the mapping, aren't), which explains the slowdown.
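To illustrate the kind of check described above, here is a minimal sketch that translates each copied cacheline's virtual address to a physical one via /proc/self/pagemap and histograms the resulting (bank, set) pairs. The bit layout (6-bit line offset, 1 bank-select bit at bit 6, 8 set-index bits at bits 7..14) follows the cache described above, but all identifiers, the buffer size, and the layout assumption are mine, not taken from the original benchmark. Note that reading PFNs from pagemap needs CAP_SYS_ADMIN (or root) on recent kernels; unprivileged reads return a PFN of 0.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PAGE_SHIFT 12
#define LINE_SHIFT 6                                /* 64 B cachelines       */
#define BANK_BITS  1                                /* 2 banks               */
#define SET_BITS   8                                /* 256 sets per bank     */
#define WAYS       8
#define BUCKETS    (1u << (SET_BITS + BANK_BITS))   /* 512 (bank, set) pairs */

/* Translate a virtual address to a physical one via /proc/self/pagemap. */
static uint64_t virt_to_phys(int pagemap_fd, const void *vaddr)
{
    uint64_t va = (uint64_t)(uintptr_t)vaddr;
    uint64_t entry;
    off_t off = (off_t)(va >> PAGE_SHIFT) * sizeof(entry);

    if (pread(pagemap_fd, &entry, sizeof(entry), off) != sizeof(entry))
        return 0;
    if (!(entry & (1ull << 63)))                    /* page not present      */
        return 0;
    uint64_t pfn = entry & ((1ull << 55) - 1);      /* bits 0..54 hold the PFN */
    return (pfn << PAGE_SHIFT) | (va & ((1u << PAGE_SHIFT) - 1));
}

/* (bank, set) bucket under the assumed layout: bits 6..14 of the physical address. */
static unsigned l2_bucket(uint64_t paddr)
{
    return (paddr >> LINE_SHIFT) & (BUCKETS - 1);
}

int main(void)
{
    size_t buf_size = 128u << 10;                   /* e.g. half the L2      */
    char *buf = aligned_alloc(1u << PAGE_SHIFT, buf_size);
    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (!buf || fd < 0) { perror("setup"); return 1; }

    for (size_t i = 0; i < buf_size; i += 1u << PAGE_SHIFT)
        buf[i] = 1;                                 /* fault the pages in    */

    unsigned hist[BUCKETS] = {0};
    for (size_t pos = 0; pos < buf_size; pos += 1u << LINE_SHIFT) {
        uint64_t pa = virt_to_phys(fd, buf + pos);
        if (pa)
            hist[l2_bucket(pa)]++;
    }

    /* Buckets holding at least as many lines as there are ways are "full"
     * and will evict each other. */
    for (unsigned b = 0; b < BUCKETS; b++)
        if (hist[b] >= WAYS)
            printf("bank/set %3u: %u lines\n", b, hist[b]);

    close(fd);
    free(buf);
    return 0;
}
```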
The question
However, I observed this behaviour only on some of the platforms. For example, Cortex-A73 doesn’t seem to demonstrate this behaviour at all (I suspect the L2 is sliced), and on Zen4 and Intel Tiger Lake I see very mild performance drops (<10%), while other platforms (RISC-V cores in particular) slowed down by up to 50%!
The slowdown makes sense to me. What I don't get is how the other platforms deal with it. From my reading, the L2 is usually not sliced on platforms where it is not the LLC; instead, bits are taken straight from the address for indexing. For any L2 larger than PAGE_SIZE x ASSOCIATIVITY x BANKS, this means set conflicts are possible, depending on the VA->PA mapping. Take Zen4: it has a 1 MB, 8-way associative L2 cache. Depending on the number of banks and how they are selected, up to 5 bits of the index have to come from the translated part of the address (1M / 64 / 8 = 2048 index values, i.e. 11 bits; 11 + 6 - 12 = 5).
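As a sanity check on that arithmetic, here is a small helper under the same assumption as the sketch above: a physically indexed cache whose line-offset, bank-select, and set-index bits are taken contiguously from bit 0. Under that layout the bank count cancels out (banking only trades set-index bits for bank-select bits), so the result is just log2(size / ways) - 12. The function names and the geometries plugged in are mine.

```c
#include <stdio.h>

static int ilog2(long x) { int n = 0; while (x > 1) { x >>= 1; n++; } return n; }

/*
 * Index/offset bits that land above the page offset, assuming a physically
 * indexed cache with contiguous offset, bank-select, and set-index bits.
 * Under that layout the answer is log2(size / ways) - page_bits,
 * independent of how the sets are split across banks.
 */
static int translated_index_bits(long cache_bytes, int ways, int page_bits)
{
    int low_bits = ilog2(cache_bytes / ways);   /* offset + bank + set index */
    return low_bits > page_bits ? low_bits - page_bits : 0;
}

int main(void)
{
    /* Zen4: 1 MiB, 8-way L2 -> 5 bits depend on the physical frame number. */
    printf("Zen4 1MB/8-way : %d\n", translated_index_bits(1L << 20, 8, 12));
    /* The 256 KiB, 8-way cache from the Context section -> 3 bits.          */
    printf("256KB/8-way    : %d\n", translated_index_bits(256L << 10, 8, 12));
    return 0;
}
```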
Is the operating system somehow taking the L2 indexing scheme into account when setting up the pagemap?
Do all large L2 caches secretly have a lot of banks?
Am I missing something?