This is kind of a lazy question on my part, since I gave up after trying only a few google terms.
But when I read about multi-threaded performance, I read about context switching being bad for performance since the cache is thrashed or whatever.
My question is, is the cache always rendered useless after a context switch? What about when the context being switched to shares much of the stack and maybe even data which was being processed in the previous context? Would grouping thread pools by task or category of task help performance because it increases this effect?
Or is the cache always flushed no matter what because of a possible (inherent?) switch of address space or something like that?
Is the cache always rendered useless after a context switch?
No, not at all.
Context switches are actually a concept of the software that runs on a CPU, not one of the CPU itself. When the software decides to do a context switch, the state of the CPU is saved somewhere convenient, a new or previously-saved state is loaded and execution jumps to the address where the switched-to process was interrupted. That’s a bit oversimplified, but the major point is that the CPU is just following whatever instructions it’s told to execute and doesn’t know or care that the effect of those instructions is a context switch.
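A rough user-space analogy of that "save state, load another, resume" cycle, sketched with Python generators (my own illustration, not anything a real kernel does): each generator's suspended state plays the role of a saved CPU context, and a tiny round-robin scheduler decides when to switch, while the interpreter, like the CPU, just executes whatever it is handed.

```python
from collections import deque

def worker(name, steps):
    # Each 'yield' suspends this context; its state is saved
    # automatically until the scheduler resumes it.
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(tasks):
    ready = deque(tasks)
    log = []
    while ready:
        task = ready.popleft()
        try:
            log.append(next(task))  # resume the saved context
            ready.append(task)      # "context switch": back of the queue
        except StopIteration:
            pass                    # context finished; drop it
    return log

print(round_robin([worker("A", 2), worker("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```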
The cache is the domain of the CPU, which has its own rules for what it keeps in which cache and when evictions or commits to other caches or RAM happen. Say you have a single-core, single-cache CPU running a supervisor and two processes. If the code and data for all three fit in the cache, all three could context switch among themselves forever without ever having to reach outside the processor. This is because what’s in the cache remains valid between time slices. The only reason RAM would ever have to be touched is to make data visible off the processor, such as for peripherals doing I/O.
What about when the context being switched to shares much of the stack and maybe even data which was being processed in the previous context?
Contexts may operate in the same code and data space if they’re threads of the same process. One thing they can’t share is a stack, so each gets its own and the stack pointer is changed during a context switch. If there is data in common that’s being used frequently enough, it will remain in the cache. This effect is leveraged when designing software for very high performance, but it has the drawback of depending on CPU-specific features.
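A minimal sketch of that arrangement using Python's threading module (the names are mine): both threads write into the same shared list, which lives in the process's common data space, while each thread's loop variables live in its own stack frame.

```python
import threading

shared = []                 # common data space: one object, visible to both threads
lock = threading.Lock()     # shared data needs coordination

def work(tag, n):
    # 'tag', 'n' and 'i' are locals on this thread's own stack;
    # 'shared' is the very same object in every thread.
    for i in range(n):
        with lock:
            shared.append((tag, i))

t1 = threading.Thread(target=work, args=("a", 3))
t2 = threading.Thread(target=work, args=("b", 3))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(shared))
# [('a', 0), ('a', 1), ('a', 2), ('b', 0), ('b', 1), ('b', 2)]
```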
Would grouping thread pools by task or category of task help performance because it increased this effect?
It can, depending on the machine architecture, the workload and how you manage it. You want to run the same context on the same core as often as possible because it increases the odds of a cache hit, and cache hits improve performance. That doesn't mean misses are automatically bad; their cost has to be balanced against that of waiting in line for the same core, which may by then have evicted what the context was using out to a socket-wide cache that other cores can access just as quickly.
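One concrete knob for this is CPU affinity. A hedged sketch using Python's os.sched_setaffinity and os.sched_getaffinity (Linux-specific, hence the guard; pid 0 means "this process"): pinning a process to one core keeps its context on that core, which is the cache-warmth effect described above taken to its extreme.

```python
import os

if hasattr(os, "sched_setaffinity"):
    before = os.sched_getaffinity(0)        # set of CPUs we may run on
    one_cpu = {next(iter(before))}          # pick any one allowed CPU
    os.sched_setaffinity(0, one_cpu)        # pin: stay on that core
    print(os.sched_getaffinity(0) == one_cpu)
    os.sched_setaffinity(0, before)         # restore the original mask
else:
    print("sched_setaffinity not available on this platform")
```

In practice you would rarely pin things by hand like this; schedulers already prefer to keep a context on its last core, and hard pinning can backfire by forcing work to queue on a busy CPU.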
You see this sort of thing on a macro scale in NUMA systems, where operating systems will try to assign contexts to the same socket to avoid cache invalidation and data transfer between the sockets.