I have a few small shared compute workstations used for very compute-heavy research work, and I am trying to maximize their compute performance when they are used in a SLURM cluster. Each machine has a 12th Gen Intel(R) Core(TM) i9-12900K, which has asymmetric cores: 8 Performance Cores (P-Cores) that can hyperthread, and 8 Efficiency Cores (E-Cores) that are slower.
I want to restrict the queue/partition so that only logical cores Cpu0-Cpu16 can be used by the SLURM queue, since these cores all run at the same speed both when unthreaded and when they are all running hyperthreads. Mixing in even one slow core kills parallel performance.
When I set NodeName=node0 CPUs=16 RealMemory=64000 State=UNKNOWN
in the slurm.conf file and run 16-core jobs, the physical cores 0,2,4,6,8,10,12,14,16, plus some hyperthreads 1-7, plus E-Cores 17-24, show up as in use (via top). I want just cores 0-16 available. Is there an affinity setting somewhere to manage this?
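One thing worth checking first is exactly which logical CPU IDs map to P-Cores versus E-Cores on these machines, since the numbering is not guaranteed. A quick sketch, assuming a hybrid-aware kernel that exposes the `cpu_core`/`cpu_atom` sysfs nodes (verify these paths exist on your systems):

```shell
# Map logical CPU IDs to core types on hybrid Intel CPUs.
# Recent kernels expose these sysfs files for Alder Lake;
# confirm they exist on your machines before relying on them.
cat /sys/devices/cpu_core/cpus   # logical CPUs backed by P-Cores
cat /sys/devices/cpu_atom/cpus   # logical CPUs backed by E-Cores

# Per-CPU view of the core/socket/thread-sibling layout:
lscpu --all --extended
```

Whatever ranges these report are the ones to use in any affinity or cpuset configuration below.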
I think this may be possible with numactl --cpunodebind, but I haven't been able to figure it out.
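For what it's worth, numactl --cpunodebind binds at NUMA-node granularity, and the i9-12900K is a single NUMA node, so it can't separate P-Cores from E-Cores; numactl --physcpubind, or pinning the slurmd daemon itself, seems more promising. A minimal sketch using a systemd drop-in, assuming slurmd runs as a systemd service and that the P-Core logical CPUs turn out to be 0-15 (substitute whatever range your machines actually report):

```
# /etc/systemd/system/slurmd.service.d/cpuset.conf
[Service]
# Confine slurmd, and therefore every job step it forks,
# to the P-Core logical CPUs via the cgroup cpuset controller.
AllowedCPUs=0-15
```

Then reload and restart: systemctl daemon-reload && systemctl restart slurmd. Because job steps are children of slurmd, they inherit this cpuset regardless of what Slurm's own task affinity decides.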
It would also be nice, but not required, if I could then add the 8 slower E-Cores to a different partition/queue so they can still be used, but without ever mixing P-Cores and E-Cores in one job.
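One pattern sometimes suggested for this (a sketch only, not tested; the node names, ports, and memory split here are hypothetical) is to run two slurmd instances on the same host, one per core type, each pinned to its own CPU range, and point a separate partition at each:

```
# slurm.conf sketch: two logical Slurm nodes on one physical host.
# Requires starting two slurmd daemons (slurmd -N node0-p and
# slurmd -N node0-e), each confined to its CPU range, e.g. with
# a per-service AllowedCPUs cpuset as above.
NodeName=node0-p NodeHostname=node0 Port=6819 CPUs=16 RealMemory=48000 State=UNKNOWN
NodeName=node0-e NodeHostname=node0 Port=6820 CPUs=8 RealMemory=16000 State=UNKNOWN

PartitionName=pcores Nodes=node0-p Default=YES State=UP
PartitionName=ecores Nodes=node0-e Default=NO State=UP
```

Since each partition maps to a disjoint logical node, a single job can never span both core types.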
Output from top during two 8-core SLURM jobs: