Why do modern compilers prefer SSE over FPU for single floating-point operations
I recently tried to read the disassembly of my compiled code and found that many of the floating-point operations are performed using XMM registers and SSE instructions. For example, the following code: