I'm working on developing autoscaling for Azure SQL Hyperscale Gen5 databases.
To do that, I need to monitor CPU usage so I know when to scale the vCores up or down.
There are two SQL CPU metrics: "cpu_percentage" and "sql_instance_cpu_percent".
From what I've read, sql_instance_cpu_percent shows the CPU usage of the SQL instance as a whole, including system services etc., while cpu_percentage covers only the specific user workload process.
Meaning, sql_instance_cpu_percent should always be HIGHER than cpu_percentage.
However, when running a load test on a database (lots of reads, writes, and deletes), I'm seeing cpu_percentage climb well above sql_instance_cpu_percent.
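
In case it helps, here is a minimal sketch of how I'm sampling the two metrics during the test. It reads sys.dm_db_resource_stats, which I believe exposes the same values via avg_cpu_percent and avg_instance_cpu_percent; the server name, database name, and credentials below are placeholders.

```python
# Sample both CPU metrics from sys.dm_db_resource_stats during the load test.
# (My assumption: these DMV columns back the cpu_percentage /
# sql_instance_cpu_percent charts shown in the portal.)
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<my-server>.database.windows.net,1433;"  # placeholder server
    "Database=<my-hyperscale-db>;"                        # placeholder database
    "UID=<user>;PWD=<password>;Encrypt=yes;"              # placeholder credentials
)

QUERY = """
SELECT TOP (20)
       end_time,
       avg_cpu_percent,           -- user workload CPU (cpu_percentage)
       avg_instance_cpu_percent   -- whole SQL instance CPU (sql_instance_cpu_percent)
FROM   sys.dm_db_resource_stats   -- ~15-second samples, roughly the last hour
ORDER  BY end_time DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for row in conn.cursor().execute(QUERY):
        print(row.end_time, row.avg_cpu_percent, row.avg_instance_cpu_percent)
```
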
Why could that be? Is cpu_percentage perhaps the better metric to consider when planning scaling operations?