I am writing a program that requires me to know the execution time of a function for any given number of loop iterations. While fine-tuning it I discovered that the execution time is non-linear, and I would like to know why. I rewrote the code into something I could share here, ran simulations on it, and got essentially the same results. Here is the code:
void DummyLoop(long y)
{
    long counter = 0;
    long DummyArray[3];
    char loopArray;
    while (counter < y)
    {
        for (loopArray = 0; loopArray < 3; loopArray++)
        {
            DummyArray[loopArray] = 10000 / 50; // Some int division operation
        }
        counter++;
    }
}
I measured the results using one of the MCU's internal timers, and I call the function like this:
T0CON0bits.EN = 1; // Start timer
DummyLoop(10000);
T0CON0bits.EN = 0; // Stop timer
The timer count is then printed over UART once the timer is stopped. These are the times I measured executing the same function with a growing number of loops; each result was measured individually for a different loop-count input.
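For completeness, this is roughly how the timer value could be read and printed. It is a simplified sketch, not my exact code: it assumes Timer0 is already configured in 16-bit mode (MD16 = 1), and that printf/putch has been retargeted to the UART; the names ReadTimer0 and MeasureDummyLoop are just for illustration.

#include <xc.h>
#include <stdint.h>
#include <stdio.h>

void DummyLoop(long y);    // the function shown above

/* Read the 16-bit Timer0 count. Reading TMR0L first latches TMR0H
   when the timer is in 16-bit mode. */
static uint16_t ReadTimer0(void)
{
    uint8_t low  = TMR0L;
    uint8_t high = TMR0H;
    return (uint16_t)(((uint16_t)high << 8) | low);
}

/* Illustrative measurement wrapper: clear the timer, run the loop,
   stop the timer and print the raw tick count over UART. */
void MeasureDummyLoop(long loops)
{
    TMR0H = 0;             // write the high byte first in 16-bit mode
    TMR0L = 0;
    T0CON0bits.EN = 1;     // Start timer
    DummyLoop(loops);
    T0CON0bits.EN = 0;     // Stop timer
    printf("%ld loops: %u ticks\r\n", loops, ReadTimer0());
}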
I added a column that multiplies the time measured for a single loop by the loop count of each test, just to show how the results diverge. The per-loop time does level off after about 1000 loops, but why does the result change so much? I would have expected the time for one loop to scale linearly with any loop count. For reference, the MCU is an 8-bit PIC18F47Q43, compiled with XC8.
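To make the assumption behind that extra column explicit: I treated the total time as a simple multiple of the single-loop time, with no fixed overhead term. Purely as an illustration (ExpectedTime is not part of my code):

/* The linear model behind the extra column: total time is just the
   single-loop time times the loop count, with no constant overhead.
   single_loop_time is in the same units as the table. */
float ExpectedTime(float single_loop_time, long loops)
{
    return single_loop_time * (float)loops;
}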