I have the following C function, which multiplies two unsigned shorts together and stores the result in an unsigned int. The function then prints whether or not the resulting value has the most significant bit set.
#include <stdio.h>

void unsigned_short_mult_test(unsigned short a, unsigned short b) {
    unsigned int x = a * b;
    if (x >= 0x80000000)
        printf("%u >= %u", x, 0x80000000);
    else
        printf("%u < %u", x, 0x80000000);
}
For the test, I pass the value 65535 for both a and b in main():
int main() {
    unsigned short a = 65535;
    unsigned short b = 65535;
    unsigned_short_mult_test(a, b);
    return 0;
}
With compiler optimizations enabled (-O1 or above), this always prints the incorrect result:

4294836225 < 2147483648
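For reference, the mathematical product is 65535 * 65535 = 4294836225 (0xFFFE0001), which is greater than 0x80000000 (2147483648), so I would expect the first branch to be taken.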
However, this makes sense to me because it seems like the compiler promotes a and b to int for the multiplication, then converts the result back to unsigned int to be stored in x. The optimization assumes that the product of two ints cannot be larger than the maximum value of an int, and simply removes the first branch of the if statement from the resulting machine code.
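If my understanding is right, the multiplication is effectively carried out in signed int arithmetic, roughly like this (this is just my mental model of the promoted expression on a platform with 32-bit int, not actual compiler output):

// Integer promotion: both unsigned short operands are converted to int,
// so the multiplication is done in signed arithmetic; its mathematical
// result, 4294836225, does not fit in a 32-bit int. The result is then
// converted back to unsigned int for the assignment.
unsigned int x = (unsigned int)((int)a * (int)b);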
However, when I place the same code directly in main, this optimization does not occur:
#include <stdio.h>

int main() {
    unsigned short a = 65535;
    unsigned short b = 65535;

    // The same code as in unsigned_short_mult_test
    unsigned int x = a * b;
    if (x >= 0x80000000)
        printf("%u >= %u", x, 0x80000000);
    else
        printf("%u < %u", x, 0x80000000);

    return 0;
}
This prints the correct result:

4294836225 >= 2147483648
Why does this optimization only seem to occur in the function, but not when the same code is placed directly in main?