Consider the following printf invocation:
printf("%.2f", 0.995);
What should it print: 0.99 or 1.00?
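Spelled out as a complete program (this is just the snippet above wrapped in main, with a trailing newline added so the output is easy to read; nothing else is assumed):

    #include <stdio.h>

    int main(void)
    {
        /* Prints the value 0.995 rounded to two decimal places. */
        printf("%.2f\n", 0.995);
        return 0;
    }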
Both glibc and the C runtime shipped with MSVC produce 0.99. The C standard (C99) says:
A double argument representing a floating-point number is converted to decimal notation in the style [−]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. … The value is rounded to the appropriate number of digits.
but it doesn’t say which rounding rule is used. The same wording appears in the C standard draft N2310. So, what is the rounding logic? My intuition is that we should see banker’s rounding applied to the digit preceding the 5: 9 is odd, so 995 would round up to the even neighbor 1000, and 0.995 would become 1.00. Why would it be otherwise?
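To illustrate what I mean by banker’s rounding, here is a small sketch using rint(), which rounds according to the current floating-point rounding mode; the default mode is round-to-nearest, ties-to-even. This is only meant to show the tie-breaking rule on exactly representable halves, not to claim that printf takes this code path internally:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* The default rounding mode is round-to-nearest, ties-to-even
           (banker's rounding), which rint() follows. */
        printf("%g\n", rint(99.5)); /* 100: a tie; 99 is odd, 100 is even */
        printf("%g\n", rint(98.5)); /* 98:  a tie; rounds to the even neighbor */
        return 0;
    }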