If I make the following call with glibc:
printf("%.-1f", 34.14);
it prints:
%0.-1f
is this correct? If so, why?
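For reference, here is a complete program that reproduces it (a minimal sketch, compiled with gcc against glibc; the compiler may well warn about the conversion specification):

    #include <stdio.h>

    int main(void)
    {
        /* the exact call from above: precision written literally as -1 */
        printf("%.-1f", 34.14);
        putchar('\n');
        return 0;
    }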
I believe that’s an error. The C standard (ISO/IEC 9899:1999) says:
§7.19.6.1
Each conversion specification is introduced by the character %. After the %, the following appear in sequence:
… snip …
– An optional precision … The precision takes the form of a period (.) followed either by an asterisk * (described later) or by an optional decimal integer;
… snip …
– As noted above, a field width, or precision, or both, may be indicated by an asterisk. In this case, an int argument supplies the field width or precision. … A negative precision argument is taken as if the precision were omitted.
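The asterisk case in that last bullet is easy to check directly; I would expect something like the following to print the default-precision result on any conforming implementation:

    #include <stdio.h>

    int main(void)
    {
        /* precision supplied via '*' with a negative argument:
           per 7.19.6.1 it is taken as if the precision were omitted */
        printf("%.*f\n", -1, 34.14);   /* expect 34.140000 */
        return 0;
    }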
Well, -1 is a negative integer, so it can legitimately be written literally in the format string; and although the text explaining what to do with a negative precision appears in the clause about asterisk-supplied values, surely the same logic should apply to a literally specified value (just as it does for field widths).
So I believe the precision should be ignored in this case, making the format string equivalent to “%f”, and the output should be:
34.140000
(default precision of 6)
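That is, equivalent to the plain “%f” call below:

    #include <stdio.h>

    int main(void)
    {
        /* no precision given at all, so the default of 6 applies */
        printf("%f\n", 34.14);   /* prints 34.140000 */
        return 0;
    }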