From this SO answer it is my understanding that the following C code is bad practice, because declaring c as a char makes a char value like 0xFF indistinguishable from the EOF constant:
char c;
while ((c = getchar()) != EOF) {
...
}
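For contrast, my understanding is that the recommended version declares c as an int, so every possible byte value remains distinguishable from EOF. A minimal sketch of that version:

#include <stdio.h>

int main(void) {
    int c;  /* int, not char: wide enough for every unsigned char value plus EOF */
    while ((c = getchar()) != EOF) {
        /* ... process c ... */
    }
    return 0;
}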
However, in this SO answer detailing the type conversion process when two integral operands of different types are operated on (in this case with !=), the two operands are converted to a common type, typically the wider of the two; for a char compared against an int, the char undergoes integer promotion to int.
Question 1:
Why, then, if the char 0xFF were found in a file, would it not be converted to type int and then compared to EOF?
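If it helps, here is the minimal test I have in mind (assuming char is signed on this platform and EOF is -1):

#include <stdio.h>

int main(void) {
    char c = (char)0xFF;  /* the byte 0xFF as it would be stored in a char */
    int promoted = c;     /* the same conversion != would perform */

    printf("promoted value: %d\n", promoted);  /* prints -1 if char is signed */
    printf("equal to EOF? %s\n", promoted == EOF ? "yes" : "no");
    return 0;
}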
Question 2:
In a case where the char 0xFF (in binary, 1111 1111) is converted to an int (let's say 16 bits for simplicity), does it then become 0000 0000 1111 1111? If so, would a signed char lose its sign and change in decimal value from -1 to 255?
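A sketch of the difference I am asking about, assuming 8-bit chars and two's complement representation (so the exact width of int does not matter):

#include <stdio.h>

int main(void) {
    signed char   sc = (signed char)0xFF;  /* bit pattern 1111 1111 */
    unsigned char uc = 0xFF;               /* same bit pattern      */

    int from_signed   = sc;  /* sign-extended: upper bits copy the sign bit, value -1 */
    int from_unsigned = uc;  /* zero-extended: upper bits are zero, value 255         */

    printf("signed char 0xFF as int:   %d\n", from_signed);
    printf("unsigned char 0xFF as int: %d\n", from_unsigned);
    return 0;
}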