The problem is distinguishing the end of the input from valid data. The
solution is that getchar returns a distinctive value when there is no more input,
a value that cannot be confused with any real character. This value is called EOF,
for “end of file.” We must declare c to be a type big enough to hold any value
that getchar returns. We can’t use char since c must be big enough to hold
EOF in addition to any possible char. Therefore we use int.
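For context, here is a minimal sketch of the copy loop this passage is describing, written in the usual K&R style, with c declared as int so it can hold every possible character plus EOF:

#include <stdio.h>

/* Copy input to output one character at a time. */
int main(void)
{
    int c;                     /* int, not char, so c can also hold EOF */

    c = getchar();
    while (c != EOF) {         /* EOF signals end of input */
        putchar(c);
        c = getchar();
    }
    return 0;
}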
c is the variable into which we read the result of getchar().
Why? All the ASCII characters are between 0 and 127, and no ASCII character has the value -1 (EOF), so why use int?
Sure, if you are using extended ASCII, or if char is unsigned on your platform, you have a reason to use it, but in general there seems to be no need for it.
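For illustration, here is a sketch of the char variant the question has in mind; the comments note the cases where its behaviour depends on the platform's definition of char, which is what the int declaration avoids:

#include <stdio.h>

int main(void)
{
    char c;                    /* narrower than the range getchar() returns */

    /* If char is unsigned, (char)EOF becomes 255, the comparison with EOF
       is never true, and the loop never terminates.  If char is signed,
       plain 7-bit ASCII works, but an input byte of 0xFF compares equal
       to EOF and ends the loop early. */
    while ((c = getchar()) != EOF)
        putchar(c);
    return 0;
}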