As far as I know, the following words in C11 describe the concept of “object representation”:
6.2.6.1p4:
Values stored in non-bit-field objects of any other object type consist of n × CHAR_BIT bits, where n is the size of an object of that type, in bytes. The value may be copied into an object of type unsigned char [n] (e.g., by memcpy); the resulting set of bytes is called the object representation of the value. Values stored in bit-fields consist of m bits, where m is the size specified for the bit-field. The object representation is the set of m bits the bit-field comprises in the addressable storage unit holding it.
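In code, the copy that this passage describes might look like the following minimal sketch (the byte values printed are implementation-defined, since they depend on the size and byte order of int on the platform):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int x = 42;
        unsigned char rep[sizeof x];   /* unsigned char [n], n = sizeof x */

        /* Copy the value into the array; these bytes are the
           object representation of the value stored in x. */
        memcpy(rep, &x, sizeof x);

        for (size_t i = 0; i < sizeof x; i++)
            printf("%02x ", rep[i]);
        putchar('\n');
        return 0;
    }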
I’ll give an example of this passage based on my understanding:
Suppose our system uses two’s complement. Then for:
char a = -128;
The object representation of the value -128 should be: 1 0 0 0 0 0 0 0
Just as the standard states:
6.2.6.1p3:
Values stored in unsigned bit-fields and objects of type unsigned char shall be represented using a pure binary notation.
So this object representation should use pure binary notation, and the sign bit should be regarded as a value bit. (That is just my understanding.)
But if we also have:
unsigned char b = 128;
The object representation of the value 128 should likewise be: 1 0 0 0 0 0 0 0
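To illustrate, here is a small check of this situation (a sketch that assumes two’s complement and an 8-bit char, and uses signed char explicitly since the signedness of plain char is implementation-defined); on such an implementation both bytes come out as 0x80:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        signed char   a = -128;   /* assumes two's complement, CHAR_BIT == 8 */
        unsigned char b = 128;
        unsigned char ra, rb;

        memcpy(&ra, &a, 1);       /* object representation of a */
        memcpy(&rb, &b, 1);       /* object representation of b */

        printf("a: %02x, b: %02x, same bytes: %d\n",
               ra, rb, memcmp(&a, &b, 1) == 0);
        return 0;
    }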
Are the object representations of these two values the same? That seems impossible, because the standard says:
6.2.6.1p4:
Two values (other than NaNs) with the same object representation compare equal, but values that compare equal may have different object representations.
So where did my understanding above go wrong? I believe there is some mistake in my understanding of “object representation”. It may be very elementary, but I would be grateful if you could point it out.
The text in 6.2.6.1p4 is implicitly referring to two values of the same type having the same object representation.
In general, different values of different types may happen to have the same object representation, e.g. the float 3.14f and the int 0x4048f5c3, or a negative and a positive integer as in your example.
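For instance, the float/int pair can be checked like this (a sketch that assumes a 32-bit int and a 32-bit IEEE-754 float, as on common platforms); memcmp reports that the two objects hold the same bytes even though the values belong to different types:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float f = 3.14f;          /* IEEE-754 single precision: bits 0x4048f5c3 */
        int   i = 0x4048f5c3;     /* assumes 32-bit int */

        if (sizeof f == sizeof i)
            printf("same object representation: %s\n",
                   memcmp(&f, &i, sizeof f) == 0 ? "yes" : "no");
        return 0;
    }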