We have 1 byte, which is 8 bits, which is 2^8. Now 2 bytes should be 2 * 1 byte, which is 2 * 2^8 = 2^9, but actually 2 bytes is 2^16. What am I missing here? It seems like 2 bytes isn't 2 * 1 byte, it's more like 1 byte * 1 byte, but that should give you byte^2, which doesn't make sense.

Can someone please explain to me what I am getting wrong here?
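To restate the arithmetic numerically (a quick Python sketch of the numbers above; the variable names are just mine):

```python
# The arithmetic from the question, spelled out.
one_byte = 2 ** 8       # 256 -- what I think "1 byte" equals
doubled = 2 * one_byte  # 512 = 2^9 -- my "2 * 1 byte" reasoning
actual = 2 ** 16        # 65536 -- what 2 bytes can actually represent

print(doubled, actual)          # 512 65536 -- these disagree
print(one_byte ** 2 == actual)  # True -- squaring matches, oddly
```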
> You have one byte, which is 8 bits

So far so good.

> which is 2^8
Your use of “which is” here seems to be the root of your confusion. A more precise statement is:
8 bits can represent 2^8 distinct values.
Or generally:
N bits can represent 2^N distinct values.
If this is unclear, it may help to think about decimal digits (0-9). One decimal digit can represent 10 distinct values (0-9), two decimal digits can represent 10^2 = 100 distinct values (0-99), and N decimal digits can represent 10^N distinct values.
A bit is literally a binary digit. One bit can represent two distinct values (0-1), two bits can represent four distinct values, and N bits can represent 2^N distinct values.
So eight bits can represent 2^8 distinct values, and sixteen bits can represent 2^16 distinct values.
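As a quick illustration (a minimal Python sketch; the variable names are mine), you can count the distinct values directly by enumerating every bit combination:

```python
# Count the distinct values N bits can represent by enumerating them.
from itertools import product

for n_bits in (1, 2, 8, 16):
    # Every possible combination of 0/1 across n_bits positions.
    n_values = sum(1 for _ in product((0, 1), repeat=n_bits))
    print(f"{n_bits:2d} bits -> {n_values} distinct values (2^{n_bits} = {2 ** n_bits})")
```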
There are two aspects: the size and the number of representable states. Increasing the size by one bit multiplies the number of representable states by 2 (you get all the previous states with the additional bit set to 0, and again all the previous states with the additional bit set to 1). Consequently, doubling the size squares the number of representable states (you have N possible states for the first half, each of which can be combined with any of the N states for the second half, so N² in total). If you think in decimal, adding one decimal digit multiplies the number of values by 10 (and doubling the number of decimal digits still squares the number of values).
With 8-bit bytes:

- 1 byte is 8 bits and can represent values from 0 up to 2^8 (not included).
- 2 bytes is 16 bits and can represent values from 0 up to 2^16 (not included).

If you had a machine with 2-decimal-digit bytes:

- 1 byte would be 2 decimal digits and could represent values from 0 up to 100 (not included).
- 2 bytes would be 4 decimal digits and could represent values from 0 up to 10000 (not included).
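A short Python check (my own sketch, not part of the answer) confirms that doubling the width squares the state count, in binary and in decimal alike:

```python
# Doubling the number of digits squares the number of representable states,
# whether the digits are binary (bits) or decimal.
bits_per_byte = 8

one_byte_states = 2 ** bits_per_byte        # 256
two_byte_states = 2 ** (2 * bits_per_byte)  # 65536

assert two_byte_states == one_byte_states ** 2  # squared, not doubled
assert two_byte_states != 2 * one_byte_states

# Same pattern with the hypothetical 2-decimal-digit byte.
assert 10 ** 4 == (10 ** 2) ** 2
print(one_byte_states, two_byte_states)  # 256 65536
```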
If 1 byte were indeed 2^8, then 2 bytes would not be 2*(2^8), but 2^(2*8).

Where you went wrong is in assuming that the unit "byte" is exactly equivalent to the formula 2^8. 2^8 is only the number of distinct states that you can represent with 8 bits. In fact, "byte" as a unit is equivalent to 8 bits (and not 2^8). Therefore, 2 bytes is equivalent to 2*8 bits, and you can represent 2^(2*8) distinct states with these.
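In code (a minimal Python illustration of this point; the helper name is mine):

```python
# "byte" is a unit of size (8 bits), not a count of states.
BITS_PER_BYTE = 8

def distinct_states(n_bytes: int) -> int:
    """Number of distinct values representable in n_bytes bytes."""
    return 2 ** (n_bytes * BITS_PER_BYTE)

print(distinct_states(1))  # 2^8  = 256
print(distinct_states(2))  # 2^16 = 65536, i.e. 2^(2*8), not 2*(2^8)
```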
(Off-topic: To be somewhat more precise, a byte does not necessarily have to be 8 bits wide; that is only the most widespread size for bytes today. If you want a unit that always, unconditionally means exactly 8 bits, that would be the "octet".)
Byte is a term that refers to storage capacity. The contents of a byte-sized storage location are what you do math on. Doubling the value stored in a byte-sized bucket may or may not require two bytes of capacity to store the result. Doubling 2^7 (which fits in one byte) results in 2^8, which requires the combined capacity of two bytes to store. Doubling 2^6 results in 2^7, which still requires only one byte to store.
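A small Python sketch (my own, not part of the answer) makes the capacity point concrete:

```python
# How many whole bytes of capacity a non-negative value needs.
def bytes_needed(value: int) -> int:
    return max(1, (value.bit_length() + 7) // 8)

print(bytes_needed(2 ** 6), bytes_needed(2 * 2 ** 6))  # 1 1 -> still one byte
print(bytes_needed(2 ** 7), bytes_needed(2 * 2 ** 7))  # 1 2 -> doubling spills into a second byte
```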