Well, I am reading Programming Windows with MFC, and I came across Unicode and ASCII character codes. I understand the point of using Unicode over ASCII, but what I do not get is how and why it is important to use 8-bit/16-bit/32-bit characters. What good does it do for the system? How does the operating system's processing differ for characters of different widths?
My question here is: what does it mean for a character to be an x-bit character?
It relates to the number of possible letters, digits, and symbols a character set can have. An 8-bit character set can only have 256 possible characters, whereas a 16-bit set can have 65,536 and a 32-bit set can have 4,294,967,296. A character set that large should be able to store every possible character in the world, whereas an 8-bit character set can only store enough characters for the English language. 8-bit character sets were the preferred standard in the early days of computing, when memory was measured in bytes and, in some cases, kilobytes. But with computers with multi-core processors and gigabytes of RAM it is not such a concern anymore (except in some rare cases).
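If it helps to see where those numbers come from, here is a minimal C++ sketch (my own illustration, not from the answer) that simply prints 2^n for each code-unit width:

    #include <iostream>

    int main() {
        // Number of distinct values an n-bit code unit can represent: 2^n.
        std::cout << "8-bit:  " << (1ULL << 8)  << " possible characters\n"; // 256
        std::cout << "16-bit: " << (1ULL << 16) << " possible characters\n"; // 65,536
        std::cout << "32-bit: " << (1ULL << 32) << " possible characters\n"; // 4,294,967,296
        return 0;
    }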
Joel Spolsky, co-founder of this site, actually has a wonderful and brief article on character sets titled The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!).
I found it to be a good read, and it answers many questions about different character sets that you may have, as well as briefly going over the historical reasons for the whole character set mess anyway.
I’m not sure why you’re being voted down; it’s pretty useful stuff to know!
Unicode is logically a 21-bit code. Since modern computers do not conveniently work with 21-bit units, there are various solutions: use 32 bits (4 bytes) per character, wasting a lot of bits, especially if your data is predominantly English; use a special scheme that uses one or two 16-bit units per character; or use a variable number of 8-bit bytes per character. These are known as the UTF-32, UTF-16, and UTF-8 encoding forms.
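As a concrete illustration of the three encoding forms (my addition, not part of the original answer), the following C++11 sketch counts the bytes used to store one non-ASCII character, U+20AC (the euro sign), in each of them:

    #include <iostream>

    int main() {
        // U+20AC (EURO SIGN) stored in each of the three encoding forms.
        const unsigned char utf8[]  = { 0xE2, 0x82, 0xAC }; // UTF-8: three 1-byte units
        const char16_t      utf16[] = u"\u20AC";            // UTF-16: one 2-byte unit (+ null)
        const char32_t      utf32[] = U"\u20AC";            // UTF-32: one 4-byte unit (+ null)

        std::cout << "UTF-8:  " << sizeof(utf8)                     << " bytes\n"; // 3
        std::cout << "UTF-16: " << sizeof(utf16) - sizeof(char16_t) << " bytes\n"; // 2
        std::cout << "UTF-32: " << sizeof(utf32) - sizeof(char32_t) << " bytes\n"; // 4
        return 0;
    }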
Windows internally uses UTF-16, whereas UTF-8 dominates on the Web, for example, so you often need to convert between them. This is nontrivial, but it is usually done with suitable library routines, sometimes implicitly, depending on the programming environment. UTF-32 is rarely used.
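On Windows specifically, that conversion is commonly done with the Win32 functions MultiByteToWideChar and WideCharToMultiByte (or with higher-level wrappers). A minimal sketch of one direction, with error handling mostly omitted:

    #include <windows.h>
    #include <string>

    // Convert UTF-8 text to UTF-16 with the Win32 API (wchar_t is 16-bit on Windows).
    std::wstring Utf8ToUtf16(const std::string& utf8)
    {
        // First call only asks how many UTF-16 code units are needed
        // (cbMultiByte == -1 means "the input is null-terminated").
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, nullptr, 0);
        if (len <= 0)
            return std::wstring();
        std::wstring utf16(len, L'\0');
        // Second call performs the actual conversion, including the terminator.
        MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, &utf16[0], len);
        utf16.resize(len - 1); // drop the embedded null terminator
        return utf16;
    }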
Technically, UTF-16 is very simple for all characters that fit into the 16-bit subspace of Unicode, the Basic Multilingual Plane (BMP), which quite possibly covers all the characters you have ever heard of. UTF-8 is more complex but was designed with a Western emphasis: all ASCII characters are represented as single bytes in UTF-8, so any file that is predominantly ASCII is almost the same size in UTF-8 as in ASCII. This is in contrast to UTF-16, which always uses two bytes per ASCII character.
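The size difference for pure ASCII text can be checked directly; a small sketch of mine, assuming an ASCII-compatible narrow execution character set:

    #include <iostream>
    #include <string>

    int main() {
        // The same ASCII text in UTF-8 and UTF-16. For pure ASCII the UTF-8
        // bytes are identical to the ASCII bytes, so a narrow literal works.
        std::string    utf8  = "hello";   // 1 byte per ASCII character
        std::u16string utf16 = u"hello";  // 2 bytes per character

        std::cout << "UTF-8:  " << utf8.size()                     << " bytes\n"; // 5
        std::cout << "UTF-16: " << utf16.size() * sizeof(char16_t) << " bytes\n"; // 10
        return 0;
    }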