I’m very new to programming. I just read some articles about algorithms for converting a decimal number or a string to binary. But, as far as I know, computers already hold this data in binary form. So why don’t we just read the raw binary in memory? Isn’t that much easier and faster?
If possible, how to do that in C or Assembly?
I just need the binary form of an input number, for example, to check whether a positive number is a power of two. My idea is: if N = 2^k (N and k are positive integers), then the binary form of N is 100…0 (k zeros).
2
We do work with binary representations of numbers internally, but ultimately people have to use the computer program, and for them raw binary isn’t a usable format. So we do need code for converting string representations of things into binary, but only for new data entered by the user.
People are extremely bad at remembering information-dense representations – that’s why we invented the alphabet in the first place. So if the user enters a price or an age, it comes into the program as a sequence of characters. To do any arithmetic on it, the computer has to convert it to a number, which on virtually all digital computers means creating the bit pattern corresponding to that number.
Far more often the inverse problem arises: the program has stored a value as a number, but the user would like to see the human-readable digit sequence. This means the computer has to format the number back into the input syntax (or remember the original representation).
It is true that the entire software business would become easier if people could learn to understand binary representations, but that isn’t in the cards until we invent some serious neuroprogramming trickery, so for the time being we are stuck with converting numbers back and forth all the time.
1
Yes, it’s true. Both strings and decimals, as well as all other types, are represented in binary.
However, the problem lies in what the binary means. You cannot simply read the bytes of a string as if they were a number – or at least you shouldn’t, if you want to recover the numeric value that the string represents.
Take for example this string in ASCII: “1.5”
In binary, this is represented by an array of bytes, four or more in size (three characters plus a terminating null byte):
'1'  '.'  '5'  '\0'
 31   2E   35   00
If this were to be interpreted as a double, the value would be:
4.07659398311e-315
What does this have to do with the number 1.5? They share the same bytes, but since binary is only as good as its interpretation, this double is meaningless. The interpretation is complicated further when you consider that systems store multi-byte values in big-endian or little-endian order, so the result also depends on the machine you’re working on.
In order to convert this into a double, you’d have to parse the string as it was intended to be read. As it turns out, the double value 1.5 is actually stored as the bit pattern 0x3FF8000000000000.
If you need precision, you generally retain the string form in your program. Similarly, if you need the value in calculations, you keep the float or double form. So what happens if you need both? You keep both, using whichever form is better suited for precision or for calculation. Long past are the days when keeping both in memory might be an issue (save for a few exceptions where speed or space is critical, such as drivers).
If you wanted to determine whether a string representation of a number is a power of 2 directly, the calculation would be more complicated than simply converting it to a number first. Once it is a number, the check is trivial: a positive integer is a power of two exactly when its binary representation contains a single 1 bit (negative values need separate handling, since under two’s complement their bit patterns look entirely different).
4
I will answer your question.
So why don’t we just read raw binary in the memory?
But we are reading raw binary – calculations, networking, it is all based on binary representation. I will go a step further and say that computers ultimately use only binary representation.
Isn’t it much easier and faster? If possible, how to do that in C or Assembly?
Let’s consider it.
Assume that we have a two-button keyboard with only 0 and 1. What would the code for reading a number look like? (pseudocode, warning!):
function readNumber() {
    int number = 0;
    while (!someStopCondition()) {
        number = (number << 1) | readClick();
    }
    return number;
}
Is this really what you want to achieve? You would also have to consider other kinds of numbers, like decimals. How would the user input 2.1?
01000000000001100110011001100110
Yes, this is 2.1 (as a 32-bit float). The point is that computers are for humans, so they should solve problems for us without making things more complex.