As far as I recall, this has been the general trend over the years. I’m just a student, so I might be wrong:
- Long ago: Integer numbers and calculations. Exact, but unable to represent a large range of values.
- At a later point: Single-precision floating-point numbers (floats) were added, along with FPUs that are incredibly fast at calculating with them.
- Quite recently(?): Double-precision floating-point numbers (doubles) are becoming more common. For example, the OpenGL Shading Language is steadily introducing more functions that work with doubles natively. I think FPUs for doubles are also in the works?
There has also been the shift to 64-bit processing a few years ago, so I’ve got a few questions:
- Are there more options now for calculating with fixed-point numbers (maybe using two int64 registers)?
- Why, as far as I know, are there no dedicated units for fast integer arithmetic aimed at fixed-point numbers?
Maybe most of what I’m saying does not make much sense, but I think the core question remains valid: will we switch to fixed-point arithmetic in the near future?
No.
There is not a snowball’s chance in Hell that numerical computing will switch (back) to fixed-point arithmetic in the near future, or in ANY future.
There are several reasons for this, the biggest one being that doing fixed-point arithmetic requires that the programmer keep track of the final position of the decimal (binary) point in the calculation. This is a LOT of work. The whole point of floating-point arithmetic is to let the computer do the decimal (binary) point bookkeeping, and let the programmer worry about the algorithm.
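To make that bookkeeping concrete, here is a minimal sketch in a hypothetical Q16.16 fixed-point format (the type and the q16_mul helper are made up for this illustration, not any particular library’s API):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical Q16.16 fixed-point format: 32-bit signed values with
 * 16 fractional bits. The names are made up for this illustration. */
typedef int32_t q16_16;
#define Q16_ONE (1 << 16)

/* Multiplying two Q16.16 numbers produces a result with 32 fractional
 * bits; the programmer must remember to shift back down by 16. */
static q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * (int64_t)b) >> 16);
}

int main(void)
{
    q16_16 x = (q16_16)(1.5  * Q16_ONE);   /* 1.5  in Q16.16 */
    q16_16 y = (q16_16)(2.25 * Q16_ONE);   /* 2.25 in Q16.16 */

    q16_16 z = q16_mul(x, y);              /* manual rescaling inside */
    printf("fixed: %f\n", z / (double)Q16_ONE);   /* 3.375 */

    /* With floating point, the hardware tracks the binary point. */
    printf("float: %f\n", 1.5 * 2.25);
    return 0;
}
```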
Key concept: You do not want to run your computation out to the limits of machine accuracy and precision, as some computations take a LONG time to converge to that limit. You just want to get an answer that is “good enough” for your particular purposes. Usually, this is quite a bit less than what the machine in theory can do.
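As a toy illustration (my own sketch, not from any real code base), a Newton iteration for sqrt(2) can simply stop at whatever tolerance the application actually needs, rather than grinding on toward machine epsilon:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double target = 2.0;
    double x = 1.0;            /* initial guess */
    double tol = 1e-6;         /* "good enough" for this hypothetical use */
    int iters = 0;

    /* Stop when the residual is small enough for the purpose at hand,
     * not when the machine can no longer improve the answer. */
    while (fabs(x * x - target) > tol) {
        x = 0.5 * (x + target / x);   /* Newton step for f(x) = x^2 - 2 */
        ++iters;
    }
    printf("sqrt(2) ~= %.10f after %d iterations (tol = %g)\n",
           x, iters, tol);
    return 0;
}
```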
Fast integer multipliers do exist, in the world of integer digital signal processors. Hardware integer divide exists on some processors, but not all. There just isn’t that much call for it: integer division is not that common, compared to the on-chip real estate required to make it much faster than doing it in software, or compared to the difference in effort for integer division vs. floating-point division.
EDIT: To clarify, based on a comment: A floating-point divide unit requires an integer divider inside it, and then some extra stuff to handle normalization of the result mantissa and calculation of the result exponent. The extra stuff is small compared to the integer divider itself. Once you are providing floating-point addition, subtraction, and multiplication, and you decide to provide integer division, the extra cost of providing floating-point division is generally small.
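As a toy sketch of that structure (not IEEE 754, with made-up field widths), a floating-point divide boils down to an integer divide of the mantissas plus a little exponent and normalization work:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Toy floating-point representation for illustration only. */
typedef struct {
    uint64_t mant;  /* normalized so the top bit of a 32-bit mantissa is set */
    int      exp;   /* value = mant * 2^exp */
} toy_float;

static toy_float toy_div(toy_float a, toy_float b)
{
    toy_float r;

    /* The heavy lifting: an integer division of the mantissas.
     * Pre-shifting the dividend keeps enough quotient bits. */
    r.mant = (a.mant << 32) / b.mant;
    r.exp  = a.exp - b.exp - 32;

    /* The "extra stuff": renormalize so the mantissa occupies 32 bits. */
    while (r.mant >= (1ULL << 32)) { r.mant >>= 1; r.exp += 1; }
    while (r.mant <  (1ULL << 31)) { r.mant <<= 1; r.exp -= 1; }
    return r;
}

int main(void)
{
    /* 6.0 = 0xC0000000 * 2^-29, 1.5 = 0xC0000000 * 2^-31 (toy encoding) */
    toy_float six  = { 0xC0000000ULL, -29 };
    toy_float x1_5 = { 0xC0000000ULL, -31 };
    toy_float q = toy_div(six, x1_5);
    printf("6.0 / 1.5 = %f\n", (double)q.mant * pow(2.0, q.exp));  /* 4.0 */
    return 0;
}
```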
The Control Data 6600 initially provided four-function floating point, and integer addition, subtraction, and multiplication, but did NOT provide integer division. It wasn’t seen as critically necessary for the machine’s core customer base. The CDC engineers later noticed that they could use the double-precision floating-point hardware to provide a reduced-range (48 bits vs. 60 bits) integer divide capability, almost for free.
One large Freescale (Motorola) integer DSP that I know of provides a “divide step” instruction: The programmer must stack as many of those instructions as he needs bits in his result to do a full-up division. It is a fair pain to use. I had to deal briefly with it a few years ago, while troubleshooting a normalized least mean squares (adaptive filter) application.
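For what it is worth, a conventional restoring-division step looks roughly like the loop body below (my own sketch; the actual instruction’s semantics may differ), which is why a full divide needs one step per quotient bit:

```c
#include <stdint.h>
#include <stdio.h>

/* Rough sketch of what a "divide step" does: restoring division,
 * producing one quotient bit per step. */
static uint32_t divide_by_steps(uint32_t dividend, uint32_t divisor,
                                uint32_t *remainder)
{
    uint32_t quotient = 0;
    uint64_t rem = 0;   /* wide enough that the shift cannot overflow */

    for (int i = 31; i >= 0; --i) {
        /* One step: shift in the next dividend bit, try to subtract
         * the divisor; success yields a 1 bit in the quotient. */
        rem = (rem << 1) | ((dividend >> i) & 1u);
        quotient <<= 1;
        if (rem >= divisor) {
            rem -= divisor;
            quotient |= 1u;
        }
    }
    *remainder = (uint32_t)rem;
    return quotient;
}

int main(void)
{
    uint32_t rem;
    uint32_t q = divide_by_steps(1000u, 7u, &rem);
    printf("1000 / 7 = %u remainder %u\n", q, rem);   /* 142 remainder 6 */
    return 0;
}
```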
Read Hamming’s Numerical Methods for Scientists and Engineers. (Dover Press is the student’s friend.) In the very beginning of the book, he goes into some detail about why floating-point is used for numerical computation. Also, ask around the EE department, and see if they have a digital signal processing elective.