Why don’t computers store decimal numbers as a second whole number?

Computers have trouble storing fractional numbers whose denominator is not a power of 2. This is because the first binary digit after the point is worth 1/2, the second 1/4 (that is, 1/2^1 and 1/2^2), and so on.

Why deal with all sorts of rounding errors when the computer could have just stored the decimal part of the number as a second whole number (which would therefore be exact)?

The only complication I can think of is repeating decimals (in base 10), but that could have had a special-case solution (like the one we currently have for infinity).

5

There are actually number formats that do that.

Binary-coded decimal (BCD) arithmetic has the computer work in base 10. The reason you rarely run into it is that it wastes space: each decimal digit takes a minimum of four bits, even though four bits could otherwise store 16 distinct values, not just 10. (It can also be slower, but hardware-accelerated BCD math works just fine.) This is, in fact, exactly what most calculators do, which is why there are certain classes of rounding problems you’ll never hit on a $5 Casio that will eat your lunch on a desktop computer.

The other route you can take is rational numbers – that is, a numerator and a denominator, stored as integers. This is available (built in or as a library) in many languages, is exact, and stores everything in native binary formats. The problem is that, at the end of the day, users probably do not want to see fractions like 463/13, or even 35 and 8/13. They want to see 35.615…, and the moment you get there, you face all the typical problems. Add in that this format takes even more space and can be significantly slower than floating-point arithmetic, and you’ll find no computers use it by default.
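To make the trade-off concrete, here is a minimal sketch using Python's standard `fractions` module (the 463/13 value is the one from the paragraph above):

```python
from fractions import Fraction

# Exact rational arithmetic: numerator and denominator stored as integers.
price = Fraction(463, 13)      # exactly 463/13, no rounding
total = price * 3              # exactly 1389/13, still no rounding

print(total)                   # -> 1389/13

# The moment we display it as a decimal for the user, the usual
# binary-float rounding reappears:
print(float(total))            # roughly 106.84615384615384
```

Note that every intermediate result stays exact; only the final conversion to `float` for display loses information.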

So: computers can do what you want, but it’s slow and it wastes space, so they only do it when they really have to. The rest of the time, the speed and space savings of floating point are a better trade-off.

8

There are numerous ways of storing fractional numbers, and each of them has advantages and disadvantages.

Floating point is, by far, the most popular format. It works by encoding a sign, a mantissa, and a signed base-2 exponent as integers and packing them into a fixed number of bits. For example, you could have a 32-bit mantissa of 0.5 (encoded as 0x80000000) and a 32-bit signed exponent of +3 (0x00000003), which would decode to 4.0 (0.5 * 2^3). Floating-point numbers are fast, because they are implemented in hardware, and their absolute precision scales with magnitude – the smaller the number, the finer the absolute precision – so the relative rounding error stays roughly constant. Floats are excellent for values sampled from a continuous domain, such as lengths, sound pressure levels, light levels, etc., and because of that, they are commonly used in audio and image processing, as well as statistical analysis and physics simulations. Their biggest downside is that they are not exact: they are prone to rounding errors, and they cannot accurately represent all decimal fractions. All the mainstream programming languages have a floating-point type of some sort.
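The inexactness is easy to demonstrate: 1/10 has no finite binary expansion, so the classic example below fails exact comparison while still being correct to within a tiny relative error:

```python
import math

# 0.1 and 0.2 are both rounded to the nearest binary fraction, so their
# sum is not exactly the binary rounding of 0.3:
a = 0.1 + 0.2
print(a)                      # 0.30000000000000004
print(a == 0.3)               # False

# But the relative error is tiny, so a tolerance-based comparison passes:
print(math.isclose(a, 0.3))   # True
```

This is why float comparisons in real code should generally use a tolerance rather than `==`.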

Fixed-point works by using sufficiently large integers and implicitly reserving some of their bits for the fractional part. For example, a 24.8 fixed-point number reserves 24 bits for the integer part (including sign) and 8 bits for the fractional part; right-shifting the number by 8 bits gives us the integer part. Fixed-point numbers used to be popular when hardware floating-point units were uncommon, or at least much slower than their integer counterparts. While fixed-point numbers are somewhat easier to handle in terms of exactness (if only because they are easier to reason about), they are inferior to floats in pretty much every other regard – they have less precision, a smaller range, and because extra operations are needed to correct calculations for the implicit shift, fixed-point math today is often slower than floating-point math.
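A minimal sketch of that implicit-shift bookkeeping (the 24.8 layout matches the example above; the helper names are illustrative):

```python
FRAC_BITS = 8            # 24.8 fixed point: 8 bits after the binary point
ONE = 1 << FRAC_BITS     # the integer 256 represents the value 1.0

def to_fixed(x: float) -> int:
    return round(x * ONE)

def fixed_mul(a: int, b: int) -> int:
    # A plain integer multiply doubles the number of fraction bits,
    # so we shift back down -- this is the extra correction step
    # that makes software fixed-point slower than hardware floats.
    return (a * b) >> FRAC_BITS

def to_float(a: int) -> float:
    return a / ONE

x = to_fixed(3.5)    # stored as 896
y = to_fixed(2.25)   # stored as 576
print(to_float(fixed_mul(x, y)))   # 7.875
```

Addition and subtraction need no correction at all, which is why fixed-point was attractive on integer-only hardware.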

Decimal types work much like floats or fixed-point numbers, but they assume a decimal system, that is, their exponent (implicit or explicit) encodes a power of 10, not a power of 2. A decimal number could, for example, encode a mantissa of 23456 and an exponent of -2, and this would expand to 234.56. Decimals, because the arithmetic isn’t hard-wired into the CPU, are slower than floats, but they are ideal for anything that involves decimal numbers and needs those numbers to be exact, with rounding occurring in well-defined spots – financial calculations, scoreboards, etc. Some programming languages have decimal types built in (e.g. C#), others require libraries to implement them. Note that while decimals can accurately represent non-repeating decimal fractions, their precision isn’t any better than that of floating-point numbers; choosing decimals merely means you get exact representations of numbers that can be represented exactly in a decimal system (just like floats can exactly represent binary fractions).
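Python's standard `decimal` module illustrates both halves of that point: decimal fractions become exact, but repeating decimals like 1/3 still get rounded, just at a well-defined precision:

```python
from decimal import Decimal, getcontext

# 0.1 is exact as a decimal, unlike in binary floating point:
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# But precision is still finite; 1/3 is no more exact here than in binary:
getcontext().prec = 10
print(Decimal(1) / Decimal(3))                            # 0.3333333333
```

The rounding context (`getcontext()`) is the "well-defined spot" mentioned above: the programmer chooses the precision and rounding mode rather than inheriting whatever the hardware does.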

Rational numbers store a numerator and a denominator, typically using some sort of bignum integer type (a numeric type that can grow as large as the computer’s memory allows). This is the only data type of the bunch that can accurately model numbers like 1/3 or 3/17, as well as operations on them – rationals, unlike the other data types, will produce correct results for things like 3 * 1/3. The math is pretty straightforward, although keeping fractions in lowest terms requires repeated GCD computations, and numerators and denominators tend to grow with each operation. Some programming languages have rational types built in (e.g. Common Lisp). Downsides of rationals include that they are slow (many operations require reducing fractions), and that many common operations are hard or impossible to implement exactly; most implementations will degrade the rational to a float when this happens (e.g. when you call sin() on a rational).
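Both behaviors – exact results for 3 * 1/3, and falling back to floats for irrational operations – can be seen with Python's `fractions` module:

```python
from fractions import Fraction
import math

third = Fraction(1, 3)
print(3 * third)                # 1 -- exact, impossible with floats or decimals
print(third + Fraction(3, 17))  # 26/51, automatically reduced via gcd

# sin() has no rational result, so the Fraction is converted to a float
# first and the answer is an ordinary (inexact) float:
print(math.sin(third))
```

Note how the result of the addition is already in lowest terms; the GCD reduction happens inside every arithmetic operation, which is part of why rationals are slow.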

BCD (Binary Coded Decimal) uses “nibbles” (groups of 4 bits) to encode individual digits; since a nibble can hold 16 different values, but decimal numbers require only 10, there are 6 “illegal” values per nibble. Like decimals, BCD numbers are decimal-exact, that is, calculations performed on decimal numbers work out just like they would if you did them using pen and paper. Arithmetic rules for BCD are somewhat clumsy, but the upside is that converting them to strings is easier than with some of the other formats, which is especially interesting for low-resource environments like embedded systems.
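A toy encoder/decoder shows the nibble packing (these helpers are illustrative, not a real BCD arithmetic library):

```python
def to_bcd(n: int) -> int:
    """Pack each decimal digit of a non-negative int into its own 4-bit nibble."""
    result, shift = 0, 0
    while True:
        result |= (n % 10) << shift   # low decimal digit -> next nibble
        n //= 10
        shift += 4
        if n == 0:
            return result

def from_bcd(b: int) -> int:
    result, mult = 0, 1
    while b:
        result += (b & 0xF) * mult    # each nibble holds one digit, 0-9
        b >>= 4
        mult *= 10
    return result

print(hex(to_bcd(1995)))   # 0x1995 -- the hex digits ARE the decimal digits
print(from_bcd(0x1995))    # 1995
```

The fact that the hex dump of a BCD number reads as its decimal digits is exactly why string conversion is so cheap, and why the format survives in embedded systems and seven-segment-display hardware.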

Strings, yes, plain old strings, can also be used to represent fractional numbers. Technically, this is very similar to BCD, only that there’s an explicit decimal dot, and you use one full byte per decimal digit. As such, the format is wasteful (only 11 out of 256 possible values are used), but it is easier to parse and generate than BCD. Additionally, because all the used values are “unsuspicious”, harmless, and platform-neutral, string-encoded numbers can travel over networks without problems. It is uncommon to find arithmetic being done on strings directly, but it is possible, and when you do it, they are just as decimal-exact as the other decimal formats (decimals and BCD).

4

Floating point numbers represent a vast range of values, which is very useful when you don’t know ahead of time what the values might be, but it’s a compromise. Representing 1/10^100 with a second integer wouldn’t work.

Some languages (and some libraries) offer other representations. Lisp traditionally has arbitrary-precision integers. COBOL does calculations on fixed-point decimal numbers.

You have to select your number representation appropriate to the problem domain.

0

It sounds like you’re describing fixed-point numbers.

Bear in mind that storing the fractional part of a number in a separate location is exactly equivalent to using a single field twice as long and storing the whole and fractional parts in its two halves. In other words, it’s identical to storing the number as an integer while simply assuming a fixed number of decimal places.

Normally floating-point numbers are stored using a binary variation on scientific notation because what usually matters is significant digits. Many other methods exist though. Fixed-point decimal numbers are commonly used for example for storing currency values, where accuracy is critical up to a certain whole number of decimal places but the number of required decimal digits never changes.
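The currency case reduces to exactly the "integer with an assumed decimal point" idea above: store whole cents, and only insert the decimal point when parsing or formatting. A small sketch (the helper names are illustrative):

```python
# Fixed-point money: amounts are integer cents internally, so addition
# and comparison are exact integer operations.
def parse_dollars(s: str) -> int:
    dollars, _, cents = s.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

def format_cents(c: int) -> str:
    return f"{c // 100}.{c % 100:02d}"

subtotal = parse_dollars("19.99") + parse_dollars("0.05")
print(format_cents(subtotal))    # 20.04
```

Summing 19.99 and 0.05 as binary floats gives 20.040000000000003; as integer cents it is exactly 2004. (This sketch ignores negative amounts and currencies with other than two decimal places.)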

The short answer is that floating point was designed for scientific calculations. It can store a number with (up to) a specified number of significant digits, which fits closely with how precision is measured in most scientific calculations.

That tends to be supported in hardware largely because scientific calculations have tended to be the ones that benefited the most from hardware support. For one example, financial calculations are often done with other formats — but financial software usually does little enough real calculation that even though the necessary formats are only supported in software, performance remains perfectly adequate for most financial software.

That would be called BCD; I think you can still use it if you really want to.
However, it’s not really worth it, as:

  1. You’ll very rarely run into a rounding error with 64-bit floating point
  2. It makes the arithmetic complex and inefficient
  3. It wastes 6 of the 16 values in every 4-bit nibble

1
