As far as I know, so-called “fat binaries” (executable files that contain machine code for multiple systems) are only really used on Apple computers, and even there it seems like Apple only used them because it needed to transition from PowerPC to x86.
These days a lot of software is cross-platform, and it seems like making a single fat binary would be in many ways simpler than keeping track of a dozen or so different downloads for each combination of operating system and architecture, not to mention somehow conveying to the customer which one they want.
I can come up with plenty of guesses as to why this approach never caught on, for instance:
- A lack of cross-compilation tools making multi-OS binaries infeasible
- You need to test the code on each OS anyway, so you already have to have systems that can compile natively for each OS
- Apparently 32-bit programs “just work” on 64-bit machines already
- Dynamic linking works differently on each OS, so a “fat library” might not work even if a “fat application” would
But since I always work with a library or framework that hides all these OS-specific and architecture-specific details from me, I don’t know how true any of that really is, or if there are even more issues I don’t know about. So, what are the actual reasons why fat binaries aren’t generally used to create multi-architecture and/or multi-OS software? (outside of Apple)
A fat binary approach makes most sense if:
- Both architectures coexist on the same system
- Everything else is more or less the same for all architectures
That’s why they are not used for cross-platform code (neither criterion applies), or to support different Linux distributions with one binary (the first criterion doesn’t apply, and the second only to a certain degree).
On Linux, both criteria would still apply if you want to support both 32 and 64 bit on a single Linux distribution. But why bother, if you already have to support multiple distributions?
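As a concrete illustration of that Linux case (a minimal sketch; it assumes a distribution with 32-bit multilib support installed, and the file names are just examples), the same source is simply compiled twice and shipped as two separate downloads rather than as one fat file:

```c
/* hello.c -- the same source serves both the 32-bit and the 64-bit build */
#include <stdio.h>

int main(void) {
    /* sizeof(void *) is 4 in the 32-bit build and 8 in the 64-bit build */
    printf("pointer size: %zu bytes\n", sizeof(void *));
    return 0;
}
```

Building with gcc -m32 -o hello32 hello.c and gcc -m64 -o hello64 hello.c yields two distinct ELF executables (file hello32 hello64 shows the difference), and there is no standard tool that glues them into a single file; FatELF was an attempt to provide exactly that.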
On Windows, the transition from 16 bit to 32 bit initially happened with the introduction of Windows NT, which was a major deviation from the 16-bit Windows world in many regards (virtual memory, multi-user access control, API changes…). With all these changes, it was better to keep the 32-bit and 16-bit worlds separate. NT already had the concept of “subsystems” supporting different OS “personae” (Win32, POSIX), so making Win16 a third subsystem was a straightforward choice.
The Win32 to Win64 transition didn’t involve similarly major changes, but Microsoft used the same approach anyway, probably because it was tried and proven.
Internet-age distribution logistics disincentivize fat binaries in two ways:
- The point of sale does not involve physical goods, so there is no pressure to keep the number of SKUs low, as there is when products compete for retail shelf space and customers have limited opportunities to make a purchase.
- The cost of bandwidth favors delivering just the minimum necessary bits for a particular software package. Shipping a fat binary down the wire degrades both the customer experience and the seller’s infrastructure efficiency.
Fat binaries made more sense when software shipped as shrink-wrapped physical media.
Part of the reason why fat binaries did not succeed is that there is more than the ABI and processor (actually, instruction set) specification that can invalidate a binary executable. A binary executable often depends a lot on other resources, in particular dynamic libraries (see the DLL hell), external services (think of a DBMS like PostgreSQL…), system configuration (e.g. the location of configuration files under /etc/ on Linux), and so on.
Even just for Linux/x86-64 it is difficult in practice to make a binary executable that runs on every Linux distribution (because it is often tied to specific versions of libc or libstdc++). FatELF exists but was never very successful.
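To make the libc dependency concrete, here is a minimal sketch (glibc-specific: gnu_get_libc_version is a GNU extension) showing that an ordinary binary is tied to the C library it was built against:

```c
/* glibc_version.c -- print the glibc version the binary is running against */
#include <stdio.h>
#include <gnu/libc-version.h>   /* GNU extension, not portable */

int main(void) {
    printf("running against glibc %s\n", gnu_get_libc_version());
    return 0;
}
```

Even without calling such functions explicitly, the linker records versioned symbol references (for example printf@GLIBC_2.x), which is why a binary built on a recent distribution often refuses to start on an older one.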
Even with a well-defined ABI and instruction set, the optimization would be different on various processor brands and generations (see the -mtune=native x86 optimization flag of GCC).
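As an illustration (a sketch; the exact effect depends on the GCC version and on the CPU of the build machine), the very same translation unit can be compiled with different tuning targets:

```c
/* sum.c -- identical source; code generation varies with -mtune/-march */
#include <stddef.h>

double sum(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];   /* how this loop is scheduled/vectorized depends on the tuning flags */
    return s;
}
```

gcc -O2 -mtune=native -c sum.c tunes instruction scheduling for the build machine while remaining runnable on any x86-64 CPU, whereas gcc -O2 -march=native -c sum.c may additionally emit instructions (AVX, for instance) that simply do not exist on other processors, so a single “optimal” x86-64 binary does not really exist.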
Apple partly succeeded with fat binaries only because it provides a very closed ecosystem of computing resources.
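For comparison, here is roughly what that looks like on a current Mac (a sketch; it assumes macOS with the Xcode command-line tools installed):

```c
/* universal.c -- built once as a universal ("fat") Mach-O binary on macOS */
#include <stdio.h>

int main(void) {
    printf("hello from a universal binary\n");
    return 0;
}
```

clang -arch x86_64 -arch arm64 -o universal universal.c produces a single file containing both slices, and lipo -info universal lists both architectures; this works smoothly largely because Apple controls the compiler, the linker, the executable format (Mach-O) and the operating system all at once.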
Free software is another way to address the portability concern: if an application is free software (carefully coded for portability), it is quite easily ported to similar systems. And even if the original source code does not work as intended on your system, you can usually adapt it (or pay someone to do the work) reasonably easily (of course, free software tied to a particular OS, ABI or processor is not easy to port; that will take more effort). Standards like POSIX or the Linux Standard Base also help.
You could pay (or ask) someone to port some (free) software with available source code, but it is unrealistic to port a binary executable.
Finally, several frameworks exist to help with porting to several operating systems (provided the source code is available), e.g. Qt and POCO.
Even using a well-specified bytecode like the JVM’s is not always a guarantee of portability: some Java applications are well known not to be portable (e.g. because they expect a particular file hierarchy and naming).
BTW, computer systems are probably much less heterogeneous today than in the 1980s or early 1990s (or in the mainframe era).
Finally, fat binaries are fat: you will spend a lot of resources (build time, bandwidth, executable size) on a portability concern that might not matter to many people. Remember the aphorism: “there is no portable software, only software which has been ported” (to some particular systems).