Recently, I’ve been pondering whether using “just-fit” data types is truly beneficial.
For example, if I have data that will only ever reach a maximum value of 255, I could use a byte (or uint8, to be language-agnostic). However, I believe most of us would simply use int (typically int32) without giving the choice of data type much thought.
This leads me to wonder: why do we even have all these different numeric data types? Is there a tangible benefit, in terms of memory efficiency, to using a smaller, tighter-fitting type?
I’d appreciate insights on whether this practice is worthwhile across different programming environments. I’m a full-stack developer (JS and Go, most of the time) and don’t know much about embedded systems or game development, so please feel free to share your approach from those worlds as well.
I’ve attempted to measure memory usage (and the impact on CPU usage) with various data types, but I didn’t observe any significant improvement in the following languages (see the sketch after this list for the kind of test I ran):
- C#
- Java
- Go
- Haskell
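For reference, this is roughly the kind of Go measurement I ran (a minimal sketch; the slice length n and the use of runtime.MemStats are just my choices), comparing a large []int8 slice against a []int64 slice:

```go
package main

import (
	"fmt"
	"runtime"
)

// heapAlloc returns the current heap usage after forcing a GC,
// so consecutive readings are roughly comparable.
func heapAlloc() uint64 {
	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc
}

func main() {
	const n = 10_000_000 // arbitrary element count

	before := heapAlloc()
	small := make([]int8, n) // 1 byte per element
	afterSmall := heapAlloc()

	big := make([]int64, n) // 8 bytes per element
	afterBig := heapAlloc()

	fmt.Printf("[]int8:  ~%d bytes\n", afterSmall-before)
	fmt.Printf("[]int64: ~%d bytes\n", afterBig-afterSmall)

	// Keep both slices live so the GC doesn't reclaim them mid-measurement.
	_ = small[0]
	_ = big[0]
}
```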
P.S.: I understand that languages running on a managed runtime (the CLR for C#, the JVM for Java) behave differently from ahead-of-time-compiled languages (like C, Go, etc.). Please feel free to share any insights you might have on both kinds of systems.
There are a number of factors that could affect performance.
I) Some programming languages specify that integer arithmetic takes place at full integer width, so
BYTE1 + BYTE2
becomes
Convert BYTE1 to 4BYTE1
Convert BYTE2 to 4BYTE2
4BYTE1 + 4BYTE2 = 4BYTE3
Convert 4BYTE3 to BYTE3
In such cases you get less efficiency.
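Go itself does not promote (byte + byte stays a byte at the language level), but written out by hand in Go, the promoted sequence above looks roughly like this sketch; a promoting language such as C or Java inserts these conversions for you:

```go
package main

import "fmt"

func main() {
	var byte1, byte2 uint8 = 200, 55

	// What a promoting language effectively does behind the scenes:
	wide1 := int32(byte1)  // widen BYTE1 to a full integer
	wide2 := int32(byte2)  // widen BYTE2 to a full integer
	wide3 := wide1 + wide2 // full-width addition
	byte3 := uint8(wide3)  // narrow the result back to a byte

	fmt.Println(byte3) // 255

	// In Go the same addition stays at 8 bits throughout:
	fmt.Println(byte1 + byte2) // also 255
}
```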
II) Processors vary as to whether there is a performance boost doing 1-byte arithmetic compared to 4- or 8-byte arithmetic.
III) Some processors (say “hi”, Intel) are very cache-sensitive. Using small integers can allow your data to fit within the processor cache, so you can get a performance boost even if you have to convert bytes to words to do the arithmetic.
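One way to see the cache effect for yourself is to sum a large slice of int8 versus int64: the element count is identical, but the int8 version touches an eighth of the memory. A hedged Go benchmark sketch (names and the element count are arbitrary; results will vary by CPU and cache size):

```go
package sumbench

import "testing"

const n = 1 << 24 // 16M elements: large enough to spill out of cache as int64

var (
	small = make([]int8, n)
	big   = make([]int64, n)
	sink  int64 // package-level sink so the loops aren't optimized away
)

func BenchmarkSumInt8(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var sum int64
		for _, v := range small {
			sum += int64(v) // widen for the arithmetic, as in point I
		}
		sink = sum
	}
}

func BenchmarkSumInt64(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var sum int64
		for _, v := range big {
			sum += v
		}
		sink = sum
	}
}
```

Run it with go test -bench=. and compare the ns/op of the two benchmarks on your own machine.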