In his computer architecture course curriculum, Mike P. Wittie writes that
Students need to understand computer architecture in order to
structure a program so that it runs more efficiently on a real machine
I’m asking more experienced programmers and professionals who have a background in this topic:
How does learning computer architecture help you? What aspect benefits you most? How did learning computer architecture change the way you structure your programs?
6
How does understanding physics help people drive a car?
- They understand phenomena like brake fade, and will compensate for it.
- They understand center of gravity and how tires grip the road.
- They understand hydroplaning, and how to avoid it.
- They know how to best enter and exit a curve.
- They are far less likely to tailgate.
And so on. You can drive a car without knowing much about physics, but understanding physics does make you a better driver.
Two examples of how understanding computer architecture can affect the way you code:
- Branch prediction
- Cache size and access patterns (sketched in the code below)
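To make the cache bullet concrete, here is a minimal C sketch (sizes and names are illustrative): both functions sum the same matrix, but one walks memory in order while the other strides across it.

```c
/* Two loops doing identical work on the same data; only the traversal
 * order differs.  The row-major loop visits memory sequentially, so each
 * cache line is fully used before eviction; the column-major loop jumps
 * N ints at a time and takes far more cache misses. */
#include <stdio.h>
#include <stdlib.h>

#define N 2048                      /* illustrative size */

static int grid[N][N];

static long long sum_row_major(void) {
    long long sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += grid[i][j];      /* consecutive addresses */
    return sum;
}

static long long sum_col_major(void) {
    long long sum = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += grid[i][j];      /* jumps N*sizeof(int) bytes each step */
    return sum;
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            grid[i][j] = rand() % 10;
    /* Time these (e.g. with clock()) and the row-major version typically
     * wins by a large factor on real, cached hardware. */
    printf("%lld %lld\n", sum_row_major(), sum_col_major());
    return 0;
}
```

Branch prediction has the same flavor: a loop that repeatedly makes the same comparison runs measurably faster over sorted data than over random data, simply because the predictor guesses right almost every time.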
14
It is basically the same reason as for understanding C and pointers, or maybe even algorithms; the only difference is that with computer architecture you really understand pointers (in fact, pointers seem almost trivial once you know computer architecture).
I cannot claim to be an experienced programmer, but the one book on computer architecture I have read was, for me, the most interesting book I have read on computers and programming. By understanding computer architecture you understand how everything is linked together, how a computer works, and why a program actually runs; you see the big picture. Without computer architecture you cannot truly understand:
- Memory management: heap, stack, virtual memory, the memory hierarchy, and the much-discussed pointers (why stack overflows happen, why deep recursion is risky, etc.; see the sketch after this list)
- Assembly programming (if you want to program embedded systems)
- Compilers and interpreters (if you want to understand optimizations, and when hand-optimizing code is pointless because the compiler already does it)
- Linkers (dynamically linked libraries)
- Operating systems (if you want to read Linux kernel code)
- The list goes on…
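As a minimal C sketch of the memory-management point (sizes are illustrative): the heap object lives until it is freed, while each recursive call claims another fixed-size stack frame, which is exactly why unbounded recursion ends in a stack overflow.

```c
#include <stdio.h>
#include <stdlib.h>

static long deep(long n) {
    volatile char frame[1024];      /* ~1 KB of stack per call */
    frame[0] = (char)(n & 0x7f);    /* touch it so it isn't optimized away */
    if (n == 0) return frame[0];
    return deep(n - 1) + frame[0];  /* not a tail call: every frame stays live */
}

int main(void) {
    /* Heap: explicitly managed; lives until free(), whatever the call depth. */
    int *heap_obj = malloc(sizeof *heap_obj);
    if (!heap_obj) return 1;
    *heap_obj = 42;
    printf("heap value: %d\n", *heap_obj);
    free(heap_obj);

    /* Stack: a fixed-size region.  500 frames is fine; tens of millions
     * of frames would crash with a stack overflow. */
    printf("recursion result: %ld\n", deep(500));
    return 0;
}
```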
From my admittedly subjective point of view, it is by far more interesting, and maybe even more useful, than knowing algorithms.
2
In today’s world, this reasoning matters very little, if at all, for the majority of programming situations.
Where it does apply is when one is writing assembly for a particular processor, when a situation requires taking advantage of a particular architecture, or when one is significantly constrained by the architecture (embedded systems), in which case the previous two points become all the more important.
For programmers of interpreted languages (Perl, Python, Ruby) or languages that run in their own virtual machine (Java, C#), the underlying machine is completely abstracted away. A Java program wouldn’t be coded differently to run on a massive cluster or on one’s desktop.
The cases where the architecture does make a difference, as mentioned, are embedded systems, where it is necessary to consider very low-level concerns specific to that environment. The other extreme also exists: writing high-performance code, either in assembly or in something compiled to native code (not running in a virtual machine). At these extremes, one is concerned with what fits into the processor cache, how fast different parts of memory are to access, and which way the processor’s branch prediction goes (if the processor uses branch prediction at all, or has delay slots instead).
The question of branch prediction, delay slots, or processor cache does not enter into the vast majority of programming problems, and it barely surfaces at all in interpreted or virtual-machine languages.
All that said, it is useful to understand the level one deeper than the one your code is written at; going further than that rapidly hits diminishing returns. A Java programmer should understand a language with manual memory management and pointer arithmetic, to see what is going on under the covers. A C programmer should understand assembly, to realize what pointers really are and where memory really comes from (see the sketch below). Assembly coders should be familiar with the architecture, to understand what the trade-offs of branch prediction and delay slots mean. And to take it even further, those designing processors should be familiar with the quantum mechanics of how semiconductors and gates work at a very basic level.
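For example, here is a tiny C sketch of what that "what pointers really are" understanding looks like; the x86-64 instruction in the comment is just one typical encoding, not the only possibility.

```c
#include <stdio.h>

int main(void) {
    int a[4] = {10, 20, 30, 40};
    int *p = a;                      /* p now holds the address of a[0] */

    /* a[i] is defined as *(a + i): take the base address and move
     * i * sizeof(int) bytes forward.  On x86-64 that is typically a
     * single scaled-index load, e.g. mov eax, [rdi + rsi*4]. */
    for (int i = 0; i < 4; i++)
        printf("a[%d]=%d  *(p+%d)=%d  at %p\n",
               i, a[i], i, *(p + i), (void *)(p + i));
    return 0;
}
```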
3
I’d go so far as to say that anyone who doesn’t understand computer organization is doomed to be a lousy programmer. If you do understand it, you’ll know:
- how to organize your data structures and algorithms to be more cache-efficient (see the sketch at the end of this answer)
- how a function call works, and the implications for calling convention
- the segments of memory, and their implications for variable declarations
- how to read assembly, and thus interpret the output of a compiler
- the effects of instruction-level parallelism and out-of-order instruction scheduling, and what that means for branching
Basically, you’ll learn how a computer actually works, and thus you’ll be able to map your code to it more effectively.
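As a loose illustration of the first point, a C sketch comparing two layouts of the same (hypothetical) particle data; a hot loop that reads only one field gets a much denser working set from the second layout.

```c
#include <stdio.h>
#include <stddef.h>

#define N 100000

/* Array-of-structs: each particle's fields sit together, so a loop
 * that sums only `mass` still drags x, y, z through the cache. */
struct ParticleAoS { float x, y, z, mass; };
static struct ParticleAoS aos[N];

/* Struct-of-arrays: all the masses are contiguous. */
struct ParticlesSoA { float x[N], y[N], z[N], mass[N]; };
static struct ParticlesSoA soa;

static float total_mass_aos(void) {
    float m = 0;
    for (size_t i = 0; i < N; i++)
        m += aos[i].mass;            /* uses 4 of every 16 bytes loaded */
    return m;
}

static float total_mass_soa(void) {
    float m = 0;
    for (size_t i = 0; i < N; i++)
        m += soa.mass[i];            /* uses every byte of every cache line */
    return m;
}

int main(void) {
    for (size_t i = 0; i < N; i++) {
        aos[i].mass = 1.0f;
        soa.mass[i] = 1.0f;
    }
    printf("%.0f %.0f\n", total_mass_aos(), total_mass_soa());
    return 0;
}
```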
14
Understanding the principles of computer architecture requires learning many important principles of programming. Therefore, a knowledge of computer architecture is relevant to programming in any language, no matter how high level.
These important principles include:
- Fundamental data structures like arrays and stacks
- Program structure: Loops, conditionals, subroutines (jump and call)
- Considerations of time and space efficiency
- Systems: the way various components fit together through abstract interfaces. Apparently this is controversial, so I will elaborate. Take the instruction set: a construct with a general form (operands, addressing modes, encoding) that applies to many different kinds of operations, such as arithmetic, logic, memory modification, and interrupt control. This illustrates a general principle of system design, namely that systems are composed of individual subsystems that all share the same abstract interface, and that an abstract interface can accommodate many specific components. The same principle is visible in a web application, which may store the same kind of object (abstract interface) in a database, in memory, or on a web page (subsystems). In each case, the abstract interface specifies the general form without specifying the concrete detail. System design is the art of knowing what to make general and what to make specific, a skill honed by designing and understanding systems, in any language and at any level. (A loose code analogy follows this list.)
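Here is that loose analogy in C, with made-up names: the struct of function pointers plays the role of the abstract interface, and each backing store is a subsystem that fills it in without the caller changing.

```c
#include <stdio.h>

/* The abstract interface: a general form every store must fill in. */
struct store_ops {
    void (*put)(const char *key, int value);
    int  (*get)(const char *key);
};

/* One concrete subsystem: a toy in-memory store. */
static int memory_slot;
static void mem_put(const char *key, int value) { (void)key; memory_slot = value; }
static int  mem_get(const char *key) { (void)key; return memory_slot; }
static const struct store_ops memory_store = { mem_put, mem_get };

/* Generic code: written against the form, not the concrete detail. */
static void save_and_report(const struct store_ops *s) {
    s->put("answer", 42);
    printf("stored value: %d\n", s->get("answer"));
}

int main(void) {
    /* A db_store or page_store exposing the same two operations would
     * plug into save_and_report without changing it. */
    save_and_report(&memory_store);
    return 0;
}
```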
15
Update 2018: How many Software Developers does it take to change a Lightbulb??? Who Cares!? That’s a Hardware Problem!
Generally NO, you don’t need to know computer architecture to be a good programmer; that’s more in the EE realm, IMO. Unless, of course, you’re in embedded systems development, in which case you’re married to the chip and programming right on it, so you’ll need to know the architecture of THAT “computer” (and even then it may not matter). But a general architectural understanding of how computers work won’t be good for much beyond water-cooler discussions.
I would say it’s even less important these days, at the rate hardware prices are dropping and performance is increasing, and given how quickly technologies are changing and languages are evolving. Data structures and design patterns don’t really have much to do with physical hardware architecture, as far as I know.
Programmers generally come from a computer science background, in which case they’ve more than likely taken computer architecture classes; but nowadays operating systems are going virtual, disk space is shared, memory is scalable, etc., etc.
I have been able to make a great career in programming (10+ years), and I have very little educational knowledge of computer architecture, mostly because… I was an Art major!!!
Update: Just to be fair, MY “little educational knowledge” came from my Comp. Sci. minor, and still, I’ve never needed to use anything I learned in my assembly classes or my computer architecture classes in my “programming” career.
Even now, as I play around with some mesh networking ideas implementing the ZigBee spec, I’ve found that using the products and tools available (XBee), I’m able to program in Python and plop the code right onto the chip (SoC) and do some really neat stuff with it, ALL without having to worry about the actual architecture of the chips. There are definitely hardware limitations to be cognizant of, because of the chip size and the intended low price target, but even THAT will matter less in the coming years. So I stand by my “Generally NO” answer.
12
It can help quite a bit, actually. Understanding concepts such as shared memory and inter-processor communication, and the potential delays they involve, can help a programmer arrange data and communication patterns so as not to rely heavily on those mechanisms where that can be avoided. The same is true in other areas of programming, such as horizontal scaling, where distribution of, and communication among, a system of programs is a main focal point.
Understanding the pitfalls and tar pits of a physical system helps you arrange a program to negotiate that system as quickly and efficiently as possible. Simply throwing data into a communication queue and expecting it to scale underestimates what may really need to be put in place, especially if you must scale your software onto larger systems.
Equally, the benefits of something such as functional programming really stand out in light of an understanding of what’s going on at the physical, systems level, which in my opinion gives such concepts even more traction.
One last quick example: understanding stream processing, and why sending data off to a processing unit like a video card is best done in a very specific manner, such as sending off all required calculations and receiving the frame of data back in one fell swoop. In something like video graphics, or even physics calculations, you don’t want a continually open, chatty exchange with such a device; knowing this, you would arrange that part of your program accordingly (see the sketch below).
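A rough C sketch of that arrangement; device_submit and device_wait_frame are hypothetical stand-ins (implemented here as printing stubs) for whatever API the device actually exposes.

```c
/* Batching work to a device: accumulate commands locally, then cross the
 * expensive CPU<->device boundary once per batch instead of once per item. */
#include <stdio.h>
#include <stddef.h>

#define BATCH 1024

struct cmd { float data[4]; };

static void device_submit(const struct cmd *cmds, size_t n) {
    (void)cmds;                               /* toy stub: a real call would */
    printf("submitted batch of %zu\n", n);    /* transfer the whole buffer   */
}
static void device_wait_frame(void) {
    printf("frame received in one piece\n");  /* results come back in bulk */
}

static void render_frame(const struct cmd *work, size_t n) {
    struct cmd queue[BATCH];
    size_t queued = 0;
    for (size_t i = 0; i < n; i++) {
        queue[queued++] = work[i];            /* local accumulation: cheap */
        if (queued == BATCH) {
            device_submit(queue, queued);     /* one expensive crossing */
            queued = 0;
        }
    }
    if (queued) device_submit(queue, queued); /* flush the remainder */
    device_wait_frame();
}

int main(void) {
    static struct cmd work[3000];             /* an illustrative workload */
    render_frame(work, 3000);
    return 0;
}
```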
After all, if programmers did not understand these issues and roadblocks, such solutions would never exist in the form they do.
0
Knowing your architecture lets you recognize when something that’s being asked for is impossible.
I was once asked to write a program to communicate with a PIC over a PC serial port. The protocol had the PIC sending nine-bit bytes, with no flow control, and my program’s UI would display the values of the fields in the packets the PIC sent. The obvious question: how do I read the ninth bit of each byte? The protocol engineer and I decided we would try setting the parity to MARK and treating the ninth bit as a parity bit: I would read its value according to the success or failure of the parity check. Well, I implemented that, and it didn’t work; the values being displayed were obviously wrong. After three solid days of researching PC UART architecture, I found out why.
Here’s why: the PC UART buffers its input. By the time it interrupts the CPU to say “READY TO READ”, any number of bytes may have accumulated in its buffer. There is, however, only one Line Status Register to hold the result of the parity check, so it is impossible to tell which byte in the buffer failed it. Just make sure the buffer is only one byte long, you say? That’s a flow control setting, and as I mentioned, this protocol had no flow control. If I hadn’t known the architecture of the hardware I was programming, I would never have been able to say: “Reading the ninth bit is impossible and needs to be cut.”
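A rough C sketch of the bind, with the buffered bytes and the single status flag simulated in plain variables (all names hypothetical):

```c
/* Several bytes arrive in one buffered read, but parity status is
 * reported through a single flag, so the error cannot be pinned to a
 * byte -- which is exactly what killed the "ninth bit as parity" trick. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Simulated receive buffer: three bytes arrived before the interrupt,
 * and one of them had its ninth bit set (reported as a parity error). */
static uint8_t rx_fifo[] = { 0x41, 0x42, 0x43 };
static int parity_error_flag = 1;   /* ONE flag for the whole buffer */

int main(void) {
    size_t n = sizeof rx_fifo;
    printf("read %zu bytes, parity error flag = %d\n", n, parity_error_flag);
    /* Which of the n bytes had the bad (ninth) bit?  The status register
     * doesn't say.  Keeping the buffer to one byte would need flow
     * control, which this protocol didn't have. */
    for (size_t i = 0; i < n; i++)
        printf("byte %zu = 0x%02X, ninth bit = ???\n", i, rx_fifo[i]);
    return 0;
}
```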
Learning computer architecture helps immensely in programming.
Without understanding the environment your program runs in, your mental model is seriously handicapped. We see the world not as it is, but as we are: through our mental model.
You won’t notice the difference in happy-path scenarios, where everything just happens to work, but it makes a crucial difference when you are working on harder problems or debugging weird bugs (i.e., real-life programming).
It’s the difference between “WTF?” and “Ah, of course!”.
1