From the quote from Wikipedia, does “address translations” here mean the translation from virtual memory address to physical memory address?
Vector processors take this concept one step further. Instead of
pipelining just the instructions, they also pipeline the data itself.
The processor is fed instructions that say not just to add A to B, but
to add all of the numbers “from here to here” to all of the numbers
“from there to there”. Instead of constantly having to decode
instructions and then fetch the data needed to complete them, the
processor reads a single instruction from memory, and it is simply
implied in the definition of the instruction itself that the
instruction will operate again on another item of data, at an address
one increment larger than the last. This allows for significant
savings in decoding time.

To illustrate what a difference this can make, consider the simple
task of adding two groups of 10 numbers together. In a normal
programming language one would write a “loop” that picked up each of
the pairs of numbers in turn, and then added them. To the CPU, this
would look something like this:

    execute this loop 10 times
      read the next instruction and decode it
      fetch this number
      fetch that number
      add them
      put the result here
    end loop
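The scalar loop above can be written out concretely. This is an illustrative sketch in Python (the list names are made up), showing how the add step is dispatched once per iteration:

```python
# Two groups of 10 numbers to add pairwise.
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
b = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

# Scalar style: one add per loop iteration, so the fetch/add/store
# sequence is carried out ten separate times.
result = []
for i in range(10):
    result.append(a[i] + b[i])

print(result)  # [11, 22, 33, 44, 55, 66, 77, 88, 99, 110]
```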
But to a vector processor, this task looks considerably different:
    read instruction and decode it
    fetch these 10 numbers
    fetch those 10 numbers
    add them
    put the results here
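The vector style can be approximated with NumPy, where a single expression stands in for the whole loop (assuming NumPy is available; the array names are again made up):

```python
import numpy as np

a = np.arange(1, 11)        # the numbers "from here to here": 1..10
b = np.arange(10, 101, 10)  # the numbers "from there to there": 10..100

# One conceptual operation: add the two ranges element-wise,
# rather than decoding and dispatching ten separate adds.
result = a + b

print(result)  # [ 11  22  33  44  55  66  77  88  99 110]
```

NumPy runs on an ordinary CPU, of course, but the programming model mirrors the vector-processor idea: one instruction that implicitly walks both operand ranges.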
There are several savings inherent in this approach. For one, only two
address translations are needed (one for each group of numbers, rather
than one per number). Depending on the architecture, this can
represent a significant saving by itself. Another saving is in
fetching and decoding the instruction itself, which has to be done
only once instead of ten times. The code itself is also smaller, which
can lead to more efficient memory use.
Thanks!
Short answer: yes.
Long answer for the benefit of those who might not be familiar with the guts of a CPU:
In a modern architecture such as amd64, the CPU needs to take a virtual address, which is basically an offset into a process's address space, and map it into the raw (physical) address space. This is likely what you meant by virtual memory (not swap space).
This separation exists both to simplify program code and to insulate userspace programs from each other. Back in the DOS days one rogue program could crash the whole operating system. Modern operating systems and CPUs prevent this by giving each program its own virtual sandbox to play in and closely guarding shared resources such as memory, hard disk, the network interface, etc.
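The mapping described above can be sketched as a toy page-table lookup: split the virtual address into a page number and an offset, look the page number up, and attach the offset to the resulting physical frame. This is a deliberately simplified model (single-level table, made-up frame numbers), not how any real MMU works:

```python
PAGE_SIZE = 4096  # 4 KiB pages, the default page size on amd64

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_addr):
    """Map a virtual address to a physical address via the page table."""
    page = virtual_addr // PAGE_SIZE   # which virtual page
    offset = virtual_addr % PAGE_SIZE  # position within that page
    frame = page_table[page]           # look up the physical frame
    return frame * PAGE_SIZE + offset

# Address 4100 is page 1, offset 4 -> frame 3, i.e. 3*4096 + 4.
print(translate(4100))  # 12292
```

Real hardware does this with multi-level page tables and caches recent translations in a TLB, which is why cutting the number of translations (as in the vector example) matters.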