Instead of programming the way we do, why don’t we write specifications of common tasks such as “sorting”, and then let the environment compile them to make the best use of its hardware? That way, we could ship computers with new specialized hardware, such as sorting networks, and existing code would automatically benefit.
First of all, computers already come with specialized hardware. Every laptop and desktop computer sold for quite a few years now has a specialized co-processor, a Graphics Processing Unit, that handles visual-processing algorithms such as those required by video and gaming applications. Very large computers (e.g., “supercomputers”, IBM’s System Z family) have a variety of specialized processors to handle numerical work (“vector processing”), etc.
Secondly, sorting is one of the best-researched areas of computing, and it turns out to be far too complex to build into hardware for anything beyond the simplest cases. Sorting is all about speed and correctness. Speed depends on the choice of algorithm, the type of and variation in the data, and the volume of data. Correctness depends on the type and context of the data. It is positively trivial to sort a medium-sized array of integers that fit inside the CPU’s native word size (e.g., 31 or 63 bits plus sign). Sorting character strings that contain more than mere ASCII values is extremely complex – IBM published a 500+ page book 20 years ago just discussing the issues of character sets across national boundaries and common usage. And then there’s the question of non-contiguous data – sorting a linked list involves chasing pointers all over memory.
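To make that contrast concrete, here is a minimal Python sketch (my illustration, not part of the original answer): native integers sort with no decisions to make, while even plain text immediately forces a collation policy.

```python
# Case 1: machine-word integers -- trivially sortable, no policy involved.
numbers = [42, -7, 1000, 0, 13]
print(sorted(numbers))  # plain numeric comparison: [-7, 0, 13, 42, 1000]

# Case 2: text -- even ASCII forces a choice of collation policy.
words = ["apple", "Banana"]
# Raw code-point comparison puts every uppercase letter before 'a':
print(sorted(words))                    # ['Banana', 'apple']
# A "dictionary-style" sort needs an explicit, chosen policy:
print(sorted(words, key=str.casefold))  # ['apple', 'Banana']

# Beyond ASCII it gets worse: the correct order of accented characters
# depends on locale-specific collation tables (cf. locale.strxfrm);
# German and Swedish disagree, for example -- exactly the national-
# boundary complexity the 500+ page book was about.
```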
The main issue is that sorting algorithms (1) need a lot of flexibility, and (2) would be very difficult to accelerate using hardware anyway.
For one thing, sorting algorithms are already easily fast enough to outrun the memory bandwidth of the processor – the processor already spends a large proportion of its time waiting for data to move to and from main memory. A hardware-accelerated sorting co-processor or a special sorting instruction would have exactly the same problem.
The way this memory-bandwidth problem is being addressed is with better algorithms and data structures that have better “locality”, and there’s still significant work being done in this field, particularly on “cache-oblivious algorithms”. (They are oblivious in the sense that they work well irrespective of the details of caching, whereas “cache-aware” algorithms are tuned for a particular cache line size etc.)
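As a sketch of what “cache-oblivious” means in practice (my example, under the simplifying assumption that plain top-down mergesort is representative – real cache-oblivious sorts like funnelsort are more elaborate):

```python
def merge_sort(a):
    """Top-down mergesort: a simple cache-oblivious access pattern.

    The recursion keeps halving the array, so at some depth the
    subproblem fits in whatever cache level the machine happens to
    have -- no cache-size parameter is ever specified, which is the
    'oblivious' part.  A cache-aware variant would instead switch to
    an in-cache sort at a hard-coded block size tuned per machine.
    """
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # The merge scans both halves strictly sequentially -- the kind
    # of access pattern the answer calls good "locality".
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```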
In contrast, media applications (audio and graphics, particularly 3D graphics) make use of some very repetitive structures – of course there’s flexibility, but it’s built on top of a large and very well-structured foundation. That allowed graphics acceleration to start simple with things like Blitting (a configurable but still very structured block copy operation) and line/polygon drawing. It meant that as graphics and sound processing got more sophisticated, vector operations became an obvious target for optimisation – first MMX (vectors of integers) then SSE (vectors of floats). It meant there was a pretty well defined structure for how a 3D graphics engine worked when the old fixed-function 3D graphics pipeline was moved onto 3D graphics hardware.
Yet with 3D graphics, what was once done in hardware is now done in software for flexibility – shaders are software, for example, which is how we get a massive range of different shaders giving the appearance of different materials. However, that software still works in a much more structured way than general software, and therefore can still use a much more specialised hardware platform. That’s why your graphics card can now accelerate everything from physics to cracking passwords – applications that also fit that same model and can be implemented efficiently using the instruction sets that modern graphics processors provide.
Graphics processors now are the spiritual or actual descendants of digital signal processors, which were (and probably still are) a kind of specialised processor for dealing with digital signals (e.g. audio).
Which leads to a final point – sorting algorithms can be accelerated by hardware. Depending on your data, sorting can exploit the MMX or SSE (single-instruction-multiple-data) instructions on your processor, though there’s probably not much point because of the memory-bandwidth issue – at best you might be a bit more power-efficient that way. However, you could also use your graphics hardware and benefit from the often much higher memory bandwidth of graphics cards. You won’t be able to offload every sort this way, but it’s certainly possible and probably being done where appropriate.
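The classic example of a sort that fits such hardware is a bitonic sorting network – the very structure the question mentions. Here is a sketch in plain Python (sequential here, purely to show the data flow; on a GPU or SIMD unit, all comparators in a stage would run in lockstep):

```python
def bitonic_sort(a):
    """Bitonic sorting network (length must be a power of two).

    Every compare-and-swap position is fixed in advance, independent
    of the data, and the comparators within one stage touch disjoint
    pairs -- which is what lets parallel hardware execute a whole
    stage at once.
    """
    n = len(a)
    assert n and n & (n - 1) == 0, "network size must be a power of two"
    a = list(a)
    k = 2
    while k <= n:          # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:      # comparator distance within this stage
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a
```

The catch is visible in the code: the network shape depends only on `n`, so it wastes comparisons on nearly-sorted data and handles only fixed, power-of-two sizes – which is part of why such hardware never displaced flexible software sorts.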
In other words, because of the various economic and practical issues, designing hardware specifically to accelerate a relatively narrow task like sorting doesn’t really make sense. A feature that accelerates a wider range of tasks, or that makes existing acceleration hardware applicable to a wider range of tasks, often makes much more sense.
But they do! They’re called instruction set extensions (SSE and the like).
Certain tasks have very nice implementations in software. Usually those implementations are good enough to do the job, so no specialized hardware is necessary.
If you were to build some kind of specialized hardware, it would need to serve a very wide range of applications to be worthwhile.
If you look for hardware that could make this work, my guess is you’d end up with something like FPGAs. But an FPGA makes the chip much more expensive while still not being useful for many applications.