After googling for a couple of hours, I came to the conclusion that all sorting algorithms are inherently sequential: they can be data-distributed, but not task-distributed.
Is there any sorting algorithm that is not inherently sequential and is task-distributable?
You overlooked sleep sort, which is task-distributed. Here is an implementation for the Bourne shell:
input="10 4 5 1"
for n in $input; do
    (sleep "$n"; echo "$n") &
done
wait
When the program completes, the sorted list of numbers is printed on standard output. (The final wait blocks until all subprocesses have finished, so the script does not exit before the last number is printed.)
Algorithms that are based on trial and error should fit your description, as the trials can be done in parallel.
Examples would be:
- Bogosort, which shuffles the data until it happens to be sorted
- StackSort, which searches Stack Overflow for sort algorithms and runs them one by one until a correct answer is returned
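Joke or not, the trial-and-error pattern really is task-distributable: each trial is independent, so workers can run them concurrently and the first success wins. A minimal sketch of a parallel bogosort, assuming GNU `shuf` is available (file names are made up for illustration):

```shell
#!/bin/sh
# Parallel "bogosort": several workers shuffle independently; the first
# worker to stumble on a sorted permutation publishes the answer.
# A tiny input keeps the expected number of trials sane.
printf '%s\n' 3 1 2 > data.txt
rm -f answer.txt

worker() {
    while [ ! -f answer.txt ]; do
        shuf data.txt > "try.$1"               # one independent trial
        if sort -n -c "try.$1" 2>/dev/null; then
            mv "try.$1" answer.txt             # first sorted shuffle wins
        fi
    done
}

for i in 1 2 3 4; do worker "$i" & done        # four concurrent trial streams
wait
rm -f try.*
cat answer.txt
```

Each worker loops until some worker has produced `answer.txt`; `sort -c` merely checks sortedness without sorting, so a trial is cheap to verify.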
Joking aside:
Mergesort certainly is a good candidate for a parallel implementation.
A general algorithm for parallel sorting could look like this:
- Divide the data into k chunks (where k is the number of processing agents).
- Sort each chunk in parallel.
- Merge the sorted chunks.
As @amon points out in his comment, the third step can even be executed partly in parallel with step 2, if the selected sort algorithm delivers the smallest elements first.
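The three steps above can be sketched in a POSIX shell using the standard sort utility (the sample input and file names are made up for illustration; `sort -m` merges already-sorted files without re-sorting them):

```shell
#!/bin/sh
# Step 1: divide the input into k chunks (k = 2 here, 4 lines per chunk).
printf '%s\n' 10 4 5 1 9 2 8 3 > input.txt
split -l 4 input.txt chunk_

# Step 2: sort each chunk in parallel, one background job per chunk.
for f in chunk_*; do
    sort -n "$f" -o "$f.sorted" &
done
wait                                   # block until all chunk sorts finish

# Step 3: merge the already-sorted chunks (-m merges without re-sorting).
sort -n -m chunk_*.sorted > output.txt
cat output.txt
```

Here the shell's background jobs stand in for the processing agents; in a real system each chunk would go to a separate core or machine.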
See Knuth’s discussion of polyphase merge sort with replacement selection in volume 3 of The Art of Computer Programming. Or google “external sort”. These sorts go back to the early days of computing, when machines often didn’t have enough memory to sort all the data but had multiple tape drives attached. If you replace tape drives with inter-process data flows, the algorithms still work wonderfully. I implemented this algorithm in the early 1980s for a system that did not have an OS-supplied sorting program.
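With temporary files standing in for the tape drives, the external-sort idea can be sketched like this, assuming POSIX `split` and `sort` (sample data and file names are illustrative; this is plain run generation plus a k-way merge, not true replacement selection):

```shell
#!/bin/sh
# Pretend big.txt is too large to sort in memory: cut it into small
# "runs", the way run generation would write them to tape.
printf '%s\n' 7 3 9 1 6 2 8 4 5 0 > big.txt
split -l 3 big.txt run_                # fixed-size runs instead of tapes

# Sort each run individually -- each one fits in memory on its own.
for r in run_*; do
    sort -n "$r" -o "$r"
done

# Merge all runs in a single k-way pass, as the tape merge would.
sort -n -m run_* > big.sorted
cat big.sorted
```

Only one run is ever held in memory at a time during the sort phase, and the merge phase streams through the runs, which is exactly what made the scheme workable on memory-starved machines.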