Disclaimer: If you’re not into parallelization (on clusters), this question probably isn’t interesting to you and not worth a read.
TL;DR: I’m looking for a communication model (preferably compatible with MPI) that efficiently ensures each unique data line is processed exactly once. Read the next paragraph to see what I mean by data line.
Consider the following problem: an algorithm takes a data line (in my case, an array of integers of fixed size) and produces several new data lines, which then need to be processed by the same algorithm. Each unique data line only needs to be processed once, since the outcome is purely deterministic. The set of unique data lines is finite, so after finitely many recursive applications the algorithm has produced the whole set. The goal is to find this set.
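Sequentially, this is a standard worklist (transitive-closure) computation. A minimal sketch, with a toy successor function standing in for the real algorithm (the real `process` and the line size are of course different):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <set>
#include <vector>

using DataLine = std::array<int, 2>; // fixed-size array of integers

// Hypothetical stand-in for the real algorithm: produces successor lines.
// Here each positive component is decremented once, so the set is finite.
std::vector<DataLine> process(const DataLine& line) {
    std::vector<DataLine> out;
    for (std::size_t i = 0; i < line.size(); ++i) {
        if (line[i] > 0) {
            DataLine next = line;
            --next[i];
            out.push_back(next);
        }
    }
    return out;
}

// Worklist loop: each unique line is processed exactly once, because a
// line is only enqueued when its insertion into `seen` succeeds.
std::set<DataLine> closure(const DataLine& seed) {
    std::set<DataLine> seen{seed};
    std::vector<DataLine> todo{seed};
    while (!todo.empty()) {
        DataLine line = todo.back();
        todo.pop_back();
        for (const DataLine& next : process(line))
            if (seen.insert(next).second) // true iff not seen before
                todo.push_back(next);
    }
    return seen;
}
```

The parallelization question below is essentially: who owns `seen` and `todo`, and how are they kept consistent across processors?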
I’ve already implemented the algorithm and a parallelization of it. Since every unique data line needs to be processed, at any point in time all currently unprocessed data lines may be handled by different processors in parallel.
On the other hand, parallelization introduces the problem of keeping the different processors from doing redundant work.
A trivial implementation would send data lines to processors, collect all results, and distribute again. This can be inefficient, since the (possibly large) result lists have to be merged before any further work can be done. If all lists are returned around the same time, the master is overwhelmed by the sheer amount of data while the slaves idle until the merging is done.
My (I think improved) communication model is still basically master-slave:
This is what the Master does:
Distribute initial chunk of data lines to slaves
While there is at least one active slave
    for each active slave
        request a data line
        if the slave doesn't have a data line
            set the slave inactive, continue with the next slave
        endif
        if the data line was previously processed
            notify the slave to discard the line, continue with the next slave
        endif
        if there is an inactive slave
            send the data line to the inactive slave and mark that slave as active
            notify the sending slave not to process the data line
        else
            notify the slave to process the data line it just sent
        endif
    endfor
endwhile
Each slave does this:
while true
    wait for a request from the master
    if a data line is available
        send it
        wait for the master to say whether the line should be processed here
        process it if necessary
    else
        notify the master
        wait for the master to send a data line
        process the received line
    endif
endwhile
The slave manages its own local lists of processed and unprocessed data lines. For each request, one element is moved from the unprocessed list to the processed list. Only unique entries are stored across both lists.
In conclusion: only the master knows the full set of processed lines; a slave processes a data line if and only if the master says it wasn’t previously processed. The master doesn’t learn the results for a data line until it has requested all available lines from each slave.
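The master’s bookkeeping then boils down to a single global set of lines where insertion doubles as the “was this previously processed?” test. A minimal sketch (the line size is illustrative):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <set>

using DataLine = std::array<int, 3>;

// Global set owned by the master. A single insert() both records the
// line and reports whether it was new, so the check and the update
// cannot race with each other.
class ProcessedSet {
    std::set<DataLine> seen_;
public:
    // Returns true exactly once per unique line: true means "tell the
    // slave to process it", false means "tell the slave to discard it".
    bool markAndCheck(const DataLine& line) {
        return seen_.insert(line).second;
    }
    std::size_t size() const { return seen_.size(); }
};
```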
As far as I can tell, this model of communicating the list of processed data lines is fairly efficient, but I’m new to the world of parallelization, especially parallelization on a cluster.
What are other ways of efficiently communicating data lines among several processors to assure that each data line is processed exactly once?
This question is not strictly bound to the combination of C++ and MPI, though answers reflecting the way the Message Passing Interface works are greatly appreciated.
Disclaimer: I do not have any real MPI / clustering experience.
Also beware of my possibly incorrect use of terminology.
My basic idea is to apply a hash function to each input and output data line. From the hash, compute an integer K (between 1 and NumSlaves) that decides which of the NumSlaves slaves should process this data line.
Equivalently, each slave owns a hash range: a data line is processed by the slave whose hash range contains that line’s hash.
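A minimal sketch of this owner computation, assuming the simplest partitioning (hash modulo NumSlaves, i.e. equal-sized hash ranges) and a hypothetical per-component mixing function:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <functional>

using DataLine = std::array<int, 3>;

// Combine the component hashes (hash_combine-style mixing; any fixed,
// deterministic hash works, as long as every node uses the same one).
std::size_t hashLine(const DataLine& line) {
    std::size_t h = 0;
    for (int v : line)
        h ^= std::hash<int>{}(v) + 0x9e3779b9 + (h << 6) + (h >> 2);
    return h;
}

// Deterministic owner: every node maps the same line to the same slave,
// so no coordination is needed to decide who processes a line.
int ownerOf(const DataLine& line, int numSlaves) {
    return static_cast<int>(hashLine(line) % static_cast<std::size_t>(numSlaves));
}
```

Because `ownerOf` is a pure function of the line, the slave that produces an output line can route it directly to its owner without asking the master.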
This basic scheme requires the slaves to communicate directly with each other. The master acts as a journal keeper: it receives a copy of the computed outputs from each node but does not participate in task assignment.
When a slave derives one or more output data lines from an input data line, it computes the hash function on each output line and thereby determines, for each one, the slave that will process it.
This basic scheme does not provide fault tolerance or support a changing number of slaves at run time, unless additional distributed-processing techniques (e.g. consistent hashing) are used.
Each slave must cache all of the data lines that fall into its hash range, to avoid re-computation.
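That per-slave cache can again be a set whose insertion result says whether the incoming line is new; a sketch using an unordered set with a hypothetical custom hash for the fixed-size line:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <functional>
#include <unordered_set>

using DataLine = std::array<int, 3>;

// Hash functor so DataLine can be an unordered_set key.
struct LineHash {
    std::size_t operator()(const DataLine& line) const {
        std::size_t h = 0;
        for (int v : line)
            h ^= std::hash<int>{}(v) + 0x9e3779b9 + (h << 6) + (h >> 2);
        return h;
    }
};

// Per-slave cache over the lines in this slave's hash range: a line is
// (re)computed only if inserting it into the cache succeeds.
class SlaveCache {
    std::unordered_set<DataLine, LineHash> cache_;
public:
    // Returns true exactly once per unique incoming line.
    bool shouldProcess(const DataLine& line) {
        return cache_.insert(line).second;
    }
};
```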
This scheme may require more communication than your current scheme, but might be better at preventing duplicated work.