Basically I have a parallel loop with a couple of inherently serial variables that I pre-calculate. However, I want to be able to kick off a given iteration of the for loop once its serial variables have been calculated.
Essentially I want something like this:
int num_completed = 0;
int serial_values[100];
#pragma omp parallel
{
    if (omp_get_thread_num() == 0) {
        for (int i = 0; i < 100; i++) {
            serial_values[i] = i;
            num_completed++;
        }
    }
    #pragma omp for
    for (int j = 0; j < 100; j++) {
        while (true) {
            if (j < num_completed)
                break;
        }
        int serial = serial_values[j];
        // do parallel loop
    }
}
This does work, but the for loop allocates work to thread 0, even though it’s tied up doing the serial calculations. Mostly that means it is slower because thread 0 has to calculate its share of the parallel loop in addition to the serial variables.
Also, I know the spin lock isn’t great, but I couldn’t think of anything better off the top of my head; if you have suggestions I’d welcome those too.
I’ve already tried using #pragma omp single nowait and it behaves the same way. I’ve also tried #pragma omp sections, but each section is meant to be executed by a single thread, not in parallel.
You are already using a spin lock to make the threads in the parallel loop wait for the serial computation to proceed far enough. One way to move forward would be to leverage that, plus OpenMP scheduling parameters, to make the threads in the parallel loop perform the serial computation. That might look something like this:
int num_completed = 0;
int serial_values[100];
#pragma omp parallel for schedule(monotonic:static, 1)
for (int j = 0; j < 100; j++) {
    int serial;
    while (1) {
        int done;
        #pragma omp atomic read
        done = num_completed;   // atomic read so the spin wait does not race on the counter
        if (j == done) {
            // serial computation:
            serial = j;
            serial_values[j] = serial;
            #pragma omp atomic update
            num_completed += 1;
            break;
        }
    }
    // do parallel loop
}
The schedule(monotonic:static, 1) is relatively important here, because it ensures that the iterations of the parallel loop are split among the threads in single-iteration chunks, and that each thread executes its assigned chunks in logical iteration order. You could also use schedule(monotonic:dynamic) or, equivalently, schedule(monotonic:dynamic, 1). With chunks larger than one iteration, you could have threads delayed unnecessarily long (and that may be an issue for your original code). With chunks executed out of order, you would likely get deadlocks on those spin locks.
I’ve also tried #pragma omp sections but each section is meant to be executed by a single thread, not in parallel.
Yes, but you can put a nested parallel region inside one of the sections. I think this would probably be inferior to the approach described above, but it could yield a structure more similar to your original code. Something like this, for example:
int num_completed = 0;
int serial_values[100];
#pragma omp parallel num_threads(2)
{
    #pragma omp sections
    {
        #pragma omp section
        {
            for (int i = 0; i < 100; i++) {
                serial_values[i] = i;
                num_completed++;
            }
        }
        #pragma omp section
        {
            #pragma omp parallel for schedule(monotonic:static, 1)
            for (int j = 0; j < 100; j++) {
                while (true) {
                    if (j < num_completed) break;
                }
                int serial = serial_values[j];
                // do parallel loop ...
            }
        }
    }
}
You can put a num_threads() clause on that parallel for loop too, if you like. Note that, depending on your OpenMP implementation and settings, you may need to enable nested parallelism (for example via omp_set_max_active_levels()) for the inner region to actually fork additional threads.
The approach based on ordered regions as suggested by @Joachim looks appropriate here:
int serial_values[100];
#pragma omp parallel for ordered schedule(static, 1)
for (int i = 0; i < 100; i++) {
    #pragma omp ordered
    {
        serial_values[i] = i;
    }
    int serial = serial_values[i];
    // do parallel stuff
}
An omp ordered directive is similar to an omp critical one: it can be executed by only one thread at a time, but in addition iteration i has to be executed before iteration j if i < j. You don’t even need the num_completed variable.
The scheme here will be the following with 4 threads:
Thread0: S0->P0-------->S4->P4-------->S8->P8-------->
Thread1: S1->P1-------->S5->P5-------->
Thread2: S2->P2-------->S6->P6-------->
Thread3: S3->P3-------->S7->P7-------->
There can be no overlap between the serial parts because they are protected in an ordered region. At the beginning of the execution some threads are inactive, but there’s no way to avoid that.
If you have a lot of iterations, then using larger chunks can help.
This approach is less efficient if the time spent in the serial part is “important” and/or if there are a lot of threads.
A task-based solution seems somewhat easier; something like this:
/*
 * An example solution using tasks for the StackOverflow question
 * /questions/78978654/how-to-combine-omp-section-and-for/
 */
#include <omp.h>

/* Assumed to be defined elsewhere: */
int compute_serialValue(int i);
void doParallelComputation(int j, int serial);

static void problem(void)
{
#define PROBLEM_SIZE 100 /* A literal in the original :-( */
#define CHUNK_SIZE 7     /* As an example that doesn't divide
                          * the problem size, to show why
                          * some of the code is needed. */
    int serial_values[PROBLEM_SIZE];
#pragma omp parallel
    {
#pragma omp single nowait
        {
            int lastBase = 0;
            for (int i = 0; i < PROBLEM_SIZE; i++) {
                serial_values[i] = compute_serialValue(i);
                if ((i == PROBLEM_SIZE-1) || ((i%CHUNK_SIZE) == CHUNK_SIZE-1)) {
#pragma omp task
                    {
                        for (int j = lastBase; j <= i; j++) {
                            int serial = serial_values[j];
                            doParallelComputation(j, serial);
                        }
                    }
                    lastBase = i+1;
                }
            }
        }
    }
}
Obviously the size of the chunks is a tuning issue.