First of all, English is not my native language, so apologies for any grammatical mistakes. Also, this code might be inefficient, but I'm new to MPI, so please go easy on me.
This code works without splitting communicators; however, for now my aim is to scatter the first two rows to processes 0 and 1, and the last two rows to processes 2 and 3. Once I manage this, I'll apply the reverse logic column-wise.
If needed, I can post the whole code here; please let me know.
I have helper functions for allocating and freeing a contiguous 2D array:

template <typename T> T **alloc2D(int n, int m) {
  // One contiguous block of n * m elements, plus row pointers into it,
  // so &(array[0][0]) can be used directly as an MPI buffer.
  T *data = new T[n * m];
  T **array = new T *[n];
  for (int i = 0; i < n; i++)
    array[i] = &(data[i * m]);
  return array;
}

template <typename T> void free2D(T **array) {
  if (array != nullptr) {
    delete[] array[0]; // frees the contiguous data block
    delete[] array;    // frees the row pointers
  }
}
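Since alloc2D stores everything in one contiguous block, a pointer to the first element covers the whole matrix, which is what the scatter below relies on. Just to illustrate that layout assumption (not part of the actual program):

int **m = alloc2D<int>(4, 4);
// With 4 columns, row 1 starts exactly 4 ints after row 0:
assert(&(m[1][0]) == &(m[0][0]) + 4); // needs <cassert>
free2D(m);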
File reading (this needs <fstream>, <iostream>, <sstream>, and <string>):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

template <typename T>
void readMatrixFromFile(const std::string &fileName, T **matrix, int numRows,
                        int numCols) {
  std::ifstream file(fileName);
  if (!file.is_open()) {
    std::cerr << "Failed to open file: " << fileName << std::endl;
    return;
  }
  std::string line;
  int row = 0;
  while (std::getline(file, line) && row < numRows) {
    std::istringstream iss(line);
    int col = 0;
    T value; // was int; use T so the template works for other element types
    while (iss >> value && col < numCols) {
      matrix[row][col] = value;
      col++;
    }
    row++;
  }
}
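To be concrete, matrixA.txt here is a 4x4 matrix of whitespace-separated integers (the values you can see in the output further down):

0 1 2 3
4 5 6 7
8 9 10 11
12 13 14 15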
Creating the two communicators:

MPI_Comm rowGroupCom;
// rank % 2 puts world ranks {0, 2} in group 0 and {1, 3} in group 1.
int rowGroup = rank % 2;
MPI_Comm_split(MPI_COMM_WORLD, rowGroup, rank, &rowGroupCom);
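I run this with 4 processes; rank and size come from the usual boilerplate earlier in main (simplified here):

int rank, size;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);

and, to double-check who ends up where, the local coordinates inside each sub-communicator can be queried:

int localRank, localSize;
MPI_Comm_rank(rowGroupCom, &localRank);
MPI_Comm_size(rowGroupCom, &localSize);
// With 4 processes: world ranks 0 and 1 each become local rank 0
// (i.e., the scatter root) of their respective groups.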
Reading the matrix and setting up the pointers:

int **matrixA = nullptr;
int **localA = nullptr;
int *sendRowsPtr = nullptr;
// numRows / (size / 2) = 4 / 2 = 2 rows per process in this setup.
int numRowsPerProc = numRows / (size / 2);
if (rowGroup == 0) {
  // Both members of group 0 (world ranks 0 and 2) read the full matrix.
  matrixA = alloc2D<int>(numRows, numCols);
  readMatrixFromFile("matrixA.txt", matrixA, numRows, numCols);
  if (rank == 0) {
    sendRowsPtr = &(matrixA[0][0]); // first half of the rows
  } else {
    sendRowsPtr = &(matrixA[numRowsPerProc][0]); // second half
  }
} else {
  sendRowsPtr = nullptr;
}
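Every rank also needs its local receive block allocated before the scatter call below; the snippet above leaves that out, but in the full code it is roughly:

// Each process receives numRowsPerProc rows of numCols ints.
localA = alloc2D<int>(numRowsPerProc, numCols);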
and, with the help of this post, scattering the rows:

MPI_Datatype rootRow, rootRowType, finalRow, finalRowType;
if (rowGroup == 0) {
  // Send-side type: one matrix row of numCols ints, resized so
  // consecutive rows start numCols ints apart.
  MPI_Type_contiguous(numCols, MPI_INT, &rootRow);
  MPI_Type_commit(&rootRow);
  MPI_Type_create_resized(rootRow, 0, numCols * sizeof(int), &rootRowType);
  MPI_Type_commit(&rootRowType);
}
// Receive-side type, built the same way on every rank.
MPI_Type_contiguous(numCols, MPI_INT, &finalRow);
MPI_Type_commit(&finalRow);
MPI_Type_create_resized(finalRow, 0, numCols * sizeof(int), &finalRowType);
MPI_Type_commit(&finalRowType);
MPI_Scatter(sendRowsPtr, numRowsPerProc, rootRowType, &(localA[0][0]),
            numRowsPerProc, finalRowType, 0, rowGroupCom);
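The "Rank X received row matrix" lines in the output below come from a simple print loop after the scatter, roughly:

std::cout << "Rank " << rank << " received row matrix:" << std::endl;
for (int i = 0; i < numRowsPerProc; i++) {
  for (int j = 0; j < numCols; j++)
    std::cout << localA[i][j] << " ";
  std::cout << std::endl;
}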
Freeing resources:

MPI_Barrier(MPI_COMM_WORLD);
if (rowGroup == 0) { // both ranks in group 0 created the send-side types
  MPI_Type_free(&rootRow);
  MPI_Type_free(&rootRowType);
}
MPI_Type_free(&finalRow);
MPI_Type_free(&finalRowType);
if (rowGroup == 0) { // both ranks in group 0 allocated matrixA
  free2D(matrixA);
}
free2D(localA);
MPI_Comm_free(&rowGroupCom); // also release the split communicator
However, when I compile and run it under valgrind, I get this:
[endeavourOSLap:108999:0:108999] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xffffffffffffff80)
Rank 0 received row matrix:
0 1 2 3
4 5 6 7
Rank 2 received row matrix:
8 9 10 11
12 13 14 15
==== backtrace (tid: 108999) ====
0 0x000000000004b5ce ucs_event_set_fd_get() ???:0
1 0x000000000004b7aa ucs_event_set_fd_get() ???:0
2 0x000000000003cae0 __sigaction() ???:0
3 0x0000000000133fe3 ompi_coll_tuned_scatter_intra_dec_fixed() ???:0
4 0x00000000000d5729 MPI_Scatter() ???:0
5 0x0000000000002702 main() /home/bestsithineu/Documents/cse574/final/trial.cpp:118
6 0x0000000000025c88 __libc_init_first() ???:0
7 0x0000000000025d4c __libc_start_main() ???:0
8 0x00000000000022f5 _start() ???:0
=================================
--------------------------------------------------------------------------
prterun noticed that process rank 1 with PID 108999 on node endeavourOSLap exited on
signal 11 (Segmentation fault).
--------------------------------------------------------------------------
Desired output:
Rank 0 received row matrix:
0 1 2 3
4 5 6 7
Rank 1 received row matrix:
0 1 2 3
4 5 6 7
Rank 2 received row matrix:
8 9 10 11
12 13 14 15
Rank 3 received row matrix:
8 9 10 11
12 13 14 15