
MPI Scatterv: how to deal with the root process?

What I am still not quite sure about is what happens to the root process in MPI Scatter/Scatterv.

If I split an array in my code, do I need to include the root process in the number of receivers (so that sendCounts has size nproc), or is it excluded?

In my example code for matrix multiplication, I still get an error: one of the approaches runs into abnormal behavior and terminates the program prematurely:

void readMatrix(); 

double StartTime; 
int rank, nproc, proc; 
//double matrix_A[N_ROWS][N_COLS]; 
double **matrix_A; 
//double matrix_B[N_ROWS][N_COLS]; 
double **matrix_B; 
//double matrix_C[N_ROWS][N_COLS]; 
double **matrix_C; 
int low_bound = 0; //low bound of the number of rows of each process 
int upper_bound = 0; //upper bound of the number of rows of [A] of each process 
int portion = 0; //portion of the number of rows of [A] of each process 


int main (int argc, char *argv[]) { 

    MPI_Init(&argc, &argv); 
    MPI_Comm_size(MPI_COMM_WORLD, &nproc); 
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); 

    matrix_A = (double **)malloc(N_ROWS * sizeof(double*)); 
    for(int i = 0; i < N_ROWS; i++) matrix_A[i] = (double *)malloc(N_COLS * sizeof(double)); 
    matrix_B = (double **)malloc(N_ROWS * sizeof(double*)); 
    for(int i = 0; i < N_ROWS; i++) matrix_B[i] = (double *)malloc(N_COLS * sizeof(double)); 
    matrix_C = (double **)malloc(N_ROWS * sizeof(double*)); 
    for(int i = 0; i < N_ROWS; i++) matrix_C[i] = (double *)malloc(N_COLS * sizeof(double)); 

    int *counts = new int[nproc](); // array to hold number of items to be sent to each process 

    // -------------------> If we have more than one process, we can distribute the work through scatterv 
    if (nproc > 1) { 

     // -------------------> Process 0 initalizes matrices and scatters the portions of the [A] Matrix 
     if (rank==0) { 
      readMatrix(); 
     } 
     StartTime = MPI_Wtime(); 
     int counter = 0; 
     for (int proc = 0; proc < nproc; proc++) { 
      counts[proc] = N_ROWS/nproc ; 
      counter += N_ROWS/nproc ; 
     } 
     counter = N_ROWS - counter; 
     counts[nproc-1] = counter; 
     //set bounds for each process 
     low_bound = rank*(N_ROWS/nproc); 
     portion = counts[rank]; 
     upper_bound = low_bound + portion; 
     printf("I am process %i and my lower bound is %i and my portion is %i and my upper bound is %i \n",rank,low_bound, portion,upper_bound); 
     //scatter the work among the processes 
     int *displs = new int[nproc](); 
     displs[0] = 0; 
     for (int proc = 1; proc < nproc; proc++) displs[proc] = displs[proc-1] + (N_ROWS/nproc); 
     MPI_Scatterv(matrix_A, counts, displs, MPI_DOUBLE, &matrix_A[low_bound][0], portion, MPI_DOUBLE, 0, MPI_COMM_WORLD); 
     //broadcast [B] to all the slaves 
     MPI_Bcast(&matrix_B, N_ROWS*N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD); 


     // -------------------> Everybody does their work 
     for (int i = low_bound; i < upper_bound; i++) {//iterate through a given set of rows of [A] 
      for (int j = 0; j < N_COLS; j++) {//iterate through columns of [B] 
       for (int k = 0; k < N_ROWS; k++) {//iterate through rows of [B] 
        matrix_C[i][j] += (matrix_A[i][k] * matrix_B[k][j]); 
       } 
      } 
     } 

     // -------------------> Process 0 gathers the work 
     MPI_Gatherv(&matrix_C[low_bound][0],portion,MPI_DOUBLE,matrix_C,counts,displs,MPI_DOUBLE,0,MPI_COMM_WORLD); 
    } 
... 

Your 'matrix_A' is a 'double **', so it does not fit the profile of the first argument of 'MPI_Scatterv()'. 'matrix_A[0]' might, but since you allocated the memory with a loop of 'malloc()' calls, it is not stored contiguously and therefore cannot be used that way. – Gilles

Answer


The root process also takes part on the receiving side. If you are not interested in that, just set sendcounts[root] = 0.

See MPI_Scatterv for the details of exactly which values you have to pass.
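
For illustration, here is a minimal, self-contained sketch (with a hypothetical chunk size of 4 elements) where the root scatters data to every other rank but keeps nothing for itself, simply by leaving sendcounts[root] at 0:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    const int chunk = 4;                 /* hypothetical number of elements per non-root rank */
    int *sendcounts = (int *)calloc(nproc, sizeof(int));
    int *displs     = (int *)calloc(nproc, sizeof(int));
    for (int p = 1; p < nproc; p++) {    /* rank 0 is the root; sendcounts[0] stays 0 */
        sendcounts[p] = chunk;
        displs[p]     = (p - 1) * chunk;
    }

    double *sendbuf = NULL;
    if (rank == 0) {                     /* only the root needs the full send buffer */
        sendbuf = (double *)malloc((nproc - 1) * chunk * sizeof(double));
        for (int i = 0; i < (nproc - 1) * chunk; i++) sendbuf[i] = (double)i;
    }

    double recvbuf[4];                   /* holds 'chunk' elements; stays empty on the root */
    MPI_Scatterv(sendbuf, sendcounts, displs, MPI_DOUBLE,
                 recvbuf, sendcounts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank != 0)
        printf("rank %d received first element %f\n", rank, recvbuf[0]);

    free(sendcounts); free(displs); free(sendbuf);
    MPI_Finalize();
    return 0;
}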

However, be careful with what you are doing. I strongly recommend changing the way you allocate your matrix to a 1D array, using a single malloc like this:

double* matrix = (double*) malloc(N_ROWS * N_COLS * sizeof(double)); 
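
With that layout, element (i, j) lives at matrix[i*N_COLS + j], and the Scatterv counts and displacements are expressed directly in elements (rows times N_COLS). A rough sketch of the distribution, reusing nproc, rank, N_ROWS and N_COLS from your code (local_A is just an illustrative name):

int rows_per_proc = N_ROWS / nproc;          /* last rank also takes the remainder */
int *counts = (int *)malloc(nproc * sizeof(int));
int *displs = (int *)malloc(nproc * sizeof(int));
for (int p = 0; p < nproc; p++) {
    int rows = (p == nproc - 1) ? N_ROWS - p * rows_per_proc : rows_per_proc;
    counts[p] = rows * N_COLS;               /* counts are in MPI_DOUBLE elements */
    displs[p] = p * rows_per_proc * N_COLS;
}

double *local_A = (double *)malloc(counts[rank] * sizeof(double));
MPI_Scatterv(matrix, counts, displs, MPI_DOUBLE,
             local_A, counts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);
/* row r, column c of the local block is local_A[r * N_COLS + c] */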

If you still want to use a 2D array, then you might need to define your type as an MPI derived datatype.
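
For example, a row type built with MPI_Type_contiguous lets you express counts in whole rows instead of doubles. This sketch reuses the matrix, counts, displs and rank names from above and assumes the rows sit in one contiguous block of memory (local_rows is an illustrative receive buffer of counts[rank] rows):

MPI_Datatype row_type;
MPI_Type_contiguous(N_COLS, MPI_DOUBLE, &row_type);   /* one row = N_COLS doubles */
MPI_Type_commit(&row_type);

/* counts[] and displs[] are now expressed in rows, not in doubles */
MPI_Scatterv(matrix, counts, displs, row_type,
             local_rows, counts[rank], row_type, 0, MPI_COMM_WORLD);

MPI_Type_free(&row_type);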

The datatype you are passing is not valid if you want to send more than one row in a single MPI transfer. With MPI_DOUBLE you are telling MPI that the buffer contains a contiguous array of MPI_DOUBLE values. Since you allocate your 2D array with multiple malloc calls, your data is not contiguous.
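
If you want to keep the matrix[i][j] syntax and still get contiguous storage, one common pattern (a sketch, assuming N_ROWS and N_COLS as in your code) is to allocate a single data block plus a separate array of row pointers into it:

double *data = (double *)malloc(N_ROWS * N_COLS * sizeof(double)); /* one contiguous block */
double **matrix = (double **)malloc(N_ROWS * sizeof(double *));
for (int i = 0; i < N_ROWS; i++)
    matrix[i] = data + i * N_COLS;          /* row pointers into the block */

/* matrix[i][j] still works, and &matrix[0][0] (== data) can be passed to MPI */
MPI_Bcast(&matrix[0][0], N_ROWS * N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);

free(matrix);
free(data);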