
I would like to know why I cannot access the data from the MPI_Recv command. I have an array of 100 elements that I want to split across 8 processes. Since 100/8 does not divide evenly, I compute the chunk boundaries manually (with n = 100 and 8 processes, the first four ranks get 13 elements each and the last four get 12). I then send the boundaries to each process. Each process performs an action on its chunk of the array, let's say rearranges it, and sends it back so the pieces can be reassembled into the original array. The program runs fine until I have to combine the results from the slave processes. Specifically, I cannot figure out how to access the data delivered by MPI_Irecv() for the array that has just been returned:

for (i = 1; i < numProcs; i++) {
    MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_req[i]);

    // how do I access chunk here, take the part from msgsA[i] to msgsB[i]
    // and assign it to a part of a different array?
}
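Two separate problems show up in this loop: all three MPI_Irecv calls store their handle in the same recv_req[i], so the first two request handles are lost, and the buffers are read before any receive has completed. (On top of that, the posted code never has the slaves send their start and end indices back to rank 0, so the tag and tag+1 receives can never match.) A minimal sketch of one way to restructure the collection, assuming each slave sends its boundaries followed by its full n-element array; variable names are carried over from the question, and matching on the sender's rank instead of MPI_ANY_SOURCE keeps each boundary pair with its chunk:

/* sketch, not the original code: one request slot per receive and one
   chunk buffer per sender, so nothing is overwritten while in flight */
MPI_Request reqs[3*MAXPROCS];
double *chunks = malloc((size_t)numProcs * n * sizeof(double));
int nreq = 0;

for (i = 1; i < numProcs; i++) {
    MPI_Irecv(&msgsA[i], 1, MPI_INT, i, tag, MPI_COMM_WORLD, &reqs[nreq++]);
    MPI_Irecv(&msgsB[i], 1, MPI_INT, i, tag+1, MPI_COMM_WORLD, &reqs[nreq++]);
    MPI_Irecv(&chunks[i*n], n, MPI_DOUBLE, i, tag+2, MPI_COMM_WORLD, &reqs[nreq++]);
}

/* the buffers must not be touched before the receives complete */
MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);

for (i = 1; i < numProcs; i++)
    for (j = msgsA[i]; j <= msgsB[i]; j++)
        K1[j] = chunks[i*n + j];    /* splice each slave's part into K1 */

free(chunks);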

The full code:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MAXPROCS 8    /* max number of processes */

int main(int argc, char *argv[])
{
    int i, j, n=100, numProcs, myid, tag=55, msgsA[MAXPROCS], msgsB[MAXPROCS], myStart, myEnd;
    double *chunk = malloc(n*sizeof(double));
    double *K1 = malloc(n*sizeof(double));
    MPI_Request send_req[MAXPROCS], recv_req[MAXPROCS];
    MPI_Status status[MAXPROCS];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {
        /* split the array into pieces and send the starting and finishing
           indices to the slave processes */
        for (i = 1; i < numProcs; i++) {
            myStart = (n/numProcs) * i + ((n % numProcs) < i ? (n % numProcs) : i);
            myEnd = myStart + (n/numProcs) + ((n % numProcs) > i) - 1;
            if (myEnd > n) myEnd = n;
            MPI_Isend(&myStart, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &send_req[i]);
            MPI_Isend(&myEnd, 1, MPI_INT, i, tag+1, MPI_COMM_WORLD, &send_req[i]);
        }

        /* starting and finishing values for the master process */
        myStart = (n/numProcs) * myid + ((n % numProcs) < myid ? (n % numProcs) : myid);
        myEnd = myStart + (n/numProcs) + ((n % numProcs) > myid) - 1;

        for (i = 1; i < numProcs; i++) {
            MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
            MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
            MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_req[i]);

            // --- access the chunk array here, take the part from msgsA[i]
            //     to msgsB[i] and assign it to a part of a different array
        }

        // calculate a function on fragments of K1; returns void

        /* wait until all chunks have been collected */
        MPI_Waitall(numProcs-1, &recv_req[1], &status[1]);
    }
    else {
        // calculate a function on fragments of K1; returns void

        MPI_Isend(K1, n, MPI_DOUBLE, 0, tag+2, MPI_COMM_WORLD, &send_req[0]);
        MPI_Wait(&send_req[0], &status[0]);
    }

    MPI_Finalize();
    return 0;
}
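As an aside, the myStart/myEnd arithmetic implements a standard block partition: every rank gets n/numProcs elements, and the first n % numProcs ranks get one extra. A small standalone program (my own, not part of the original) to check the boundaries it produces:

#include <stdio.h>

int main(void)
{
    int n = 100, numProcs = 8, i;

    for (i = 0; i < numProcs; i++) {
        int start = (n/numProcs) * i + ((n % numProcs) < i ? (n % numProcs) : i);
        int end   = start + (n/numProcs) + ((n % numProcs) > i) - 1;
        /* prints 13-element ranges for ranks 0..3 and 12-element
           ranges for ranks 4..7: 4*13 + 4*12 = 100 */
        printf("rank %d: [%d, %d] (%d elements)\n", i, start, end, end - start + 1);
    }
    return 0;
}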

Answer


I think I found the solution. The cause of the problem was MPI_Irecv(): with a non-blocking receive I cannot access the chunk variable before the receive completes. So the solution seems to be simply:

MPI_Status status[MAXPROCS];

for (i = 1; i < numProcs; i++) {
    MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Recv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &status[i]);

    // do whatever I need with the chunk[j] variables
}

This is not a solution; your code still has multiple problems. I suggest you replace all the non-blocking calls (e.g. 'MPI_Irecv') with blocking ones (e.g. 'MPI_Recv').
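For reference, a minimal sketch of what the commenter's suggestion might look like, with every transfer in the collection loop blocking (variable names as in the question; receiving from rank i rather than MPI_ANY_SOURCE, so the boundaries and the chunk stay matched, is my own addition):

/* hypothetical all-blocking version of the collection loop */
for (i = 1; i < numProcs; i++) {
    MPI_Recv(&msgsA[i], 1, MPI_INT, i, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Recv(&msgsB[i], 1, MPI_INT, i, tag+1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Recv(chunk, n, MPI_DOUBLE, i, tag+2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* the receives above have completed, so chunk is safe to read */
    for (j = msgsA[i]; j <= msgsB[i]; j++)
        K1[j] = chunk[j];
}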