MPI collective operations from one communicator to another

I have an application that is parallelized with MPI and is split into a number of different tasks. Each processor is assigned only one task, and the group of processors assigned the same task gets its own communicator. Periodically, the tasks need to synchronize. Currently the synchronization is done through MPI_COMM_WORLD, but that has the drawback that no collective operations can be used, since there is no guarantee that the other tasks will ever reach that block of code.

As a more concrete example:

task1: equation1_solver, N nodes, communicator: mpi_comm_solver1 
task2: equation2_solver, M nodes, communicator: mpi_comm_solver2 
task3: file IO   , 1 node , communicator: mpi_comm_io 

I would like to MPI_SUM an array on task1 and have the result appear on task3. Is there an efficient way to do this? (My apologies if this is a stupid question; I don't have much experience with creating and using custom MPI communicators.)

Answers


Charles is exactly right; intercommunicators let you talk between communicators (or, to distinguish them from "normal" communicators in this context, "intra-communicators", which doesn't strike me as much of an improvement).

I have always found the use of these intercommunicators a little confusing for people who are new to them. Not the basic idea, which makes sense, but the mechanics of using (say) MPI_Reduce with one of them. The group of tasks doing the reduction specifies the root rank in the remote communicator, so far so good; but within the remote communicator, everyone who is not the root specifies MPI_PROC_NULL as the root, while the actual root specifies MPI_ROOT. The things one does for backwards compatibility, hey?
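To make that convention concrete before the full program, here is a minimal sketch of just the two calls involved (the names sendval, recvval, rank, and intercomm are hypothetical; intercomm is assumed to be an already-created intercommunicator, and rank a rank within the receiving group):

/* On the group contributing the data: "root" names the root's rank 
 * in the REMOTE group; the receive buffer is ignored on this side. */ 
MPI_Reduce(&sendval, NULL, 1, MPI_INT, MPI_SUM, 0, intercomm); 

/* On the receiving group: the actual root passes MPI_ROOT, 
 * everyone else passes MPI_PROC_NULL. */ 
int root = (rank == 0) ? MPI_ROOT : MPI_PROC_NULL; 
MPI_Reduce(NULL, &recvval, 1, MPI_INT, MPI_SUM, root, intercomm); 

The complete working example below puts this into practice: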

#include <mpi.h> 
#include <stdio.h> 


int main(int argc, char **argv) 
{ 
    int commnum = 0;   /* which of the 3 comms I belong to */ 
    MPI_Comm mycomm;  /* Communicator I belong to */ 
    MPI_Comm intercomm; /* inter-communicator */ 
    int cw_rank, cw_size; /* size, rank in MPI_COMM_WORLD */ 
    int rank;    /* rank in local communicator */ 

    MPI_Init(&argc, &argv); 
    MPI_Comm_rank(MPI_COMM_WORLD, &cw_rank); 
    MPI_Comm_size(MPI_COMM_WORLD, &cw_size); 

    if (cw_rank == cw_size-1)  /* last task is IO task */ 
     commnum = 2; 
    else { 
     if (cw_rank < (cw_size-1)/2) 
      commnum = 0; 
     else 
      commnum = 1; 
    } 

    printf("Rank %d in comm %d\n", cw_rank, commnum); 

    /* create the local communicator, mycomm */ 
    MPI_Comm_split(MPI_COMM_WORLD, commnum, cw_rank, &mycomm); 
    MPI_Comm_rank(mycomm, &rank);  /* everyone needs their local rank below */ 

    const int lldr_tag = 1; 
    const int intercomm_tag = 2; 
    if (commnum == 0) { 
     /* comm 0 needs to communicate with comm 2. */ 
     /* create an intercommunicator: */ 

     /* rank 0 in our new communicator will be the "local leader" 
     * of this communicator for the purpose of the intercommunicator */ 
     int local_leader = 0; 

     /* Now, since we're not part of the other communicator (and vice 
     * versa) we have to refer to the "remote leader" in terms of its 
     * rank in COMM_WORLD. For us, that's easy; the remote leader 
     * in the IO comm is defined to be cw_size-1, because that's the 
     * only task in that comm. But for them, it's harder. So we'll 
     * send that task the id of our local leader. */ 

     /* rank 0 of mycomm is the local leader; it tells the IO task 
     * its rank in COMM_WORLD */ 
     if (rank == 0) 
      MPI_Send(&cw_rank, 1, MPI_INT, cw_size-1, lldr_tag, MPI_COMM_WORLD); 
     /* now create the inter-communicator */ 
     MPI_Intercomm_create(mycomm, local_leader, 
           MPI_COMM_WORLD, cw_size-1, 
           intercomm_tag, &intercomm); 
    } 
    else if (commnum == 2) 
    { 
     /* there's only one task in this comm */ 
     int local_leader = 0; 
     int rmt_ldr; 
     MPI_Status s; 
     MPI_Recv(&rmt_ldr, 1, MPI_INT, MPI_ANY_SOURCE, lldr_tag, MPI_COMM_WORLD, &s); 
     MPI_Intercomm_create(mycomm, local_leader, 
           MPI_COMM_WORLD, rmt_ldr, 
           intercomm_tag, &intercomm); 
    } 


    /* now let's play with our communicators and make sure they work */ 

    if (commnum == 0) { 
     int max_of_ranks = 0; 
     /* try it internally; */ 
     MPI_Reduce(&rank, &max_of_ranks, 1, MPI_INT, MPI_MAX, 0, mycomm); 
     if (rank == 0) { 
      printf("Within comm 0: maximum of ranks is %d\n", max_of_ranks); 
      printf("Within comm 0: sum of ranks should be %d\n", max_of_ranks*(max_of_ranks+1)/2); 
     } 

     /* now try summing it to the other comm */ 
     /* the "root" parameter here is the root in the remote group */ 
     MPI_Reduce(&rank, &max_of_ranks, 1, MPI_INT, MPI_SUM, 0, intercomm); 
    } 

    if (commnum == 2) { 
     int sum_of_ranks = -999; 
     int rootproc; 

     /* get reduction data from other comm */ 

     if (rank == 0) /* am I the root of this reduce? */ 
      rootproc = MPI_ROOT; 
     else 
      rootproc = MPI_PROC_NULL; 

     MPI_Reduce(&rank, &sum_of_ranks, 1, MPI_INT, MPI_SUM, rootproc, intercomm); 

     if (rank == 0) 
      printf("From comm 2: sum of ranks is %d\n", sum_of_ranks); 
    } 

    if (commnum == 0 || commnum == 2) 
      MPI_Comm_free(&intercomm); 
    MPI_Comm_free(&mycomm); 

    MPI_Finalize(); 
    return 0; 
} 
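(If you want to try this out: it builds with any MPI C compiler wrapper such as mpicc, and it needs at least three ranks so that all three sub-communicators are non-empty, e.g. mpirun -np 7 ./a.out.)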

Thanks. The detail here is appreciated. I guess it will take a little while to digest all of this... – mgilson 2012-04-13 20:19:41


All you need is to create a new communicator that includes the nodes from both tasks that you want to communicate together. Take a look at MPI groups and communicators. You can find many examples on the net, here for instance.
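As a rough sketch of that approach under the question's layout (solver1 on the first ranks, the IO task on the last one; the names joint_comm and joint_group and the split arithmetic are my assumptions, not code from the question), building such a communicator with MPI groups might look like this:

#include <mpi.h> 
#include <stdio.h> 

int main(int argc, char **argv) 
{ 
    int wrank, wsize; 
    MPI_Init(&argc, &argv); 
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank); 
    MPI_Comm_size(MPI_COMM_WORLD, &wsize); 

    /* assumes at least 3 ranks: solver1 = ranks 0..n-1, 
     * IO task = last rank (same split as the answer above) */ 
    int n = (wsize-1)/2; 

    /* one {first, last, stride} triplet per block of ranks to include */ 
    int ranges[2][3] = { {0, n-1, 1}, {wsize-1, wsize-1, 1} }; 

    MPI_Group world_group, joint_group; 
    MPI_Comm joint_comm; 
    MPI_Comm_group(MPI_COMM_WORLD, &world_group); 
    MPI_Group_range_incl(world_group, 2, ranges, &joint_group); 

    /* collective over COMM_WORLD; tasks not in the group get MPI_COMM_NULL */ 
    MPI_Comm_create(MPI_COMM_WORLD, joint_group, &joint_comm); 

    if (joint_comm != MPI_COMM_NULL) { 
     /* ordinary intra-communicator collectives work here, e.g. summing 
      * to the IO task, which is the last rank of joint_comm */ 
     int jrank, jsize, sum; 
     MPI_Comm_rank(joint_comm, &jrank); 
     MPI_Comm_size(joint_comm, &jsize); 
     MPI_Reduce(&jrank, &sum, 1, MPI_INT, MPI_SUM, jsize-1, joint_comm); 
     if (jrank == jsize-1) 
      printf("joint comm: sum of ranks is %d\n", sum); 
     MPI_Comm_free(&joint_comm); 
    } 

    MPI_Group_free(&joint_group); 
    MPI_Group_free(&world_group); 
    MPI_Finalize(); 
    return 0; 
} 

Note that MPI_Comm_create is collective over MPI_COMM_WORLD, so every task must call it, even the ones (here, the solver2 tasks) that end up with MPI_COMM_NULL.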


Do you have any estimate of how much more efficient this is compared to doing the collective operation on 'mpi_comm_solver1' and then sending the result to the other task with a simple 'MPI_Send'? – mgilson 2012-04-13 18:16:55
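For reference, the alternative described in this comment might look something like the following sketch (reusing rank, cw_size, and mycomm from the accepted answer; data is a hypothetical local contribution). It replaces the intercommunicator reduction with an ordinary reduction plus one extra point-to-point hop:

    /* on the solver1 tasks: reduce within mycomm, then forward */ 
    int result; 
    MPI_Reduce(&data, &result, 1, MPI_INT, MPI_SUM, 0, mycomm); 
    if (rank == 0)  /* root of the solver communicator */ 
     MPI_Send(&result, 1, MPI_INT, cw_size-1, 0, MPI_COMM_WORLD); 

    /* ...and on the IO task: */ 
    MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, 
      MPI_STATUS_IGNORE); 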
