
I'm just learning MPI with a ring topology in C++. I wrote a C++ program that computes a 10-dimensional Monte Carlo integral and finds its mean and local maximum. My goal is to pass each processor's local maximum around the "ring". But right now the ring topology sends and receives different values even though every process passes the same value. Why?

I still haven't figured out how to store the maximum values generated by the different processors in an array at runtime, so I compiled and ran the code once and filled an array with those values by hand.

Next I want to pass each array value around the ring and eventually compute the global maximum. For now I'm only trying to pass the first array value, and I see that the processes send the same value but receive different ones. I'm not sure whether C++ uses the MPI library differently; I followed online MPI tutorials written for C and used the same constructs in my C++ code.

Here is the code:

#include <iostream> 
#include <fstream> 
#include <iomanip> 
#include <cmath> 
#include <cstdlib> 
#include <ctime> 
#include <mpi.h> 
using namespace std; 


//define multivariate function F(x1, x2, ...xk)    

double f(double x[], int n) 
{ 
    double y; 
    int j; 
    y = 0.0; 

    for (j = 0; j < n-1; j = j+1) 
     { 
     y = y + exp(-pow((1-x[j]),2)-100*(pow((x[j+1] - pow(x[j],2)),2))); 

     }  

    return y; 
} 

//define function for Monte Carlo Multidimensional integration 

double int_mcnd(double(*fn)(double[],int),double a[], double b[], int n, int m) 

{ 
    double r, x[n], v; 
    int i, j; 
    r = 0.0; 
    v = 1.0; 
    // initial seed value (use system time) 
    //srand(time(NULL)); 


    // step 1: calculate the common factor V 
    for (j = 0; j < n; j = j+1) 
     { 
     v = v*(b[j]-a[j]); 
     } 

    // step 2: integration 
    for (i = 1; i <= m; i=i+1) 
    { 
     // calculate random x[] points 
     for (j = 0; j < n; j = j+1) 
     { 
      x[j] = a[j] + (rand()) /((RAND_MAX/(b[j]-a[j]))); 
     }   
     r = r + fn(x,n); 
    } 
    r = r*v/m; 

    return r; 
} 




double f(double[], int); 
double int_mcnd(double(*)(double[],int), double[], double[], int, int); 



int main(int argc, char **argv) 
{  

    int rank, size; 

    MPI_Init (&argc, &argv);  // initializes MPI 
    MPI_Comm_rank (MPI_COMM_WORLD, &rank); // get current MPI-process ID: 0, 1, ... 
    MPI_Comm_size (MPI_COMM_WORLD, &size); // get the total number of processes 


    /* define how many integrals */ 
    const int n = 10;  

    double b[n] = {5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0,5.0};      
    double a[n] = {-5.0, -5.0, -5.0, -5.0, -5.0, -5.0, -5.0, -5.0, -5.0,-5.0}; 

    double result, mean; 
    int m; 

    const unsigned int N = 5; 
    double max = -1; 


    cout.precision(6); 
    cout.setf(ios::fixed | ios::showpoint); 


    srand(time(NULL) * rank); // each MPI process gets a unique seed 

    m = 4;    // initial number of intervals 

    // convert command-line input to N = number of points 
    //N = atoi(argv[1]); 


    for (unsigned int i=0; i <=N; i++) 
    { 
     result = int_mcnd(f, a, b, n, m); 
     mean = result/(pow(10,10)); 

     if(mean > max) 
     { 
     max = mean; 
     } 
     //cout << setw(10) << m << setw(10) << max << setw(10) << mean << setw(10) << rank << setw(10) << size <<endl; 
     m = m*4; 
    } 

    //cout << setw(30) << m << setw(30) << result << setw(30) << mean <<endl; 
    printf("Process %d of %d mean = %1.5e\n and local max = %1.5e\n", rank, size, mean, max); 


    double max_store[4] = {4.43095e-02, 5.76586e-02, 3.15962e-02, 4.23079e-02}; 

    double send_junk = max_store[0]; 
    double rec_junk; 
    MPI_Status status; 


    // This next if-statement implements the ring topology 
    // the last process ID is size-1, so the ring topology is: 0->1, 1->2, ... size-1->0 
    // rank 0 starts the chain of events by passing to rank 1 
    if(rank==0) { 
    // only the process with rank ID = 0 will be in this block of code. 
    MPI_Send(&send_junk, 1, MPI_INT, 1, 0, MPI_COMM_WORLD); // send data to process 1 
    MPI_Recv(&rec_junk, 1, MPI_INT, size-1, 0, MPI_COMM_WORLD, &status); // receive data from process size-1 
    } 
    else if(rank == size-1) { 
    MPI_Recv(&rec_junk, 1, MPI_INT, rank-1, 0, MPI_COMM_WORLD, &status); // receive data from process rank-1 (its "left" neighbor) 
    MPI_Send(&send_junk, 1, MPI_INT, 0, 0, MPI_COMM_WORLD); // send data to its "right neighbor", rank 0 
    } 
    else { 
    MPI_Recv(&rec_junk, 1, MPI_INT, rank-1, 0, MPI_COMM_WORLD, &status); // receive data from process rank-1 (its "left" neighbor) 
    MPI_Send(&send_junk, 1, MPI_INT, rank+1, 0, MPI_COMM_WORLD); // send data to its "right neighbor" (rank+1) 
    } 
    printf("Process %d send %1.5e\n and recieved %1.5e\n", rank, send_junk, rec_junk); 


    MPI_Finalize(); // programs should always perform a "graceful" shutdown 
    return 0; 
} 

I compile and run with:

mpiCC -std=c++11 -o hg test_code.cpp 
mpirun -np 4 ./hg 

The output shows a different mean and local max for each process, which looks fine, but what worries me are the sent and received values:

Process 2 of 4 mean = 2.81817e-02 
and local max = 5.61707e-02 
Process 0 of 4 mean = 2.59220e-02 
and local max = 4.43095e-02 
Process 3 of 4 mean = 2.21734e-02 
and local max = 4.30539e-02 
Process 1 of 4 mean = 2.87403e-02 
and local max = 6.58530e-02 
Process 1 sent 4.43095e-02 
and received 2.22181e-315 
Process 2 sent 4.43095e-02 
and received 6.90945e-310 
Process 3 sent 4.43095e-02 
and received 6.93704e-310 
Process 0 sent 4.43095e-02 
and received 6.89842e-310 

I think I'm mixing up the C and C++ usage of MPI, and I'd appreciate any suggestions. I also haven't found any good C++ MPI tutorials online, so a good revision of my code or a link to a tutorial would be very helpful. Thanks.

Answer


The third argument of MPI_Recv and MPI_Send is the data type. You are sending a double, but you set the data type to MPI_INT. On most systems an int is 4 bytes and a double is 8 bytes, so half of the bytes of rec_junk are left uninitialized.
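
A quick way to confirm the size mismatch on your machine (a minimal check, nothing MPI-specific, assuming a typical platform):

#include <cstdio> 

int main() 
{ 
    // On most platforms this prints 4 and 8: a receive described as MPI_INT 
    // fills only four of the eight bytes of a double-sized buffer, so the 
    // rest of rec_junk keeps whatever garbage happened to be there before. 
    std::printf("sizeof(int) = %zu, sizeof(double) = %zu\n", 
                sizeof(int), sizeof(double)); 
    return 0; 
} 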

To fix it, simply change MPI_INT to MPI_DOUBLE in every call to MPI_Recv and MPI_Send.
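
For reference, here is a minimal sketch of the corrected exchange block with that fix applied (only the send/receive part changes; max_store, rank, and size are as in the question):

double send_junk = max_store[0]; 
double rec_junk; 
MPI_Status status; 

if(rank==0) { 
    // rank 0 starts the ring: send to rank 1, then receive from the last rank 
    MPI_Send(&send_junk, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD); 
    MPI_Recv(&rec_junk, 1, MPI_DOUBLE, size-1, 0, MPI_COMM_WORLD, &status); 
} 
else if(rank == size-1) { 
    // last rank: receive from its left neighbor, then close the ring back to rank 0 
    MPI_Recv(&rec_junk, 1, MPI_DOUBLE, rank-1, 0, MPI_COMM_WORLD, &status); 
    MPI_Send(&send_junk, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD); 
} 
else { 
    // middle ranks: receive from the left (rank-1), pass along to the right (rank+1) 
    MPI_Recv(&rec_junk, 1, MPI_DOUBLE, rank-1, 0, MPI_COMM_WORLD, &status); 
    MPI_Send(&send_junk, 1, MPI_DOUBLE, rank+1, 0, MPI_COMM_WORLD); 
} 

As an aside: if the end goal is only the global maximum, a single collective call avoids the hand-rolled ring entirely, e.g. MPI_Allreduce(&max, &global_max, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD); (global_max here is a hypothetical receiving variable, not part of the original code).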