MPI: initialize the array only on root

Time: 2015-01-03 15:57:01

Tags: mpj-express

I have a wavefront program that uses MPJ Express. For an n x m matrix there are n processes, and each process is assigned one row. Each process does the following:

for column = 0 to matrix_width do:
    1) y = get the value in this column from the row above (the rank - 1 process)
    2) x = get the value to our left (our row, column - 1)
    3) add (x + y) to our current column value
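
To make the data dependencies concrete, here is a minimal sequential sketch of the same recurrence in plain Java (no MPI); the class name and the n/m constants are only illustrative, and the matrix[0][0] = 1 seed mirrors the matrix[0] = 1 initialization in the full program further down.

public class WaveFrontSequential
{
    public static void main(String[] args)
    {
        final int n = 4, m = 4;             // rows, columns (illustrative sizes)
        int[][] matrix = new int[n][m];     // starts out all zeros
        matrix[0][0] = 1;                   // seed value, as in the MPI program below

        for (int row = 0; row < n; row++)
            for (int col = 0; col < m; col++)
            {
                int y = row == 0 ? 0 : matrix[row - 1][col]; // value from the row above
                int x = col == 0 ? 0 : matrix[row][col - 1]; // value to our left
                matrix[row][col] += x + y;                   // same update each rank performs on its row
            }

        for (int[] r : matrix)
            System.out.println(java.util.Arrays.toString(r));
    }
}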

So in the master process I would declare an n x m element array, and each slave process should only need to allocate an array of length m. As my solution stands, however, every process has to allocate a full n x m array for the scatter operation to work; otherwise I get a NullPointerException (if I leave the buffer null) or an out-of-bounds exception (if I instantiate it as new int[1]). I am sure there has to be a solution, otherwise every process needs as much memory as root.

I guess I need something like allocatable memory in C.

The important part is marked "MASTER" below. Normally I would pull that initialization into the if (rank == 0) test and leave the array as null (allocating no memory) on the other ranks, but that does not work.
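
Concretely, this is roughly the variant of the "MASTER" block I would like to use instead of the one in the listing below (rank, size and receiveBuffer are the same variables as in the full program); with it, the Scatter call throws the NullPointerException mentioned above on the non-root ranks:

int[] matrix = null;                 // non-root ranks: no full matrix allocated
if (rank == 0)
{
    matrix = new int[size * size];   // only root holds the full n x m matrix (zero-initialized by default)
    matrix[0] = 1;
    receiveBuffer[0] = 0;
}
// ...followed by the same Scatter call as in the listing below, which fails on the ranks where matrix is null.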

package be.ac.vub.ir.mpi;

import mpi.MPI;
// Execute: mpjrun.sh -np 2 -jar parsym-java.jar

/**
 * Parallel wavefront computation over an n x n matrix, one process per row
 */
public class WaveFront
{
    // Default program parameters
    final static int size = 4;
    private static int rank;
    private static int world_size;

    private static void log(String message)
    {
        if (rank == 0)
            System.out.println(message);
    }

    ////////////////////////////////////////////////////////////////////////////
    //// MAIN //////////////////////////////////////////////////////////////////
    ////////////////////////////////////////////////////////////////////////////
    public static void main(String[] args) throws InterruptedException
    {
        // MPI variables
        int[] matrix;         // matrix stored at process 0
        int[] row;            // each process keeps its row
        int[] receiveBuffer;  // to receive a value from ``row - 1''
        int[] sendBuffer;     // to send a value to ``row + 1''

        /////////////////
        /// INIT ////////
        /////////////////

        MPI.Init(args);
        rank = MPI.COMM_WORLD.Rank();
        world_size = MPI.COMM_WORLD.Size();

        /////////////////
        /// ALL PCS /////
        /////////////////

        // initialize data structures
        receiveBuffer = new int[1];
        sendBuffer = new int[1];
        row = new int[size];

        /////////////////
        /// MASTER //////
        /////////////////
        matrix = new int[size * size];
        if (rank == 0)
        {
            // Initialize matrix
            for (int idx = 0; idx < size * size; idx++)
                matrix[idx] = 0;
            matrix[0] = 1;
            receiveBuffer[0] = 0;
        }

        /////////////////
        /// PROGRAM /////
        /////////////////
        // distribute the rows of the matrix to the appropriate processes
        int startOfRow = rank * size;
        MPI.COMM_WORLD.Scatter(matrix, startOfRow, size, MPI.INT, row, 0, size, MPI.INT, 0);

        // For each column, each process calculates its new value.
        for (int col_idx = 0; col_idx < size; col_idx++)
        {
            // Get Y from row above us (rank - 1).
            if (rank > 0)
                MPI.COMM_WORLD.Recv(receiveBuffer, 0, 1, MPI.INT, rank - 1, 0);
            // Get the X value (left from current column).
            int x = col_idx == 0 ? 0 : row[col_idx - 1];

            // Assign the new Z value.
            row[col_idx] = row[col_idx] + x + receiveBuffer[0];

            // Send our updated value down to the next row (rank + 1).
            sendBuffer[0] = row[col_idx];
            if (rank + 1 < world_size)
                MPI.COMM_WORLD.Send(sendBuffer, 0, 1, MPI.INT, rank + 1, 0);
        }

        // At this point each process should be done so we call gather.
        MPI.COMM_WORLD.Gather(row, 0, size, MPI.INT, matrix, startOfRow, size, MPI.INT, 0);

        // Let the master show the result.
        if (rank == 0)
            for (int row_idx = 0; row_idx < size; ++row_idx)
            {
                for (int col_idx = 0; col_idx < size; ++col_idx)
                    System.out.print(matrix[size * row_idx + col_idx] + " ");
                System.out.println();
            }

        MPI.Finalize(); // Don't forget!!
    }
}

0 Answers:

There are no answers yet.