MPI + InfiniBand: too many connections

Date: 2014-10-26 18:19:48

Tags: mpi infiniband

I am running an MPI application on a cluster, using 4 nodes with 64 cores each. The application performs an all-to-all communication pattern.

Executing the application as follows runs fine:

$:mpirun -npernode 36 ./Application

Adding one more process per node makes the application crash:

$:mpirun -npernode 37 ./Application

--------------------------------------------------------------------------
A process failed to create a queue pair. This usually means either
the device has run out of queue pairs (too many connections) or
there are insufficient resources available to allocate a queue pair
(out of memory). The latter can happen if either 1) insufficient
memory is available, or 2) no more physical memory can be registered
with the device.

For more information on memory registration see the Open MPI FAQs at:
http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages

Local host:             laser045
Local device:           qib0
Queue pair type:        Reliable connected (RC)
--------------------------------------------------------------------------
[laser045:15359] *** An error occurred in MPI_Issend
[laser045:15359] *** on communicator MPI_COMM_WORLD
[laser045:15359] *** MPI_ERR_OTHER: known error not in list
[laser045:15359] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
[laser040:49950] [[53382,0],0]->[[53382,1],30] mca_oob_tcp_msg_send_handler: writev failed: Connection reset by peer (104) [sd = 163]
[laser040:49950] [[53382,0],0]->[[53382,1],21] mca_oob_tcp_msg_send_handler: writev failed: Connection reset by peer (104) [sd = 154]
--------------------------------------------------------------------------
mpirun has exited due to process rank 128 with PID 15358 on
node laser045 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[laser040:49950] 4 more processes have sent help message help-mpi-btl-openib-cpc-base.txt / ibv_create_qp failed
[laser040:49950] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[laser040:49950] 4 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal

Edit: Added some source code of the all-to-all communication pattern:

// Assumes MPI has been initialized, rank/size were obtained via
// MPI_Comm_rank/MPI_Comm_size, and the data buffers are set up elsewhere.
std::vector<MPI_Request> requests;

// Send data to all other ranks
for(unsigned i = 0; i < (unsigned)size; ++i){
    if((unsigned)rank == i){
        continue;
    }

    MPI_Request request;
    MPI_Issend(&data, dataSize, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &request);
    requests.push_back(request);
}

// Recv data from all other ranks
for(unsigned i = 0; i < (unsigned)size; ++i){
    if((unsigned)rank == i){
       continue;
    }

    MPI_Status status;
    MPI_Recv(&recvData, recvDataSize, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &status);
}

// Finish communication operations
for(MPI_Request &r: requests){
    MPI_Status status;
    MPI_Wait(&r, &status);
}

Is there anything I can do as a cluster user, or is there some advice I could pass on to the cluster administrator?

2 Answers:

Answer 0 (score: 2):

The error is related to the buffer sizes of the MPI receive queues, discussed here:

http://www.open-mpi.org/faq/?category=openfabrics#ib-xrc

The following environment setting solved my problem:

$ export OMPI_MCA_btl_openib_receive_queues="P,128,256,192,128:S,65536,256,192,128"
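Equivalently, the same receive-queue specification can be passed as an MCA parameter on the command line (the queue values are the ones from the export above, taken from this answer, not universally recommended settings):

```shell
mpirun --mca btl_openib_receive_queues "P,128,256,192,128:S,65536,256,192,128" \
       -npernode 37 ./Application
```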

Answer 1 (score: 2):

The mca_oob_tcp_msg_send_handler error lines may indicate that the node corresponding to a receiving rank died (out of memory or received a SIGSEGV):

http://www.open-mpi.org/faq/?category=tcp#tcp-connection-errors

The OOB (out-of-band) framework in Open MPI is used for control messages, not for the application's messages. Indeed, messages typically go through a byte transfer layer (BTL) such as self, sm, vader, openib (InfiniBand), and so on.

The output of 'ompi_info -a' is useful in this regard.
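For instance, the receive-queue specification currently in effect can be inspected like this (a sketch; the exact flags and parameter names vary across Open MPI versions):

```shell
ompi_info --all | grep btl_openib_receive_queues
```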

Finally, the question does not specify whether the InfiniBand hardware vendor is Mellanox, so the XRC option may not work (for example, Intel/QLogic InfiniBand does not support this option).