0MQ: Segmentation fault when passing the context to a thread pool

Date: 2017-05-25 08:07:02

Tags: c++ zeromq

I am building a small program (here), in which main() sends the same msg to all worker threads and each worker simply prints the msg.

I believe I have followed this tutorial from the official guide exactly.

I pass the 0MQ context into the threads through a void pointer and cast it back to zmq::context_t * inside the thread, yet I still get a segmentation fault. Here is the core dump information from GDB:

Core was generated by `./test 1'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  zmq::ctx_t::check_tag (this=this@entry=0xabadcafe) at src/ctx.cpp:89
89  src/ctx.cpp: No such file or directory.
[Current thread is 1 (Thread 0x7fcdaae3f700 (LWP 1428))]
(gdb) bt
#0  zmq::ctx_t::check_tag (this=this@entry=0xabadcafe) at src/ctx.cpp:89
#1  0x00007fcdad17c275 in zmq_socket (ctx_=0xabadcafe, type_=7) at src/zmq.cpp:245
#2  0x0000000000402da1 in zmq::socket_t::init (this=0x7fcdaae3edf0, context_=..., type_=7) at /usr/include/zmq.hpp:649
#3  0x0000000000402ac3 in zmq::socket_t::socket_t (this=0x7fcdaae3edf0, context_=..., type_=7) at /usr/include/zmq.hpp:463
#4  0x0000000000401f18 in task1 (arg=0x170ac20) at test.cpp:21
#5  0x00000000004050fe in std::_Bind_simple<void (*(void*))(void*)>::_M_invoke<0ul>(std::_Index_tuple<0ul>) (this=0x170e0d8)
    at /usr/include/c++/5/functional:1531
#6  0x0000000000405008 in std::_Bind_simple<void (*(void*))(void*)>::operator()() (this=0x170e0d8) at /usr/include/c++/5/functional:1520
#7  0x0000000000404f98 in std::thread::_Impl<std::_Bind_simple<void (*(void*))(void*)> >::_M_run() (this=0x170e0c0) at /usr/include/c++/5/thread:115
#8  0x00007fcdacc48c80 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#9  0x00007fcdacf196ba in start_thread (arg=0x7fcdaae3f700) at pthread_create.c:333
#10 0x00007fcdac6b782d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
(gdb) quit

The program was compiled with g++ -std=c++11 test.cpp -o test -lzmq -lpthread -g.

2 Answers:

Answer 0 (score: 1)

A thread-less Context() may help in this use case:

The Native API recommends this practice for purely inproc:// transport-class based multithreaded use:

The zmq_init() function initialises a ØMQ context.

The io_threads argument specifies the size of the ØMQ thread pool to handle I/O operations. If your application is using only the inproc transport for messaging you may set this to zero, otherwise set it to at least one.

Thread safety

A ØMQ context is thread safe and may be shared among as many application threads as necessary, without any additional locking required on the part of the caller.

This function is deprecated by zmq_ctx_new(3).

The updated API / C++ wrapper has moved this step. What was available at instantiation time in the v2.1.11 API / C++ wrapper via Context( NUM_io_threads ) became in v4.2.2 a split process: the context is created first, and the I/O-thread pool size is then set in a post-instantiation (but pre-socket-association) state via a call to zmq_ctx_set( ctx, ZMQ_IO_THREADS, 0 ).
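In the plain C API that split looks like this (a minimal sketch; zmq_ctx_new(), zmq_ctx_set() and zmq_ctx_term() are the stock libzmq calls):

#include <zmq.h>

int main()
{
    void *ctx = zmq_ctx_new();              // replaces the deprecated zmq_init( io_threads )

    // post-instantiation, pre-socket step:
    // zero I/O threads is legal when only the inproc:// transport is used
    zmq_ctx_set( ctx, ZMQ_IO_THREADS, 0 );

    // ... create sockets and pass messages here ...

    zmq_ctx_term( ctx );                    // clean shutdown
    return 0;
}

The C++ wrapper equivalent, as used below, is simply zmq::context_t context( 0 ).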

#include <zmq.hpp>

#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

using namespace std;

bool has_only_digits( const string &s );   // helper from the asker's test.cpp
void task1( void *arg );                   // worker body, sketched below the code

int main( int argc, char* argv[] )
{   
    int worker_num;
    if (  argc != 2 ) { cout << "1 parameter pls" << endl;                           exit(1); } // EXIT[1]
    try
    {
      if ( !has_only_digits( string( argv[1] ) ) ) { cout << "digit pls" << endl;    exit(1); } // EXIT[2]
      worker_num =     stoi( string( argv[1] ) );
    }
    catch ( const exception &e ) { cout << "exception while processing parameters" << endl; exit(1); } // EXIT[3]
 // _______________________________________  // ZeroMQ inproc use-case W/O IO-thread
    zmq::context_t context( 0 );             // 
 // _______________________________________  // 

    zmq::socket_t  distask_socket( context, ZMQ_PUSH );
                   distask_socket.bind( "inproc://task_publisher" );

    vector<thread> pool;

    for ( int i = 0; i < worker_num; i++ )
    {
      cout << "main() : creating thread, " << i << endl;
      pool.push_back( thread(           task1,
                              (void *) &context
                              )
                      );
    }

    for ( int i = 0; i < worker_num; i++ )
    {
      zmq::message_t msg( 6 );
      memcpy( (void *) msg.data(), "World", 6 ); 
      distask_socket.send( msg );
    }

    for ( auto &t : pool )
      t.join();

    exit(0);
}
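The worker body task1 is not shown above; a minimal version consistent with this main() might look like the following (a sketch only: ZMQ_PULL pairs with the ZMQ_PUSH distributor and matches socket type 7 in the asker's backtrace, and the endpoint mirrors the bind() above):

void task1( void *arg )
{
    zmq::context_t *ctx = (zmq::context_t *) arg;         // cast the void* back
    zmq::socket_t   pull_socket( *ctx, ZMQ_PULL );        // counterpart of ZMQ_PUSH
                    pull_socket.connect( "inproc://task_publisher" );

    zmq::message_t msg;
    pull_socket.recv( &msg );                             // blocking receive
    cout << "worker : " << (char *) msg.data() << endl;   // just print the msg
}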

This is worth keeping in mind when squeezing out the last bits of performance and cutting down sources of latency.

Answer 1 (score: 0)

pool.push_back(thread(task1, (void *)context));

should be

pool.push_back(thread(task1, (void *)&context));

if you want to pass the address of context.
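One sentence on why the broken line even compiles (assuming the cppzmq wrapper of that era): zmq::context_t provides an implicit conversion to void * that yields the raw libzmq context handle, so (void *)context hands the thread the C handle instead of the wrapper's address; when task1 casts that back to zmq::context_t * and builds a socket, the wrapper reads garbage, which is consistent with check_tag (this=0xabadcafe) in the backtrace above. A sketch of the pitfall:

#include <zmq.hpp>

int main()
{
    zmq::context_t context( 1 );

    void *raw  = (void *)  context;   // operator void*(): raw C handle -- what the broken line passes
    void *addr = (void *) &context;   // address of the C++ wrapper     -- what task1 expects

    // task1 casts its argument back with: zmq::context_t *ctx = (zmq::context_t *) arg;
    // only 'addr' survives that round-trip.
    return 0;
}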