RabbitMQ (beam.smp) and high CPU/memory load issue

Time: 2014-08-06 14:03:50

Tags: erlang debian rabbitmq celery mnesia

I have a Debian box that has been running tasks with Celery and RabbitMQ for about a year. Recently I noticed tasks were not being processed, so I logged into the system and found that Celery could not connect to RabbitMQ. I restarted rabbitmq-server, and although Celery no longer complained, it was not executing new tasks. The odd thing was that RabbitMQ was devouring CPU and memory like crazy. Restarting the server did not solve the problem. After spending a couple of hours looking online for a solution to no avail, I decided to rebuild the server.

I rebuilt a new server with Debian 7.5, RabbitMQ 2.8.4, and Celery 3.1.13 (Cipater). For about an hour everything worked beautifully, until Celery started complaining again that it could not connect to RabbitMQ:

[2014-08-06 05:17:21,036: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 6.00 seconds...

I restarted RabbitMQ with service rabbitmq-server start and got the same problem:

RabbitMQ started bloating again, constantly hammering the CPU and slowly taking over all the RAM and swap:

PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
21823 rabbitmq  20   0  908m 488m 3900 S 731.2 49.4   9:44.74 beam.smp

Here is the output of rabbitmqctl status:

Status of node 'rabbit@li370-61' ...
[{pid,21823},
 {running_applications,[{rabbit,"RabbitMQ","2.8.4"},
                        {os_mon,"CPO  CXC 138 46","2.2.9"},
                        {sasl,"SASL  CXC 138 11","2.2.1"},
                        {mnesia,"MNESIA  CXC 138 12","4.7"},
                        {stdlib,"ERTS  CXC 138 10","1.18.1"},
                        {kernel,"ERTS  CXC 138 10","2.15.1"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:8:8] [async-threads:30] [kernel-poll:true]\n"},
 {memory,[{total,489341272},
          {processes,462841967},
          {processes_used,462685207},
          {system,26499305},
          {atom,504409},
          {atom_used,473810},
          {binary,98752},
          {code,11874771},
          {ets,6695040}]},
 {vm_memory_high_watermark,0.3999999992280962},
 {vm_memory_limit,414559436},
 {disk_free_limit,1000000000},
 {disk_free,48346546176},
 {file_descriptors,[{total_limit,924},
                    {total_used,924},
                    {sockets_limit,829},
                    {sockets_used,3}]},
 {processes,[{limit,1048576},{used,1354}]},
 {run_queue,0},

Some entries from /var/log/rabbitmq:

=WARNING REPORT==== 8-Aug-2014::00:11:35 ===
Mnesia('rabbit@li370-61'): ** WARNING ** Mnesia is overloaded: {dump_log,
                                                                write_threshold}

=WARNING REPORT==== 8-Aug-2014::00:11:35 ===
Mnesia('rabbit@li370-61'): ** WARNING ** Mnesia is overloaded: {dump_log,
                                                                write_threshold}

=WARNING REPORT==== 8-Aug-2014::00:11:35 ===
Mnesia('rabbit@li370-61'): ** WARNING ** Mnesia is overloaded: {dump_log,
                                                                write_threshold}

=WARNING REPORT==== 8-Aug-2014::00:11:35 ===
Mnesia('rabbit@li370-61'): ** WARNING ** Mnesia is overloaded: {dump_log,
                                                                write_threshold}

=WARNING REPORT==== 8-Aug-2014::00:11:36 ===
Mnesia('rabbit@li370-61'): ** WARNING ** Mnesia is overloaded: {dump_log,
                                                                write_threshold}

=INFO REPORT==== 8-Aug-2014::00:11:36 ===
vm_memory_high_watermark set. Memory used:422283840 allowed:414559436

=WARNING REPORT==== 8-Aug-2014::00:11:36 ===
memory resource limit alarm set on node 'rabbit@li370-61'.

**********************************************************
*** Publishers will be blocked until this alarm clears ***
**********************************************************

=INFO REPORT==== 8-Aug-2014::00:11:43 ===
started TCP Listener on [::]:5672

=INFO REPORT==== 8-Aug-2014::00:11:44 ===
vm_memory_high_watermark clear. Memory used:290424384 allowed:414559436

=WARNING REPORT==== 8-Aug-2014::00:11:44 ===
memory resource limit alarm cleared on node 'rabbit@li370-61'

=INFO REPORT==== 8-Aug-2014::00:11:59 ===
vm_memory_high_watermark set. Memory used:414584504 allowed:414559436

=WARNING REPORT==== 8-Aug-2014::00:11:59 ===
memory resource limit alarm set on node 'rabbit@li370-61'.

**********************************************************
*** Publishers will be blocked until this alarm clears ***
**********************************************************

=INFO REPORT==== 8-Aug-2014::00:12:00 ===
vm_memory_high_watermark clear. Memory used:411143496 allowed:414559436

=WARNING REPORT==== 8-Aug-2014::00:12:00 ===
memory resource limit alarm cleared on node 'rabbit@li370-61'

=INFO REPORT==== 8-Aug-2014::00:12:01 ===
vm_memory_high_watermark set. Memory used:415563120 allowed:414559436

=WARNING REPORT==== 8-Aug-2014::00:12:01 ===
memory resource limit alarm set on node 'rabbit@li370-61'.

**********************************************************
*** Publishers will be blocked until this alarm clears ***
**********************************************************

=INFO REPORT==== 8-Aug-2014::00:12:07 ===
Server startup complete; 0 plugins started.

=ERROR REPORT==== 8-Aug-2014::00:15:32 ===
** Generic server rabbit_disk_monitor terminating 
** Last message in was update
** When Server state == {state,"/var/lib/rabbitmq/mnesia/rabbit@li370-61",
                               50000000,46946492416,100,10000,
                               #Ref<0.0.1.79456>,false}
** Reason for termination == 
** {unparseable,[]}

=INFO REPORT==== 8-Aug-2014::00:15:37 ===
Disk free limit set to 50MB

=ERROR REPORT==== 8-Aug-2014::00:16:03 ===
** Generic server rabbit_disk_monitor terminating 
** Last message in was update
** When Server state == {state,"/var/lib/rabbitmq/mnesia/rabbit@li370-61",
                               50000000,46946426880,100,10000,
                               #Ref<0.0.1.80930>,false}
** Reason for termination == 
** {unparseable,[]}

=INFO REPORT==== 8-Aug-2014::00:16:05 ===
Disk free limit set to 50MB

UPDATE: Installing the latest version of RabbitMQ (3.3.4-1) from the rabbitmq.com repository seems to have solved the problem. Originally I had installed the one from the Debian repository (2.8.4). So far rabbitmq-server is working fine. I will update this post if the problem comes back.

UPDATE: Unfortunately, after about 24 hours the problem reappeared: RabbitMQ shut down, and restarting the process made it consume resources until it shut down again within minutes.

6 Answers:

Answer 0 (score: 40)

Finally I found the solution. These posts helped me figure it out: "RabbitMQ on EC2 Consuming Tons of CPU" and https://serverfault.com/questions/337982/how-do-i-restart-rabbitmq-after-switching-machines

What was happening is that RabbitMQ was holding on to all the task results, which were never freed, to the point of becoming overloaded. I cleared all the stale data in /var/lib/rabbitmq/mnesia/rabbit/, restarted RabbitMQ, and now it works fine.

My solution was to disable storing results altogether with CELERY_IGNORE_RESULT = True in the Celery config file, to make sure this does not happen again.
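For reference, here is a minimal celeryconfig sketch for Celery 3.x; CELERY_IGNORE_RESULT is the actual setting name, but the broker URL is illustrative:

```python
# celeryconfig.py -- minimal Celery 3.x configuration sketch.
# The broker URL below is illustrative; adjust it to your setup.
BROKER_URL = 'amqp://guest:guest@127.0.0.1:5672//'

# Discard task results instead of publishing them to the broker,
# so RabbitMQ no longer accumulates one result queue per task.
CELERY_IGNORE_RESULT = True
```

With this in place, tasks that do not need their return values inspected will no longer leave result queues behind in RabbitMQ.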

Answer 1 (score: 6)

You can also reset the queues:

sudo service rabbitmq-server start
sudo rabbitmqctl stop_app
sudo rabbitmqctl reset
sudo rabbitmqctl start_app

You may need to run these commands right after rebooting if your system is not responding.

Answer 2 (score: 4)

You are running out of memory resources because of Celery. I ran into a similar problem, and it was an issue with the queues used by the Celery result backend.

You can check how many queues there are with the rabbitmqctl list_queues command; take note if that number keeps growing forever. In that case, review how you are using Celery.

Regarding Celery: if you are not retrieving the results of your asynchronous tasks, do not configure a backend to store those unused results.
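As a sketch of the check above, the queue count can be tracked by parsing the text output of rabbitmqctl list_queues. The helper below is hypothetical; it simply counts the data rows, skipping the header and the "...done." trailer that RabbitMQ 2.x prints:

```python
def count_queues(listing):
    """Count queues in the text output of `rabbitmqctl list_queues`.

    Skips the 'Listing queues ...' header and the '...done.' trailer
    printed by RabbitMQ 2.x; every remaining non-empty line is one
    tab-separated '<queue name> <messages>' row.
    """
    return sum(1 for line in listing.splitlines()
               if line.strip()
               and not line.startswith('Listing')
               and not line.startswith('...'))

# Simulated output: a growing pile of per-task result queues is the red flag.
sample = """Listing queues ...
celery\t0
a1b2c3d4e5f6.result\t1
f6e5d4c3b2a1.result\t1
...done.
"""
print(count_queues(sample))  # → 3
```

Running such a count periodically (e.g. from cron) and alerting when it keeps climbing would reveal the leaking result queues long before the memory alarm fires.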

Answer 3 (score: 1)

I experienced a similar issue, and it turned out to be caused by some rogue RabbitMQ client applications. The problem seems to have been that, due to an unhandled error, a rogue application kept trying to establish a connection to the RabbitMQ broker. Once the client applications were restarted, everything went back to normal (the applications stopped malfunctioning, and therefore stopped trying to connect to RabbitMQ in an endless loop).

Answer 4 (score: 1)

Another possible cause: the management plugin.

I was running RabbitMQ 3.8.1 with the management plugin enabled. On a 10-core server I saw up to 1000% CPU usage, with three idle consumers, no messages being sent, and just one queue.

When I disabled the management plugin by executing rabbitmq-plugins disable rabbitmq_management, usage dropped to 0%, with occasional spikes of 200%.

Answer 5 (score: 0)

After years of use I started running into problems (high CPU, with beam.smp at the top of the top output) following the latest upgrade of my Ubuntu-hosted RabbitMQ broker.

Disabling that plugin had an immediate effect. Thanks, Anti.