Cassandra error message: Not marking nodes down due to local pause. Why?

Date: 2016-09-26 09:28:14

Tags: apache-spark amazon-ec2 cassandra datastax datastax-startup

I have 6 nodes, 1 Solr and 5 Spark nodes, using DataStax. My cluster is on servers similar to Amazon EC2, with EBS volumes. Each node has 3 EBS volumes, which make up a logical data disk via LVM. In OpsCenter the same node is frequently unresponsive, which causes connection timeouts in my data system. My data volume is about 400 GB with 3 replicas. I have 20 streaming jobs with a one-minute batch interval. Here are my error messages:

/var/log/cassandra/output.log:WARN 13:44:31,868 Not marking nodes down due to local pause of 53690474502 > 5000000000
/var/log/cassandra/system.log:WARN [GossipTasks:1] 2016-09-25 16:40:34,944 FailureDetector.java:258 - Not marking nodes down due to local pause of 64532052919 > 5000000000 
/var/log/cassandra/system.log:WARN [GossipTasks:1] 2016-09-25 16:59:12,023 FailureDetector.java:258 - Not marking nodes down due to local pause of 66027485893 > 5000000000 
/var/log/cassandra/system.log:WARN [GossipTasks:1] 2016-09-26 13:44:31,868 FailureDetector.java:258 - Not marking nodes down due to local pause of 53690474502 > 5000000000
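(The values in these messages are in nanoseconds: a local pause of 53690474502 ns is roughly 53.7 seconds, far above the 5000000000 ns (5 s) threshold beyond which the failure detector assumes the local node itself was paused and refuses to mark its peers down.)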

Edit:

Here are my more specific configurations. I would like to know whether I am doing something wrong, and if so, how I can find out in detail what it is and how to fix it.

Our heap is set to

MAX_HEAP_SIZE="16G"
HEAP_NEWSIZE="4G"
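Note that the jmap output further down reports MaxHeapSize = 49152.0 MB (48 GB) rather than the 16 G configured here, so it may be worth confirming which heap flags the running process actually picked up. A quick spot-check (plain ps/grep, nothing Cassandra-specific):

[root@iZ11xsiompxZ ~]# ps -ef | grep [c]assandra | tr ' ' '\n' | grep -E '^-Xm[sx]'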

Current heap:

[root@iZ11xsiompxZ ~]# jstat -gc 11399
 S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT
 0.0   196608.0  0.0   196608.0 6717440.0 2015232.0 43417600.0 23029174.0 69604.0 68678.2  0.0    0.0     1041  131.437   0      0.000  131.437
[root@iZ11xsiompxZ ~]# jmap -heap 11399
Attaching to process ID 11399, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.102-b14

using thread-local object allocation.
Garbage-First (G1) GC with 23 thread(s)

Heap Configuration:

   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 51539607552 (49152.0MB)
   NewSize                  = 1363144 (1.2999954223632812MB)
   MaxNewSize               = 30920409088 (29488.0MB)
   OldSize                  = 5452592 (5.1999969482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 16777216 (16.0MB)

Heap Usage:

G1 Heap:
   regions  = 3072
   capacity = 51539607552 (49152.0MB)
   used     = 29923661848 (28537.427757263184MB)
   free     = 21615945704 (20614.572242736816MB)
   58.059545404588185% used
G1 Young Generation:
Eden Space:
   regions  = 366
   capacity = 6878658560 (6560.0MB)
   used     = 6140461056 (5856.0MB)
   free     = 738197504 (704.0MB)
   89.26829268292683% used
Survivor Space:
   regions  = 12
   capacity = 201326592 (192.0MB)
   used     = 201326592 (192.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 1443
   capacity = 44459622400 (42400.0MB)
   used     = 23581874200 (22489.427757263184MB)
   free     = 20877748200 (19910.572242736816MB)
   53.04110320109241% used

40076 interned Strings occupying 7467880 bytes.

I don't know why this is happening. Many thanks.

1 Answer:

Answer 0 (score: 2):

The message you're seeing, Not marking nodes down due to local pause, is caused by a JVM pause. While you've done some good things here by posting the JVM info, a good starting point is usually just to look at /var/log/cassandra/system.log, e.g. checking for things like ERROR and WARN. Also check the length and frequency of GC events by searching for GCInspector.
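For example, a quick way to pull those out of the default log location shown above:

grep -E 'ERROR|WARN' /var/log/cassandra/system.log | tail -n 50
grep 'GCInspector' /var/log/cassandra/system.log | tail -n 20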

Tools like nodetool tpstats are your friend here, to see whether you're backed up on or dropping mutations, have blocked flush writers, etc.
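For instance (thread-pool and message-type names vary a bit between Cassandra versions, so treat these greps as a sketch):

nodetool tpstats
nodetool tpstats | grep -E 'Dropped|MUTATION|FlushWriter'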

The docs here have some good material worth checking: https://docs.datastax.com/en/landing_page/doc/landing_page/troubleshooting/cassandra/cassandraTrblTOC.html

Also check that your nodes have the recommended production settings, something that is often overlooked:

http://docs.datastax.com/en/landing_page/doc/landing_page/recommendedSettingsLinux.html
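A few of those settings can be spot-checked directly on each node; for example swap, memory-map limits, and file-handle limits, all of which that page calls out:

swapon --summary                  # should print nothing: swap should be disabled
cat /proc/sys/vm/max_map_count    # DataStax recommends 1048575
ulimit -n                         # open-file limit for the user running Cassandra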

One more thing to note: Cassandra is quite sensitive to disk performance, and "normal" EBS may not be fast enough for your needs. Throw Solr into the mix as well, and you can see a lot of I/O contention when Cassandra compaction and Lucene merges hit the disk at the same time.
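If you suspect that, watching the disks while compactions and Solr merges are running can confirm it, e.g. with iostat from the sysstat package:

iostat -x 5                  # sustained high %util / await on the EBS-backed LVM devices indicates contention
nodetool compactionstats     # shows which compactions are in flight at the same time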