Cassandra node down due to OOM error

Time: 2019-01-26 20:00:06

Tags: cassandra jvm out-of-memory

My Cassandra node went down due to OOM, and checking /var/log/messages I see the following:

Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: java invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: java cpuset=/ mems_allowed=0
....
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15908kB
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Node 0 DMA32: 1294*4kB (UM) 932*8kB (UEM) 897*16kB (UEM) 483*32kB (UEM) 224*64kB (UEM) 114*128kB (UEM) 41*256kB (UEM) 12*512kB (UEM) 7*1024kB (UE
M) 2*2048kB (EM) 35*4096kB (UM) = 242632kB
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Node 0 Normal: 5319*4kB (UE) 3233*8kB (UEM) 960*16kB (UE) 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 62500kB
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: 38109 total pagecache pages
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: 0 pages in swap cache
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Swap cache stats: add 0, delete 0, find 0/0
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Free swap  = 0kB
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Total swap = 0kB
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: 16394647 pages RAM
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: 0 pages HighMem/MovableOnly
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: 310559 pages reserved
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [ 2634]     0  2634    41614      326      82        0             0 systemd-journal
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [ 2690]     0  2690    29793      541      27        0             0 lvmetad
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [ 2710]     0  2710    11892      762      25        0         -1000 systemd-udevd
.....
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [13774]     0 13774   459778    97729     429        0             0 Scan Factory
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [14506]     0 14506    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [14586]     0 14586    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [14588]     0 14588    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [14589]     0 14589    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [14598]     0 14598    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [14599]     0 14599    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [14600]     0 14600    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [14601]     0 14601    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [19679]     0 19679    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [19680]     0 19680    21628     5340      24        0             0 macompatsvc
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [ 9084]  1007  9084  2822449   260291     810        0             0 java
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [ 8509]  1007  8509 17223585 14908485   32510        0             0 java
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [21877]     0 21877   461828    97716     318        0             0 ScanAction Mgr
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [21884]     0 21884   496653    98605     340        0             0 OAS Manager
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [31718]    89 31718    25474      486      48        0             0 pickup
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [ 4891]  1007  4891    26999      191       9        0             0 iostat
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: [ 4957]  1007  4957    26999      192      10        0             0 iostat
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Out of memory: Kill process 8509 (java) score 928 or sacrifice child
Jan 23 20:07:17 ip-xxx-xxx-xxx-xxx kernel: Killed process 8509 (java) total-vm:68894340kB, anon-rss:59496344kB, file-rss:137596kB, shmem-rss:0kB

Nothing else runs on this host apart from DSE Cassandra with the Search and Monitoring agents. The max heap size is set to 31 GB, and at the time of the error the Cassandra java process appears to have been using ~57 GB (the machine has 62 GB of RAM). So my guess is that the JVM started consuming a large amount of memory and that triggered the OOM. Is my understanding correct? That is, was this Linux killing the JVM because the JVM consumed more memory than was available?
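As a sanity check, the ~57 GB figure can be recovered from the oom-killer table itself: the rss column there is counted in 4 kB pages (the usual x86-64 page size, an assumption you can confirm with `getconf PAGESIZE`), and for PID 8509 it matches anon-rss + file-rss in the final "Killed process" line. A minimal sketch:

```shell
# rss for PID 8509 from the oom-killer table, in 4 kB pages (assumed page size)
rss_pages=14908485
rss_kb=$(( rss_pages * 4 ))       # 59633940 kB = anon-rss 59496344 + file-rss 137596
rss_gb=$(( rss_kb / 1024 / 1024 ))
echo "java RSS: ${rss_gb} GB"     # roughly the ~57 GB observed
```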

So in this scenario the JVM was using at most 31 GB of heap, and the remaining ~26 GB was non-heap memory. Normally this process takes around 42 GB in total, whereas at the moment of the OOM it was consuming 57 GB, so I suspect the Java process was the culprit rather than the victim.

At the time of the incident no heap dump was taken; I have configured that now. But even with a heap dump, would it help identify what is consuming the extra memory? A heap dump only covers the heap region, so what should be used to dump the non-heap areas? Native Memory Tracking is one thing I came across. Is there any way to dump native memory when an OOM occurs? And what is the best way to monitor JVM memory to diagnose OOM errors?
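One common approach for the non-heap side is HotSpot's Native Memory Tracking plus an automatic heap dump. A sketch (the flag names are standard HotSpot options; the dump path and the way options are passed to DSE are assumptions for your install):

```
# Add to the JVM options used to start Cassandra (e.g. jvm.options -- path
# varies by DSE version). Note: -XX:+HeapDumpOnOutOfMemoryError fires on a
# JVM-level OutOfMemoryError, NOT when the kernel oom-killer sends SIGKILL.
-XX:NativeMemoryTracking=summary
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/lib/cassandra/dumps

# While the node runs, snapshot native (off-heap) usage with jcmd, then
# diff against a baseline to see which category is growing:
jcmd <cassandra-pid> VM.native_memory baseline
jcmd <cassandra-pid> VM.native_memory summary.diff
```

Since the oom-killer leaves no chance to dump anything, periodic snapshots like this (or simply logging VmRSS from /proc/&lt;pid&gt;/status) are usually the practical way to see the growth after the fact.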

1 answer:

Answer 0 (score: 1)

This may not be of much help.

Since the oom-killer is a kernel feature, you will probably not get a heap dump: the JVM gets no chance to write one. SIGKILL cannot be caught, and it does not generate a core dump either (the default Unix action).
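A small demonstration of this behavior: a process killed with SIGKILL dies immediately with exit status 128 + 9 = 137, with no handler or dump ever running (a sketch using a plain `sleep` as the victim):

```shell
# SIGKILL cannot be trapped; the process terminates at once with
# status 137 (128 + signal number 9), running no cleanup of any kind.
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid"
status=$?
echo "exit status: $status"
```

This is the same thing that happened to the Cassandra JVM here, which is why no heap dump appeared.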

http://programmergamer.blogspot.com/2013/05/clarification-on-sigint-sigterm-sigkill.html