Is 2G the size limit for a coredump file on Linux?

Date: 2017-04-11 09:28:54

Tags: linux gdb core archlinux coredump

My operating system is Arch Linux. When a coredump occurs, I try to debug it with gdb:

$ coredumpctl gdb 1621
......
       Storage: /var/lib/systemd/coredump/core.runTests.1014.b43166f4bba84bcba55e65ae9460beff.1621.1491901119000000000000.lz4
       Message: Process 1621 (runTests) of user 1014 dumped core.

                Stack trace of thread 1621:
                #0  0x00007ff1c0fcfa10 n/a (n/a)

GNU gdb (GDB) 7.12.1
......
Reading symbols from /home/xiaonan/Project/privDB/build/bin/runTests...done.
BFD: Warning: /var/tmp/coredump-28KzRc is truncated: expected core file size >= 2179375104, found: 2147483648.

I looked at the /var/tmp/coredump-28KzRc file:

$ ls -alth /var/tmp/coredump-28KzRc
-rw------- 1 xiaonan xiaonan 2.0G Apr 11 17:00 /var/tmp/coredump-28KzRc

Is 2G the size limit for a coredump file on Linux? I ask because /var/tmp should have plenty of free disk space:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
dev              32G     0   32G   0% /dev
run              32G  3.1M   32G   1% /run
/dev/sda2       229G   86G  132G  40% /
tmpfs            32G  708M   31G   3% /dev/shm
tmpfs            32G     0   32G   0% /sys/fs/cgroup
tmpfs            32G  957M   31G   3% /tmp
/dev/sda1       511M   33M  479M   7% /boot
/dev/sda3       651G  478G  141G  78% /home

P.S. Output of "ulimit -a":

$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 257039
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 257039
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Update: the /etc/systemd/coredump.conf file:

$ cat coredump.conf
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See coredump.conf(5) for details.

[Coredump]
#Storage=external
#Compress=yes
#ProcessSizeMax=2G
#ExternalSizeMax=2G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=

2 Answers:

Answer 0 (score: 3):

@n.m. is right: the systemd-coredump compile-time defaults ProcessSizeMax=2G and ExternalSizeMax=2G (visible as the commented lines in coredump.conf above) are what truncate the dump at 2147483648 bytes, i.e. exactly 2 GiB.
(1) Edit the /etc/systemd/coredump.conf file:

[Coredump]
ProcessSizeMax=8G
ExternalSizeMax=8G
JournalSizeMax=8G

(2) Reload systemd's configuration:

# systemctl daemon-reload

Note that this only takes effect for newly generated core dumps.
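As a quick check that the raised limits are in effect, a minimal test program (my own sketch, not part of the original answer; the name bigcore.c is made up) can deliberately produce a core dump larger than the old 2G cap:

/* bigcore.c - hypothetical test: allocate more than 2 GiB, touch every
 * page, then abort() so the resulting core dump exceeds the old 2G cap. */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = 3UL * 1024 * 1024 * 1024;   /* 3 GiB on a 64-bit system */
    char *buf = malloc(size);
    if (buf == NULL)
        return 1;
    memset(buf, 0xab, size);   /* touch the pages so they end up in the dump */
    abort();                   /* SIGABRT dumps core by default */
}

Compile and run it, then open the new entry with coredumpctl gdb; if the BFD truncation warning no longer appears, the new ProcessSizeMax/ExternalSizeMax values took effect.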

Answer 1 (score: 2):

> Is 2G the size limit for a coredump file on Linux?

No. I routinely deal with core dumps larger than 4 GiB.

> ulimit -a
> core file size (blocks, -c) unlimited

This tells you the current limit in this shell. It tells you nothing about the environment runTests ran in. The process may have set its own limit with setrlimit(2), or its parent may have set one for it.

You could modify runTests to print its current limit with getrlimit(2) and see what it actually is while the process is running.
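For example, a minimal sketch of such a check (my code, not the answerer's; the helper name print_core_limit is made up) using getrlimit(2):

/* Print the core dump size limit as this process actually sees it.
 * Paste print_core_limit() into runTests and call it early in main(). */
#include <stdio.h>
#include <sys/resource.h>

static void print_core_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("RLIMIT_CORE soft limit: unlimited\n");
    else
        printf("RLIMIT_CORE soft limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
}

int main(void)
{
    print_core_limit();
    return 0;
}

Comparing its output with the shell's "ulimit -a" shows whether the limit was changed somewhere between the shell and the process.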

P.S. Just because the core is truncated doesn't mean it is entirely useless (though it often is). At the very least, you should try GDB's where command.