I ran into a big problem which I hope is down to a misconfiguration in postgresql.conf. My setup is PostgreSQL 9.6 on Ubuntu 17.10 with 32 GB RAM and a 3 TB HDD. The query runs pgr_dijkstraCost to create an OD matrix of ~10,000 points in a network of 25,000 links. The resulting table is therefore expected to be very large (~100'000'000 rows with columns from, to, cost). However, creating a simple test of similar size with

select x, 1 as c2, 2 as c3
from generate_series(1,90000000)

succeeds.
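For scale, a back-of-envelope check (pure arithmetic, assuming one result row per origin-destination pair and the 24-byte row width reported in the query plan; any per-row overhead while the function materializes its result set would come on top of this):

```python
# Rough size of the OD matrix produced by pgr_dijkstraCost.
points = 10_000          # origin/destination nodes
rows = points * points   # one row per (from, to) pair
row_width = 24           # bytes per row, as reported by the query plan

print(rows)                       # -> 100000000
print(rows * row_width / 2**30)   # -> ~2.24 (GiB of raw tuple data)
```

The raw tuple data alone is only a couple of GiB, so the 32+ GB seen in dmesg suggests the cost is in materializing the set-returning function's result in memory, not in the tuples themselves.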
The query plan:
QUERY PLAN
--------------------------------------------------------------------------------------
Function Scan on pgr_dijkstracost (cost=393.90..403.90 rows=1000 width=24)
InitPlan 1 (returns $0)
-> Aggregate (cost=196.82..196.83 rows=1 width=32)
-> Seq Scan on building_nodes b (cost=0.00..166.85 rows=11985 width=4)
InitPlan 2 (returns $1)
-> Aggregate (cost=196.82..196.83 rows=1 width=32)
-> Seq Scan on building_nodes b_1 (cost=0.00..166.85 rows=11985 width=4)
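The statement behind this plan is roughly the following (a sketch only — the edge-list SQL and the column names of the assumed network_links table are guesses; the two InitPlans correspond to the aggregate sub-selects over building_nodes):

```
SELECT * FROM pgr_dijkstraCost(
    'SELECT id, source, target, cost FROM network_links',  -- assumed edge table
    (SELECT array_agg(id) FROM building_nodes),            -- start vertices
    (SELECT array_agg(id) FROM building_nodes)             -- end vertices
);
```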
Which causes PostgreSQL to crash:
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
normally and possibly corrupted shared memory.
Running dmesg, I could trace it down to an out-of-memory problem:
Out of memory: Kill process 5630 (postgres) score 949 or sacrifice child
[ 5322.821084] Killed process 5630 (postgres) total-vm:36365660kB,anon-rss:32344260kB, file-rss:0kB, shmem-rss:0kB
[ 5323.615761] oom_reaper: reaped process 5630 (postgres), now anon-rss:0kB,file-rss:0kB, shmem-rss:0kB
[11741.155949] postgres invoked oom-killer: gfp_mask=0x14201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), nodemask=(null), order=0, oom_score_adj=0
[11741.155953] postgres cpuset=/ mems_allowed=0
While the query is running, I can also watch in top as my RAM goes down to 0 before the crash. The amount of committed memory just before the crash:
$grep Commit /proc/meminfo
CommitLimit: 18574304 kB
Committed_AS: 42114856 kB
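The two numbers above already tell the story: the kernel had promised far more address space than its commit limit covers, which is exactly the situation where the OOM killer steps in (values copied from the /proc/meminfo output above):

```python
# Committed address space vs. the kernel's commit limit (both in kB),
# taken from /proc/meminfo just before the crash.
commit_limit = 18_574_304   # CommitLimit
committed_as = 42_114_856   # Committed_AS

excess_kb = committed_as - commit_limit
print(excess_kb)             # -> 23540552 (kB beyond the limit)
print(excess_kb / 1024**2)   # -> ~22.45 (GiB beyond the limit)
```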
When RAM is not sufficient, I would expect the HDD to be used for writing/buffering temporary data. But the available space on my HDD did not change during processing. So I started digging for missing configuration (suspecting problems caused by my relocated data-directory) and followed different sites:
https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
https://www.credativ.com/credativ-blog/2010/03/postgresql-and-linux-memory-management
My original settings in postgresql.conf were the defaults, apart from the changed data-directory:
data_directory = '/hdd_data/postgresql/9.6/main'
shared_buffers = 128MB # min 128kB
#huge_pages = try # on, off, or try
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
#work_mem = 4MB # min 64kB
#maintenance_work_mem = 64MB # min 1MB
#replacement_sort_tuples = 150000 # limits use of replacement selection sort
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB # min 100kB
dynamic_shared_memory_type = posix # the default is the first option
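The kernel-resources page linked above recommends disabling memory overcommit on a dedicated database host, so that an over-sized allocation fails with an out-of-memory error inside PostgreSQL instead of the OOM killer taking down a backend. A sketch of that sysctl change (apply with `sudo sysctl -p`; the overcommit_ratio value is a tuning choice, not a fixed recommendation):

```
# /etc/sysctl.conf -- per the PostgreSQL "Linux Memory Overcommit" docs
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
```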
I changed the configuration to:
shared_buffers = 128MB
work_mem = 40MB # min 64kB
maintenance_work_mem = 64MB
reloaded it with

sudo service postgresql reload

and tested the same query again, but could not find any change in behavior. Does this simply mean that a query this large cannot be done at all? Any help is appreciated.