In a Presto (0.212) cluster of about 200 nodes (EC2 instances), some Presto worker processes occasionally (roughly once a day) restart mysteriously, usually all at the same time. The EC2 instances themselves are fine, and the memory-usage metric shows only about 70% of memory in use.
Do Presto workers have any suicide-and-restart logic (for example, restart after >= M consecutive errors)? Or are there situations where the Presto coordinator can restart a worker? What else could kill several worker processes at the same time?
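For reference, when this happens a couple of quick checks can help rule out the kernel OOM killer or a supervisor-initiated restart (a minimal sketch; the service path assumes the runit layout shown further below):

# Did the kernel OOM killer terminate a JVM on this instance?
dmesg -T | grep -iE 'out of memory|killed process'
# What does the runit supervisor report for the presto service?
sv status /etc/service/presto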
Here is a sample of the server log showing one of these restarts.
2018-11-14T23:16:28.78011 2018-11-14T23:16:28.776Z INFO Thread-63 io.airlift.bootstrap.LifeCycleManager Life cycle stopping...
2018-11-14T23:16:29.17181 ThreadDump 4524
2018-11-14T23:16:29.17182 ForceSafepoint 414
2018-11-14T23:16:29.17182 Deoptimize 66
2018-11-14T23:16:29.17182 CollectForMetadataAllocation 11
2018-11-14T23:16:29.17182 CGC_Operation 272
2018-11-14T23:16:29.17182 G1IncCollectionPause 2900
2018-11-14T23:16:29.17183 EnableBiasedLocking 1
2018-11-14T23:16:29.17183 RevokeBias 6248
2018-11-14T23:16:29.17183 BulkRevokeBias 272
2018-11-14T23:16:29.17183 Exit 1
2018-11-14T23:16:29.17183 931 VM operations coalesced during safepoint
2018-11-14T23:16:29.17184 Maximum sync time 197 ms
2018-11-14T23:16:29.17184 Maximum vm operation time (except for Exit VM operation) 2599 ms
2018-11-14T23:16:29.52968 ./finish: line 37: kill: (3700) - No such process
2018-11-14T23:16:29.52969 ./finish: line 37: kill: (3702) - No such process
2018-11-14T23:16:31.53563 ./finish: line 40: kill: (3704) - No such process
2018-11-14T23:16:31.53564 ./finish: line 40: kill: (3706) - No such process
2018-11-14T23:16:32.25948 2018-11-14T23:16:32.257Z INFO main io.airlift.log.Logging Logging to stderr
2018-11-14T23:16:32.26034 2018-11-14T23:16:32.260Z INFO main Bootstrap Loading configuration
2018-11-14T23:16:32.33800 2018-11-14T23:16:32.337Z INFO main Bootstrap Initializing logging
......
2018-11-14T23:16:35.75427 2018-11-14T23:16:35.754Z INFO main io.airlift.bootstrap.LifeCycleManager Life cycle starting...
2018-11-14T23:16:35.75556 2018-11-14T23:16:35.755Z INFO main io.airlift.bootstrap.LifeCycleManager Life cycle startup complete. System ready.
If it is relevant, the "./finish: ..." lines in the log come from the /etc/service/presto/finish file below.
1 #!/bin/bash
2 set -e
3 exec 2>&1
4 exec 3>>/var/log/runit/runit.log
5
6 STATSD_PREFIX="runit.presto"
7 source /etc/statsd/functions
8
9 function error_handler() {
10 echo "$(date +"%Y-%m-%dT%H:%M:%S.%3NZ") Error occurred in run file at line: $1."
11 echo "$(date +"%Y-%m-%dT%H:%M:%S.%3NZ") Line exited with status: $2"
12 incr "finish.error"
13 }
14 trap 'error_handler $LINENO $?' ERR
15 echo "$(date +"%Y-%m-%dT%H:%M:%S.%3NZ") process=presto status=stopped exitcode=$1 waitcode=$2" >&3
16 # treat non-zero exit codes as a crash
17 # waitcode contains the signal if there's one (ex. 11 - SIGSEGV)
18 if [ $1 -ne 0 ]; then
19 incr "finish.crash"
20 fi
21
22
23 # ensure that we kill the entire process group.
24 # When sv force-restart runs, it will try to TERM the runit processes. If
25 # this doesn't work, it will kill (-9) the process. In case of haproxy,
26 # apache, gunicorn, etc., the master process will be killed (-9). Child processes
27 # (ie apache workers, gunicorn workers) will *not* be killed and will be
28 # around for minutes (if not hours). These child workers will keep
29 # listening on the socket, preventing the new master apache/gunicorn
30 # processes from binding to the socket. The new master process will keep
31 # crashing and be restarted by runit until the old child processes are
32 # gone.
33
34 # determine the process group id. it's the group id of the current (finish) process.
35 PGID=$(ps -o pgid= $$ | grep -o [0-9]*)
36 # kill all processes, except ourself and the PGID (which is the main process)
37 kill $(pgrep -g $PGID | egrep -v "$PGID|$$" ) || true
38 sleep 2
39 # kill -9 to be sure
40 kill -9 $(pgrep -g $PGID | egrep -v "$PGID|$$" ) || true
41
42 echo "$(date +"%Y-%m-%dT%H:%M:%S.%3NZ") process=presto status=finished" >&3
43 incr "finish.count"
44 timing "finish.duration"
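The "Life cycle stopping..." line at the top of the log suggests the JVM received an orderly shutdown request rather than crashing. One way to double-check (a minimal sketch, assuming the runit.log path that line 15 of the finish script writes to) is to look at the exit codes the finish script recorded:

grep 'process=presto status=stopped' /var/log/runit/runit.log | tail -20
# exitcode=0 indicates the worker was asked to stop (e.g. an external sv restart);
# a non-zero exitcode indicates a crash, with waitcode carrying the signal (e.g. 11 = SIGSEGV).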
Answer (score: 0)
Our continuous pull-based deployment (Salt-based) was restarting the Presto server processes under certain conditions (dependency or configuration changes). This was unintended, and the listen_in sections responsible were removed.
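For anyone hitting the same symptom: below is a minimal Salt state sketch (the file path, source, and service name are hypothetical) of the kind of listen_in requisite that causes this. Whenever the managed config file changes, Salt restarts the listening service at the end of the highstate run, which on a large cluster shows up as several workers restarting at roughly the same time.

/etc/presto/config.properties:
  file.managed:
    - source: salt://presto/config.properties
    - listen_in:
      - service: presto

presto:
  service.running:
    - enable: True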