I'm trying to use Logstash to send data from Kafka to S3, and the Logstash process receives a SIGTERM with no obvious error message.
I'm using the following override.yaml file for the stable/logstash Helm chart.
# overrides stable/logstash helm templates
inputs:
  main: |-
    input {
      kafka {
        bootstrap_servers => "kafka.system.svc.cluster.local:9092"
        group_id => "kafka-s3"
        topics => ["device", "message"]
        consumer_threads => 3
        codec => json { charset => "UTF-8" }
        decorate_events => true
      }
    }

# time_file default = 15 minutes
# size_file default = 5242880 bytes
outputs:
  main: |-
    output {
      s3 {
        codec => "json"
        prefix => "kafka/%{+YYYY}/%{+MM}/%{+dd}/%{+HH}-%{+mm}"
        time_file => 5
        size_file => 5242880
        region => "ap-northeast-1"
        bucket => "logging"
        canned_acl => "private"
      }
    }

podAnnotations:
  iam.amazonaws.com/role: kafka-s3-rules

image:
  tag: 7.1.1
My AWS IAM role should be attached to the container via kube2iam. The role itself allows all actions on S3.
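Since kube2iam works by assuming the annotated role from the node's instance role, the role carries a trust policy along these lines (a minimal sketch; the node role name is an assumption, not my exact setup):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::636082426924:role/k8s-node-instance-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}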
My S3 bucket has the following policy:
{
  "Version": "2012-10-17",
  "Id": "LoggingBucketPolicy",
  "Statement": [
    {
      "Sid": "Stmt1554291237763",
      "Effect": "Allow",
      "Principal": {
        "AWS": "636082426924"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::logging/*"
    }
  ]
}
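To confirm the pod is actually picking up the role, kube2iam's intercepted instance-metadata endpoint can be queried from inside the container (the pod name below is a placeholder):

kubectl exec -it <logstash-pod> -- \
  curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# expected to print the annotated role name, e.g. kafka-s3-rules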
The container's logs are as follows.
2019/06/13 10:31:15 Setting 'path.config' from environment.
2019/06/13 10:31:15 Setting 'queue.max_bytes' from environment.
2019/06/13 10:31:15 Setting 'queue.drain' from environment.
2019/06/13 10:31:15 Setting 'http.port' from environment.
2019/06/13 10:31:15 Setting 'http.host' from environment.
2019/06/13 10:31:15 Setting 'path.data' from environment.
2019/06/13 10:31:15 Setting 'queue.checkpoint.writes' from environment.
2019/06/13 10:31:15 Setting 'queue.type' from environment.
2019/06/13 10:31:15 Setting 'config.reload.automatic' from environment.
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-06-13T10:31:38,061][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-13T10:31:38,078][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.1"}
[2019-06-13T10:32:02,882][WARN ][logstash.runner ] SIGTERM received. Shutting down.
Is there a way to get more verbose logs, or does anyone know what I'm dealing with? I'd really appreciate any help or suggestions! :no_mouth:
Answer 0 (score: 0)
By looking at the pod details for Logstash, I was able to determine the issue. The output was similar to the following.
I0414 19:41:24.402257 3338 prober.go:104] Liveness probe for "mypod:mycontainer" failed (failure): Get http://10.168.0.3:80/: dial tcp 10.168.0.3:80: connection refused
It reported "connection refused" for the liveness probe and restarted the pod after 50 to 60 seconds of uptime.
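(The probe failures also show up as events on the pod; the pod name and namespace below are placeholders.)

# describe shows recent probe failures and restarts
kubectl describe pod <logstash-pod> -n <namespace>
# or watch the events directly
kubectl get events -n <namespace> --sort-by=.lastTimestamp | grep -i liveness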
Looking at the liveness probe in the Helm chart's Values.yaml shows the following settings.
...
livenessProbe:
  httpGet:
    path: /
    port: monitor
  initialDelaySeconds: 20
  # periodSeconds: 30
  # timeoutSeconds: 30
  # failureThreshold: 6
  # successThreshold: 1
...
Only initialDelaySeconds is set, so the remaining settings fall back to the Kubernetes defaults, shown here:
# periodSeconds: 10
# timeoutSeconds: 1
# failureThreshold: 3
# successThreshold: 1
This suggests the following timeline, give or take a few seconds:
+------+-----------------------------+
| Time | Event |
+------+-----------------------------+
| 0s | Container created |
| 20s | First liveness probe |
| 21s | First liveness probe fails |
| 31s | Second liveness probe |
| 32s | Second liveness probe fails |
| 42s | Third liveness probe |
| 43s | Third liveness probe fails |
| 44s | Send SIGTERM to application |
+------+-----------------------------+
After some troubleshooting to find the right initialDelaySeconds value, I put the following into the override.yaml file to resolve the issue.
livenessProbe:
  initialDelaySeconds: 90
Depending on the plugins in use, Logstash may not respond to HTTP requests for upwards of 100 seconds.
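To see how long your particular pipeline takes before it answers HTTP, you can poll the Logstash monitoring API from inside the container (port 9600 is the Logstash default, which appears to be what the chart's monitor port maps to; the pod name is a placeholder):

kubectl exec -it <logstash-pod> -- curl -s http://localhost:9600/?pretty
# returns node info JSON once Logstash is up and accepting HTTP requests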