My queue is almost full and I see this error in the log file:
[2018-05-16T00:01:33,334][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"2018.05.15-el-mg_papi-prod", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x608d85c1>], :response=>{"index"=>{"_index"=>"2018.05.15-el-mg_papi-prod", "_type"=>"doc", "_id"=>"mHvSZWMB8oeeM9BTo0V2", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [papi_request_json.query.disableFacets]", "caused_by"=>{"type"=>"i_o_exception", "reason"=>"Current token (VALUE_TRUE) not numeric, can not use numeric value accessors\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@56b8442f; line: 1, column: 555]"}}}}}
[2018-05-16T00:01:37,145][INFO ][org.logstash.beats.BeatsHandler] [local: 0:0:0:0:0:0:0:1:5000, remote: 0:0:0:0:0:0:0:1:50222] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2018-05-16T00:01:37,147][INFO ][org.logstash.beats.BeatsHandler] [local: 0:0:0:0:0:0:0:1:5000, remote: 0:0:0:0:0:0:0:1:50222] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
...
[2018-05-16T15:28:09,981][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-05-16T15:28:09,982][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-05-16T15:28:09,982][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
If I understand the first warning correctly, the problem is the mapping. There are a lot of files in my Logstash queue folder. My questions are:
What are the
[INFO ][org.logstash.beats.BeatsHandler]
logs? Are the
[INFO ][logstash.outputs.elasticsearch]
entries just logs of Logstash retrying to process the queue? Filebeat 6.2.2 is running on all servers. Thank you for your help.
Answer 0 (score: 0)
You could delete all pages in the queue, but that is not the proper solution. In my case the queue filled up because there were events with a different index mapping. In Elasticsearch 6 you cannot send documents with different mappings to the same index, so the logs piled up in the queue because of this (even with only one bad event, none of the other events get processed). So how do you process all the data that can be processed and skip the bad events? The solution is to configure a DLQ (dead letter queue). Every event with response code 400 or 404 is moved to the DLQ, so the other events can be processed. The data in the DLQ can be processed later by a separate pipeline.
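As a sketch of that setup (the queue path, index field, and hosts below are illustrative, not from the original post), the DLQ is enabled in `logstash.yml` and drained with the `dead_letter_queue` input plugin:

```
# logstash.yml -- enable the dead letter queue (available since Logstash 5.5)
dead_letter_queue.enable: true
path.dead_letter_queue: /var/lib/logstash/dead_letter_queue   # illustrative path

# dlq-pipeline.conf -- a separate pipeline that re-processes rejected events
input {
  dead_letter_queue {
    path           => "/var/lib/logstash/dead_letter_queue"
    commit_offsets => true   # remember which DLQ events were already read
  }
}
filter {
  # Fix the offending field here, e.g. force it to a string so it no longer
  # clashes with the numeric mapping that caused the 400.
  mutate { convert => { "[papi_request_json][query][disableFacets]" => "string" } }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```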
From the error log
"error"=>{"type"=>"mapper_parsing_exception", ..... }
you can identify the wrong mapping. To pinpoint the exact field with the wrong mapping, you have to compare the event against the index mapping.
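For example (index name taken from the error above, host assumed to be localhost:9200), you can fetch the index mapping and look for the field from the error message:

```
# Fetch the mapping of the index that rejected the event
curl -s 'http://localhost:9200/2018.05.15-el-mg_papi-prod/_mapping?pretty'

# The error above says papi_request_json.query.disableFacets received a boolean
# (VALUE_TRUE) while the mapping expects a numeric type -- search the output
# for that field and compare it with the value in the failing event.
```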
The
[INFO ][org.logstash.beats.BeatsHandler]
entries were caused by a Nagios server. The check did not contain a valid Beats request, which is why the exception was handled. The check should instead test whether the Logstash service is alive. I now check the Logstash service on localhost:9600
, more information here.
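A minimal liveness check against the Logstash monitoring API (port 9600 is the default) could look like this; the probe avoids sending garbage to the Beats port 5000:

```
# The node info endpoint returns HTTP 200 when Logstash is up, so the exit
# code of curl -f can drive a Nagios-style check.
curl -sf 'http://localhost:9600/?pretty' > /dev/null \
  && echo 'logstash alive' \
  || echo 'logstash down'
```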
The
[INFO ][logstash.outputs.elasticsearch]
entries mean that Logstash tried to process the queue, but the index is locked ([FORBIDDEN/12/index read-only / allow delete (api)]
) because it was set to read-only. Elasticsearch automatically switches an index to read-only when there is not enough free disk space on the server. This behaviour can be tuned via cluster.routing.allocation.disk.watermark.low
, more information here.
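Note that in Elasticsearch 6.x the read-only block is not lifted automatically once disk space is freed; it has to be cleared by hand. A sketch of both steps (host assumed to be localhost:9200, watermark values are examples to tune for your disks):

```
# Raise the disk watermarks so indices are not blocked as early
curl -s -XPUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' -d '{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low":  "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}'

# After freeing disk space, remove the read_only_allow_delete block manually
curl -s -XPUT 'http://localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```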