Filebeat multiline parsing of Java exceptions in a Docker container is not working

Posted: 2016-05-25 12:53:21

Tags: logging elasticsearch docker logstash filebeat

I am running Filebeat to ship logs from a Java service that runs in a container. Many other services run in containers on the same host, and a single Filebeat daemon collects the logs of all containers running on that host. Filebeat forwards the logs to Logstash, which dumps them into Elasticsearch.

I am trying to use Filebeat's multiline feature to combine the lines of a Java exception into a single log entry, using the following Filebeat configuration:

filebeat:
  prospectors:
    # container logs
    -
      paths:
        - "/log/containers/*/*.log"
      document_type: containerlog
      multiline:
        pattern: "^\t|^[[:space:]]+(at|...)|^Caused by:"
        match: after

output:
  logstash:
    hosts: ["{{getv "/logstash/host"}}:{{getv "/logstash/port"}}"]

Example of a Java stack trace that should be aggregated into a single event:

This Java stack trace is a copy of a Docker log entry (obtained by running docker logs java_service):

[2016-05-25 12:39:04,744][DEBUG][action.bulk              ] [Set] [***][3] failed to execute bulk item (index) index {[***][***][***], source[{***}}
MapperParsingException[Field name [events.created] cannot contain '.']
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:305)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
    at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)
    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
    at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Nevertheless, with the Filebeat configuration shown above, I still see every line of the stack trace as a separate event in Elasticsearch.

Any idea what I am doing wrong? Also note that the multiline aggregation cannot be done on the Logstash side, since I need Filebeat to ship logs from multiple files.

Versions

FILEBEAT_VERSION 1.1.0

1 Answer:

Answer 0 (score: 1)

Stumbled upon this question today.

This is what works for me (filebeat.yml):

filebeat.prospectors:
- type: log
  multiline.pattern: "^[[:space:]]+(at|\\.{3})\\b|^Caused by:"
  multiline.negate: false
  multiline.match: after
  paths:
    - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  processors:
  - add_docker_metadata: ~
output.elasticsearch:
  hosts: ["es-client.es-cluster:9200"]

I am using Filebeat 6.2.2 and sending the logs directly to Elasticsearch.
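
One detail worth calling out (this is an assumption about the setup, since the question does not show the raw log files): with Docker's default json-file logging driver, each line a container writes ends up in /var/lib/docker/containers/<container-id>/<container-id>-json.log wrapped in a JSON envelope, roughly like:

{"log":"\tat org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)\n","stream":"stdout","time":"2016-05-25T12:39:04.744Z"}

The json.message_key: log and json.keys_under_root: true settings make Filebeat decode that envelope first and apply the multiline pattern to the decoded log field. Without the JSON decoding, the pattern is tested against the raw JSON line, which starts with '{' and therefore never looks like a continuation line, so every stack-trace line becomes its own event.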