Fluentd logging driver sends unstructured log messages

Time: 2019-03-12 14:47:40

Tags: fluentd

I have a setup in my environment where Docker container logs are forwarded to fluentd, and fluentd then forwards them to splunk.
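For context, each container is wired to fluentd via Docker's fluentd logging driver, roughly like this (the address and tag below are placeholders, not my actual values):

# forward the container's stdout/stderr to a fluentd forward input
docker run --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag=docker.{{.Name}} \
    your-image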

I have an issue with fluentd: some Docker container logs are not in a structured format. From the documentation, I see that the fluentd log driver sends the following metadata in the structured log message:

container_id, container_name, source, log

My issue is that a few logs come through with this metadata in an inconsistent order. For example, log 1:

{"log":"2019/03/12 13:59:49 [info] 6#6: *2425596 client closed connection while waiting for request, client: 10.17.84.12, server: 0.0.0.0:80","container_id":"789459f8f8a52c8b4b","container_name":"testingcontainer-1ed-fwij4-EcsTaskDefinition-1TF1DH,"source":"stderr"}

Log 2:

{"container_id":"26749a26500dd04e92fc","container_name":"/4C4DTHQR2V6C-EcsTaskDefinition-1908NOZPKPKY0-1","source":"stdout","log":"\u001B[0mGET \u001B[32m200 \u001B[0m0.634 ms - -\u001B[0m"}

The metadata in these two logs arrives in different orders (log 1: [log, container_name, container_id, source]; log 2: [container_id, container_name, source, log]), and this is causing me some problems downstream. How can I fix this so that the metadata always comes in the same order?

My fluentd config file is:

<source>
  @type  forward
  @id    input1
  @label @mainstream
  @log_level trace
  port  24224
</source>

<label @mainstream>

<match *.**>
  @type copy
  <store>
    @type file
    @id   output_docker1
    path         /fluentd/log/docker.*.log
    symlink_path /fluentd/log/docker.log
    append       true
    time_slice_format %Y%m%d
    time_slice_wait   1m
    time_format       %Y%m%dT%H%M%S%z
    utc
    buffer_chunk_limit 512m
  </store>
  <store>
   @type s3
   @id   output_docker2
   @log_level trace

   s3_bucket bucketwert-1
   s3_region us-east-1
   path logs/
   buffer_path /fluentd/log/docker.log
   s3_object_key_format %{path}%{time_slice}_sbx_docker_%{index}.%{file_extension}
   flush_interval 3600s
   time_slice_format %Y%m%d
   time_format       %Y%m%dT%H%M%S%z
   utc
   buffer_chunk_limit 512m
  </store>
</match>
</label>

1 Answer:

Answer 0 (score: 0)

How about fluent-plugin-record-sort?

Alternatively, if you know all the keys in the record, you can use the built-in record_transformer plugin like this:

<source>
  @type dummy
  tag dummy
  dummy [
    {"log": "log1", "container_id": "123", "container_name": "name1", "source": "stderr"},
    {"container_id": "456", "container_name": "name2", "source": "stderr", "log": "log2"}
  ]
</source>

<filter dummy>
  @type record_transformer
  renew_record true
  keep_keys log,container_id,container_name,source
</filter>

<match dummy>
  @type stdout
</match>

UPDATE (not tested):

<source>
  @type  forward
  @id    input1
  @label @mainstream
  @log_level trace
  port  24224
</source>

<label @mainstream>
<filter>
  @type record_transformer
  renew_record true
  keep_keys log,container_id,container_name,source
</filter>
<match *.**>
  @type copy
  <store>
    @type file
    @id   output_docker1
    path         /fluentd/log/docker.*.log
    symlink_path /fluentd/log/docker.log
    append       true
    time_slice_format %Y%m%d
    time_slice_wait   1m
    time_format       %Y%m%dT%H%M%S%z
    utc
    buffer_chunk_limit 512m
  </store>
  <store>
   @type s3
   @id   output_docker2
   @log_level trace

   s3_bucket bucketwert-1
   s3_region us-east-1
   path logs/
   buffer_path /fluentd/log/docker.log
   s3_object_key_format %{path}%{time_slice}_sbx_docker_%{index}.%{file_extension}
   flush_interval 3600s
   time_slice_format %Y%m%d
   time_format       %Y%m%dT%H%M%S%z
   utc
   buffer_chunk_limit 512m
  </store>
</match>
</label>