OpenShift Aggregated Logging: parsing Apache access logs

Date: 2017-03-20 08:32:21

Tags: openshift-origin fluentd

When using OpenShift Aggregated Logging, my logs get fed into elasticsearch nicely. However, the lines logged by apache end up as a single string in the message field.

I would like to build queries in Kibana where I can access the url, the status code, and the other fields individually. For that, some dedicated parsing of the apache access log needs to happen.

How can I do this?

Here is a sample entry in Kibana:

{
  "_index": "42-steinbruchsteiner-staging.3af0bedd-eebc-11e6-af4b-005056a62fa6.2017.03.29",
  "_type": "fluentd",
  "_id": "AVsY3aSK190OXhxv4GIF",
  "_score": null,
  "_source": {
    "time": "2017-03-29T07:00:25.595959397Z",
    "docker_container_id": "9f4fa85a626d2f5197f0028c05e8e42271db7a4c674cc145204b67b6578f3378",
    "kubernetes_namespace_name": "42-steinbruchsteiner-staging",
    "kubernetes_pod_id": "56c61b65-0b0e-11e7-82e9-005056a62fa6",
    "kubernetes_pod_name": "php-app-3-weice",
    "kubernetes_container_name": "php-app",
    "kubernetes_labels_deployment": "php-app-3",
    "kubernetes_labels_deploymentconfig": "php-app",
    "kubernetes_labels_name": "php-app",
    "kubernetes_host": "itsrv1564.esrv.local",
    "kubernetes_namespace_id": "3af0bedd-eebc-11e6-af4b-005056a62fa6",
    "hostname": "itsrv1564.esrv.local",
    "message": "10.1.3.1 - - [29/Mar/2017:01:59:21 +0200] "GET /kwf/status/health HTTP/1.1" 200 2 "-" "Go-http-client/1.1"\n",
    "version": "1.3.0"
  },
  "fields": {
    "time": [
      1490770825595
    ]
  },
  "sort": [
    1490770825595
  ]
}

1 Answer:

Answer 0 (score: 0)

Disclaimer: I have not tested this in openshift. I don't know which technology stack you use for your microservices.

This is how I deploy a Spring Boot application (with logback) in Kubernetes.

1. Use the logstash encoder for logback (this will write the logs in a JSON format that is more ELK-stack friendly)

I have a gradle dependency to enable this:

compile "net.logstash.logback:logstash-logback-encoder:3.5"

Then, in logback-spring.groovy / logback-spring.xml (or logback.xml), configure LogstashEncoder as the encoder in the appender.
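As a minimal sketch of the XML variant (the appender name and log level below are my assumptions, not part of the original answer):

<configuration>
    <!-- Console appender whose events are serialized as JSON by LogstashEncoder -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>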

2. Have some filter or library write the access log

For 2., use one of the following:

A. Use the "net.rakugakibox.springbootext:spring-boot-ext-logback-access:1.6" library

(this is what I am using)
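The matching gradle dependency, reconstructed here from the coordinates above in the same style as step 1, would be:

compile "net.rakugakibox.springbootext:spring-boot-ext-logback-access:1.6"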

It provides a nice JSON format like the one below:

compile "net.logstash.logback:logstash-logback-encoder:3.5"

B. Use Logback's TeeFilter
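A rough sketch of how the TeeFilter could be registered in a Spring Boot application (this is my illustration, not from the original answer; the class and bean names are assumptions):

import ch.qos.logback.access.servlet.TeeFilter;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AccessLogConfig {

    // Registers logback-access's TeeFilter so request/response bodies are
    // duplicated and become available to the access-log appenders.
    @Bean
    public FilterRegistrationBean teeFilter() {
        FilterRegistrationBean registration = new FilterRegistrationBean();
        registration.setFilter(new TeeFilter());
        return registration;
    }
}

Note that logback-access itself still has to be hooked into the servlet container for the access events to actually be written.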

C. Spring's CommonsRequestLoggingFilter (haven't really tested this one)

Add the bean definition:

@Bean
public CommonsRequestLoggingFilter requestLoggingFilter() {
    CommonsRequestLoggingFilter crlf = new CommonsRequestLoggingFilter();
    crlf.setIncludeClientInfo(true);
    crlf.setIncludeQueryString(true);
    crlf.setIncludePayload(true);
    return crlf;
}

Then set org.springframework.web.filter.CommonsRequestLoggingFilter to DEBUG. This can be done in application.properties, e.g. with Spring Boot's standard logging.level property:

logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG