How do I view a Java stack trace in Elasticsearch using Logstash?

Time: 2017-05-23 07:50:18

Tags: elasticsearch logstash logstash-grok

I have logs like this:

    [2017-05-18 00:00:05,871][INFO ][cluster.metadata         ] [esndata-2] [.data-es-1-2017.05.18] creating index, cause [auto(bulk api)], templates [.data
-es-1], shards [1]/[1], mappings [_default_, shards, node, index_stats, index_recovery, cluster_state, cluster_stats, node_stats, indices_stats]
    [2017-05-18 00:00:06,161][INFO ][cluster.routing.allocation] [esndata-2] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.data-es-1-2017.05.18][0]] ...]).
    [2017-05-18 00:00:06,249][INFO ][cluster.metadata         ] [esndata-2] [.data-es-1-2017.05.18] update_mapping [node_stats]
    [2017-05-18 00:00:06,290][INFO ][cluster.routing.allocation] [esndata-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.data-es-1-2017.05.18][0]] ...]).
    [2017-05-18 00:00:06,339][DEBUG][action.admin.indices.create] [esndata-2] [data-may-2017,data-apr-2017,data-mar-2017] failed to create
    [data-may-2017,data-apr-2017,data-mar-2017] InvalidIndexNameException[Invalid index name [data-may-2017,data-apr-2017,data-mar-2017], must not contain the following characters [\, /, *, ?, ", <, >, |,  , ,]]
            at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validateIndexName(MetaDataCreateIndexService.java:142)
            at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validate(MetaDataCreateIndexService.java:431)
            at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.access$100(MetaDataCreateIndexService.java:95)
            at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:190)
            at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)
            at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
            at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
            at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
            at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            at java.lang.Thread.run(Thread.java:745)

My Logstash config looks like this:

    input {
      file {
        path => "F:\logstash-2.4.0\logstash-2.4.0\bin\dex.txt"
        start_position => "beginning"
        codec => multiline {
          pattern => "^%{TIMESTAMP_ISO8601} "
          negate => true
          what => previous
        }
      }
    }

    filter {
      grok {
        match => [
          "message", "(?m)^%{TIMESTAMP_ISO8601:TIMESTAMP}\[%{LOGLEVEL:LEVEL}%{SPACE}\]\[%{DATA:ERRORTYPE}\]%{SPACE}\[%{DATA:SERVERNAME}\]%{SPACE}\[%{DATA:INDEX-NAME}\]%{SPACE}%{GREEDYDATA:mydata}",
          "message", "^%{TIMESTAMP_ISO8601:TIMESTAMP}\[%{LOGLEVEL:LEVEL}%{SPACE}\]\[%{DATA:ERRORTYPE}\]%{SPACE}\[%{DATA:SERVERNAME}\]%{SPACE}%{GREEDYDATA:mydata}"
        ]
      }
      date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
    }

    output {
      stdout { codec => rubydebug }
    }

This is the output I get with the above config:

    {
        "@timestamp" => "2017-05-24T06:25:11.245Z",
           "message" => "[2017-05-18 00:00:05,871][INFO ][cluster.metadata         ] [esndata-2] [.data-es-1-2017.05.18] creating index, cause [auto(bulk api)], templates [.data\r\n
    -es-1], shards [1]/[1], mappings [_default_, shards, node, index_stats, index_recovery, cluster_state, cluster_stats, node_stats, indices_stats]\r\n
        [2017-05-18 00:00:06,161][INFO ][cluster.routing.allocation] [esndata-2] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.data-es-1-2017.05.18][0]] ...]).\r\n
        [2017-05-18 00:00:06,249][INFO ][cluster.metadata         ] [esndata-2] [.data-es-1-2017.05.18] update_mapping [node_stats]\r\n
        [2017-05-18 00:00:06,290][INFO ][cluster.routing.allocation] [esndata-2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.data-es-1-2017.05.18][0]] ...]).\r\n
        [2017-05-18 00:00:06,339][DEBUG][action.admin.indices.create] [esndata-2] [data-may-2017,data-apr-2017,data-mar-2017] failed to create\r\n
        [data-may-2017,data-apr-2017,data-mar-2017] InvalidIndexNameException[Invalid index name [data-may-2017,data-apr-2017,data-mar-2017], must not contain the following characters [\\, /, *, ?, \", <, >, |,  , ,]]\r\n
                at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validateIndexName(MetaDataCreateIndexService.java:142)\r\n
                at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validate(MetaDataCreateIndexService.java:431)\r\n
                at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.access$100(MetaDataCreateIndexService.java:95)\r\n
                at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:190)\r\n
                at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)\r\n
                at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)\r\n
                at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)\r\n
                at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\r\n
                at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\r\n
                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\r\n
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\r\n
                at java.lang.Thread.run(Thread.java:745)\r\n\r",
          "@version" => "1",
              "tags" => [
            [0] "multiline",
            [1] "_grokparsefailure"
        ],
              "path" => "D:\\logstash\\logstash-2.4.0\\bin\\error.txt",
              "host" => "PC326815"
    }

I followed the pattern from https://gist.github.com/wiibaa/c47e5f79d45d58d05121

How can I parse these logs properly, instead of having the whole file collapsed into a single event?

Thanks

1 Answer:

Answer 0: (score: 0)

The problem was with both the multiline pattern in my input and the grok pattern in my filter: in these logs the timestamp is wrapped in square brackets (`[2017-05-18 ...]`), but my patterns anchored on a bare `%{TIMESTAMP_ISO8601}` at the start of the line. That pattern never matched, so with `negate => true` every line was treated as a continuation (one giant event), and grok failed on it.
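The effect is easy to reproduce with a quick regex check (a Python sketch; `TS` below is a simplified stand-in for grok's `TIMESTAMP_ISO8601`, which in reality is more permissive about separators and time zones):

```python
import re

# Simplified stand-in for grok's TIMESTAMP_ISO8601 pattern (an assumption,
# not the exact grok definition).
TS = r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}"

log_line   = "[2017-05-18 00:00:05,871][INFO ][cluster.metadata] ..."
stack_line = "        at java.lang.Thread.run(Thread.java:745)"

old_pattern = re.compile(r"^" + TS)            # original: no brackets
new_pattern = re.compile(r"^\[" + TS + r"\]")  # fixed: timestamp is bracketed

# The original pattern never matches a real log line, so with negate => true
# every line looks like a continuation and the whole file becomes one event.
print(old_pattern.match(log_line))          # None
print(bool(new_pattern.match(log_line)))    # True  -> starts a new event
print(bool(new_pattern.match(stack_line)))  # False -> folded into previous
```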

I used the following configuration:

    input {
      file {
        path => "D:\logstash\logstash-2.4.0\bin\errors.txt"
        start_position => "beginning"
        codec => multiline {
          pattern => "^\[%{TIMESTAMP_ISO8601:TIMESTAMP}\]"
          negate => true
          what => "previous"
        }
      }
    }

    filter {
      grok {
        match => [ "message", "(?m)^\[%{TIMESTAMP_ISO8601:TIMESTAMP}\]\[%{LOGLEVEL:LEVEL}%{SPACE}\]\[%{DATA:ERRORTYPE}\]%{SPACE}\[%{DATA:SERVERNAME}\]%{SPACE}\[%{DATA:INDEX-NAME}\]%{SPACE}(?<ERRORMESSAGE>(.|\r|\n)*)" ]
      }
    }

    output {
      stdout { codec => rubydebug }
    }
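Roughly the same extraction can be sanity-checked outside Logstash with Python named groups (an approximation only: grok's `DATA` is rendered as a non-greedy `.*?`, `LOGLEVEL` as `[A-Z]+`, the timestamp is simplified, and the `INDEX-NAME` field becomes `INDEX_NAME` because Python group names cannot contain hyphens):

```python
import re

# Rough Python equivalent of the grok pattern in the filter above.
pattern = re.compile(
    r"^\[(?P<TIMESTAMP>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\]"
    r"\[(?P<LEVEL>[A-Z]+)\s*\]"
    r"\[(?P<ERRORTYPE>.*?)\]\s*"
    r"\[(?P<SERVERNAME>.*?)\]\s*"
    r"\[(?P<INDEX_NAME>.*?)\]\s*"
    r"(?P<ERRORMESSAGE>(?:.|\r|\n)*)"   # like (?m)/GREEDYDATA: spans newlines
)

# One multiline event, as assembled by the multiline codec.
event = ("[2017-05-18 00:00:06,339][DEBUG][action.admin.indices.create] "
         "[esndata-2] [data-may-2017] failed to create\n"
         "        at org.elasticsearch.cluster.metadata."
         "MetaDataCreateIndexService.validateIndexName"
         "(MetaDataCreateIndexService.java:142)")

m = pattern.match(event)
print(m.group("LEVEL"))         # DEBUG
print(m.group("SERVERNAME"))    # esndata-2
print(m.group("ERRORMESSAGE"))  # message plus the stack-trace lines
```

With this config each bracketed-timestamp line starts a new event, and the entire stack trace lands in the `ERRORMESSAGE` field of the event it belongs to.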