Distributed tracing and Elastic Stack visualization

Date: 2019-05-31 00:23:26

Tags: spring-boot spring-cloud elastic-stack zipkin

2019-05-31 05:31:42.667 DEBUG [currency-conversion,62132b44a444425e,62132b44a444425e,true] 35973 --- [nio-9090-exec-1] o.s.web.servlet.DispatcherServlet        : GET "/convert/4/to/5", parameters={}

This is the log format in my console. I am using Spring Cloud Stream to ship the logs from the application to Logstash. This is the parsing configuration for the logs in Logstash:

grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}\s+---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
}
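To sanity-check a pattern like this without a running Logstash, the grok expression can be approximated with an ordinary regular expression. The sketch below is not part of the original pipeline; the named groups are my own stand-ins for the grok field names (grok's DATA/GREEDYDATA are looser than the character classes used here), applied to the sample log line from the question:

```python
import re

# Rough regex equivalent of the grok pattern above (an approximation,
# not the exact semantics of DATA/GREEDYDATA).
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<severity>[A-Z]+)\s+"
    r"\[(?P<service>[^,]*),(?P<trace>[^,]*),(?P<span>[^,]*),(?P<exportable>[^\]]*)\]\s+"
    r"(?P<pid>\d+)\s+---\s+"
    r"\[(?P<thread>[^\]]*)\]\s+"
    r"(?P<cls>\S+)\s+:\s+"
    r"(?P<rest>.*)"
)

line = ('2019-05-31 05:31:42.667 DEBUG '
        '[currency-conversion,62132b44a444425e,62132b44a444425e,true] '
        '35973 --- [nio-9090-exec-1] '
        'o.s.web.servlet.DispatcherServlet        : '
        'GET "/convert/4/to/5", parameters={}')

m = LOG_PATTERN.match(line)
fields = m.groupdict()  # severity, trace, span, thread, etc. are all captured
```

Running this against the console log line extracts every field, which suggests the pattern itself is not the problem.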

The output does not match what my pattern should produce. My output is:

[
  {
    "traceId": "62132b44a444425e",
    "id": "62132b44a444425e",
    "kind": "SERVER",
    "name": "get /convert/{from}/to/{to}",
    "timestamp": 1559260902653718,
    "duration": 148977,
    "localEndpoint": {
      "serviceName": "currency-conversion",
      "ipv4": "192.168.xx.xxx"
    },
    "remoteEndpoint": {
      "ipv6": "::1",
      "port": 55394
    },
    "tags": {
      "http.method": "GET",
      "http.path": "/convert/4/to/5",
      "mvc.controller.class": "Controller",
      "mvc.controller.method": "convert"
    }
  }
]

I can see that fields such as severity and thread name are missing. I also tried the Kafka console consumer on the zipkin topic and got the same output, so why does the data sent for tracing differ from my application logs? What am I doing wrong? I want the logs to show up in Kibana in the parsed format for visualization.

1 Answer:

Answer 0 (score: 0)

When I use the Grok debugger built into Kibana (under Dev Tools), your sample log line and grok pattern give me the following result:

{
  "severity": "DEBUG",
  "rest": "GET \"/convert/4/to/5\", parameters={}",
  "pid": "35973",
  "thread": "nio-9090-exec-1",
  "trace": "62132b44a444425e",
  "exportable": "true",
  "service": "currency-conversion",
  "class": "o.s.web.servlet.DispatcherServlet",
  "timestamp": "2019-05-31 05:31:42.667",
  "span": "62132b44a444425e"
}

That looks correct to me. So what is the missing piece?

Also, the output you show contains "ipv4":"192.168.xx.xxx"},"remoteEndpoint": {"ipv6":"::1","port":55394},"tags": ..., which does not appear anywhere in the sample log line. Where is that coming from?
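The payload the questioner sees is a Zipkin v2 span, not a console log line, which would explain why the grok pattern's fields never appear. A small illustrative sketch (using a trimmed version of the span JSON from the question; the endpoint details are omitted here) shows that the span simply does not carry those fields:

```python
import json

# Trimmed Zipkin v2 span JSON, as seen on the "zipkin" Kafka topic
# in the question (endpoint/IP details omitted for brevity).
zipkin_payload = '''
[{"traceId":"62132b44a444425e","id":"62132b44a444425e",
  "kind":"SERVER","name":"get /convert/{from}/to/{to}",
  "timestamp":1559260902653718,"duration":148977,
  "localEndpoint":{"serviceName":"currency-conversion"},
  "tags":{"http.method":"GET","http.path":"/convert/4/to/5"}}]
'''

span = json.loads(zipkin_payload)[0]

# A span describes trace timing and tags only; there is no severity,
# thread name, or logger class for a log-oriented grok pattern to match.
print(sorted(span.keys()))
```

In other words, the grok filter is fine for the console log format, but if the Logstash input is consuming the Zipkin span topic, it is parsing trace data rather than application log lines.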