How to view logging in Kibana

Time: 2018-05-18 13:32:47

Tags: elasticsearch logstash kibana elastic-stack

I am new to ELK. I tried out the ELK stack with a Spring Boot application, using net.logstash.logback.appender.LogstashTcpSocketAppender to send JSON messages to Logstash. My configuration is below.

logback-spring.xml

<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml" />
    <springProperty scope="context" name="springAppName" source="spring.application.name" />

    <property name="LOG_FILE" value="./${springAppName}" />


    <property name="CONSOLE_LOG_PATTERN"
        value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />


    <appender name="logstash2"
        class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5000</destination>
        <encoder
            class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "parent": "%X{X-B3-ParentSpanId:-}",
                        "exportable":
                        "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <keepAliveDuration>5 minutes</keepAliveDuration>
    </appender>

    <root level="INFO">
        <appender-ref ref="logstash2" />
    </root>
</configuration>
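With this encoder, every log event goes to Logstash as one newline-delimited JSON object, roughly like this (field values are illustrative):

{"@timestamp":"2018-05-18T13:32:47.000Z","severity":"INFO","service":"myapp","trace":"","span":"","parent":"","exportable":"","pid":"1234","thread":"main","class":"com.example.MyService","rest":"user created"}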

config.json

input {
    tcp {
        port => 5000
        host => "localhost"
    }
}

filter {
    # pattern matching the logback console pattern
    grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}\s+---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
    }
}

output {
    elasticsearch { hosts => ["localhost:9200"] }
}
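To rule out syntax problems, the pipeline file can be checked before starting Logstash (run from the Logstash installation directory):

bin/logstash -f config.json --config.test_and_exit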

But when I open Kibana to view the messages, the whole log line shows up as a single message field, like this:

[screenshot: Kibana Discover, the entire log line in one message field]

Can someone help me get the following output instead?

[screenshot: desired result, the log parsed into separate fields]

1 Answer:

Answer 0 (score: 0)

Your filter block should look something like this. The appender already sends JSON, so parse the message as JSON instead of grok-matching a plain-text pattern:

filter {
    json {
        source => "message"
    }
}
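Equivalently, you can drop the filter and decode on the input side, which is the usual setup with LogstashTcpSocketAppender:

input {
    tcp {
        port => 5000
        host => "localhost"
        # each newline-delimited JSON line becomes one parsed event
        codec => json_lines
    }
}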

I don't understand why you are not naming the index in the output block; you will run into problems as soon as you have more than one index. Add something like:

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        # example name; the date suffix creates one index per day
        index => "app-logs-%{+YYYY.MM.dd}"
    }
}
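While testing, a stdout output alongside elasticsearch shows exactly what gets indexed (a sketch; rubydebug prints every parsed field to the console):

output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "app-logs-%{+YYYY.MM.dd}"
    }
}

With a date-based index, create a matching index pattern in Kibana (app-logs-* here) and the JSON fields (severity, service, trace, span, and so on) show up as separate, searchable fields.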