Timestamps not parsed correctly on ELK 5.4

Asked: 2017-05-29 08:36:08

Tags: elasticsearch logstash kibana elasticsearch-5 kibana-5

I'm using the latest ELK stack (5.4.0) to parse some Apache logs. With Elasticsearch 2.4.5 and Kibana 4.6.4 everything works fine.

Working version

The log line

apache 173.252.115.89 - - [29/May/2017:09:59:13 +0200] "GET /fr/fia/nodes.rss HTTP/1.1" 200 19384 "-" "facebookexternalhit/1.1" "-" 756752 "*/*" monsite.com

is indexed perfectly into Elasticsearch with the following grok configuration

grok {
  match => { "message" => "%{WORD:program} %{COMBINEDAPACHELOG} \"((?<x_forwarded_for>%{IP:xff_clientip}.*)|-)\" %{NUMBER:request_time:float} %{QUOTEDSTRING:accept} %{IPORHOST:targethost}" }
}
date {
  match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
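As a sanity check, the Apache timestamp in the sample line really does match the Joda pattern used by the date filter. A rough Python equivalent of that pattern (an approximation using strptime codes, not the actual Logstash/Joda parser):

```python
from datetime import datetime, timezone

# Rough strptime equivalent of the Joda pattern "dd/MMM/yyyy:HH:mm:ss Z"
ts = datetime.strptime("29/May/2017:09:59:13 +0200", "%d/%b/%Y:%H:%M:%S %z")

local_iso = ts.isoformat()                          # keeps the +02:00 offset
utc_iso = ts.astimezone(timezone.utc).isoformat()   # normalized to UTC
print(local_iso, utc_iso)
```

So the pattern itself is fine; the question is what happens to the field after parsing.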

together with the Kibana index-pattern configuration shown in the original screenshot.

The problem

With ELK 5.4 I have exactly the same messages (from a duplicated RabbitMQ queue), the same Logstash configuration and a 'fresh install', but I get:

elasticsearch log

[2017-05-29T10:17:58,498][DEBUG][o.e.a.b.TransportShardBulkAction] [srv-elk-01] [logstash-2017.05.29][1] failed to execute bulk item (index) BulkShardRequest [[logstash-2017.05.29][1]] containing [16] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [timestamp]
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:298) ~[elasticsearch-5.4.0.jar:5.4.0]
...
at org.elasticsearch.index.mapper.DateFieldMapper.parseCreateField(DateFieldMapper.java:468) ~[elasticsearch-5.4.0.jar:5.4.0]
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:287) ~[elasticsearch-5.4.0.jar:5.4.0]
        ... 40 more

logstash log

[2017-05-29T10:17:58,503][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.05.29", :_type=>"syslog", :_routing=>nil}, 2017-05-29T08:17:57.000Z 212.95.67.139 apache 212.95.70.118 - - [29/May/2017:10:17:57 +0200] "GET /de/tag/opera HTTP/1.1" 200 8948 "-" "TurnitinBot (https://turnitin.com/robot/crawlerinfo.html)" "199.47.87.143, 199.47.87.143" 784504 "text/*,application/*" monsite.com], :response=>{"index"=>{"_index"=>"logstash-2017.05.29", "_type"=>"syslog", "_id"=>"AVxTSHzL1K94bfQE3eaM", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [timestamp]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Invalid format: \"29/May/2017:10:17:57 +0200\" is malformed at \"/May/2017:10:17:57 +0200\""}}}}}

My Kibana conf is shown in the original screenshot.

2 answers:

Answer 0 (score: 1)

Recent versions of Kibana no longer use the timestamp field this way; it is deprecated. Use the "date" filter in your Logstash conf file. For more details, look here: timestamp

Answer 1 (score: 0)

The solution was to remove the timestamp field once the date filter has parsed it, as shown below

date {
  match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  # drop the raw Apache timestamp so the default Elasticsearch
  # date mapping never sees the "dd/MMM/yyyy" string
  remove_field => [ "timestamp" ]
}

Then everything works fine.