Getting an error in Kibana 4 when using JSON Input in a Visualization while attempting to customize a dashboard

Asked: 2015-03-06 20:00:18

Tags: kibana-4

I am new to the ELK stack and am implementing it with elasticsearch version 1.4.4, logstash version 1.4.2 and kibana version 4. I was able to pull a csv file into elasticsearch using logstash and display it in kibana.

When displaying dates from the file, the values are split apart as if the dashes they contain were delimiters (e.g., if the value in the field is 01-01-2015, then when it is displayed in kibana, regardless of the display type, there are three field entries: 01, 01 and 2015). The information Kibana gives is that this happens because it is an analyzed field.
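The splitting can be seen directly with the analyze API. A sketch of what it reports, assuming the default standard analyzer and the index name csvtest taken from the error below (full token metadata elided):

GET /csvtest/_analyze?text=01-01-2015

{
    "tokens": [
        { "token": "01",   ... },
        { "token": "01",   ... },
        { "token": "2015", ... }
    ]
}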

Kibana 4 has the ability to take JSON directly in the Visualization builder on the dashboard to change the field to not_analyzed, so that the whole string is used rather than being split apart.

I have tried multiple formats, but this one seems like it should work, since kibana recognizes it as valid syntax:

{ "index" : "not_analyzed" }

But when attempting to apply the change, the dashboard does not change its structure, and kibana produces the following exception:

Visualize: Request to Elasticsearch failed: {"error":"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][0]: SearchParseException[[csvtest][0]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][0]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a VALUE_STRING in [2]: [index].]]; }{[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][1]: SearchParseException[[csvtest][1]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][1]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a VALUE_STRING in [2]: [index].]]; }{[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][2]: SearchParseException[[csvtest][2]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][2]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a VALUE_STRING in [2]: [index].]]; }{[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][3]: SearchParseException[[csvtest][3]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][3]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a VALUE_STRING in [2]: [index].]]; }{[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][4]: SearchParseException[[csvtest][4]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][4]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a VALUE_STRING in [2]: [index].]]; }]"}

It can be seen where index: analyzed was changed to not_analyzed; the analyze_wildcard: true setting was likewise changed to false, and using the advanced object configuration gave the same result.
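The relevant part of the source shown in the exception, reformatted for readability, makes clear where the JSON Input ends up: it is merged straight into the terms aggregation, and the [index] the parser complains about is this injected key (index is a mapping setting, not an aggregation option):

"aggs": {
    "2": {
        "terms": {
            "field": "Conn Dt",
            "size": 100,
            "order": { "1": "desc" },
            "index": "not_analyzed"
        },
        "aggs": {
            "1": { "cardinality": { "field": "Area Cd" } }
        }
    }
}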

2 Answers:

Answer 0 (score: 1)

Try an index mapping that makes the date field not_analyzed.

For example:

"<index name>": {
      "mappings": {
         "<Mapping type>": {
            "properties": {
               "City": {
                  "type": "string",
                  "index": "not_analyzed"
               },
               "Date": {
                  "type": "string",
                  "index": "not_analyzed"
               }
           }
        }
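Note that the index setting of an existing field cannot be changed in place, so the mapping has to be supplied when the index is created (or the data reindexed into a new index). A minimal sketch, assuming the index from the error is recreated as csvtest with a placeholder mapping type logs and the field name taken from the error above, sent as the body of PUT /csvtest:

{
    "mappings": {
        "logs": {
            "properties": {
                "Conn Dt": {
                    "type": "string",
                    "index": "not_analyzed"
                }
            }
        }
    }
}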

Answer 1 (score: 0)

I ran into a similar problem today, with the following message:

Parse Failure [Unknown key for a VALUE_STRING in [logTime]: [offset].]]; }]

I was sending a date histogram aggregation request to Elasticsearch 1.4.5 with the following payload:

// 'body' is the search request body being assembled; its initialization is
// not shown in the original, so the line below is an assumption.
var body = { aggregations: {} };

['logTime'].forEach(function (field) {
    body.aggregations[field] = {
        date_histogram: {
            field: field,
            interval: 'week',
            time_zone: '+00:00',
            offset: '15h',          // only understood by Elasticsearch >= 1.5.0
            min_doc_count: 0,
            extended_bounds: {
                min: 1440946800000,
                max: 1441551599999
            }
        }
    };
});

Note the offset parameter used with date_histogram. This parameter was only introduced in Elasticsearch 1.5.0, so my 1.4.5 ES complained that the offset key was Unknown.

Replacing it with post_offset as follows solved the problem, though I also had to adjust the value of the time_zone parameter. As a side note, post_offset has been deprecated since v1.5 and replaced by offset.

['logTime'].forEach(function (field) {
    body.aggregations[field] = {
        date_histogram: {
            field: field,
            interval: 'week',
            time_zone: '+09:00',    // adjusted to compensate for the shift
            post_offset: '-9h',     // pre-1.5.0 spelling of the bucket shift
            min_doc_count: 0,
            extended_bounds: {
                min: 1440946800000,
                max: 1441551599999
            }
        }
    };
});
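If the same client code has to run against servers on both sides of the 1.5.0 boundary, one option is to choose the key based on the reported server version. This is only a sketch under that assumption; the helper name is made up, and as shown above, the shift value and time_zone may also need adjusting between the two spellings:

// Hypothetical helper: pick the histogram-shift key by server version.
// 'offset' exists from Elasticsearch 1.5.0 on; older releases only
// accept the (since-deprecated) 'post_offset'.
function bucketShift(esVersion, shift) {
    var parts = esVersion.split('.').map(Number);
    var hasOffset = parts[0] > 1 || (parts[0] === 1 && parts[1] >= 5);
    return hasOffset ? { offset: shift } : { post_offset: shift };
}

// Example: bucketShift('1.4.5', '-9h') -> { post_offset: '-9h' }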