I'm new to ElasticSearch and have started working with ElasticSearch 1.7.3 as part of a Logstash-ElasticSearch-Kibana deployment.
I've defined a mapping template for my log messages; this is the interesting part:
{
  "template" : "logstash-*",
  "settings" : { "index.refresh_interval" : "5s" },
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : true, "omit_norms" : true },
      "dynamic_templates" : [ {
        "date_fields" : {
          "match" : "*",
          "match_mapping_type" : "date",
          "mapping" : { "type" : "date", "doc_values" : true }
        }
      } ],
      "properties" : {
        "@version" : { "type" : "string", "index" : "not_analyzed" },
        "@timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
        "message" : { "type" : "string" }
      }
    },
    "my_log" : {
      "_all" : { "enabled" : true, "omit_norms" : true },
      "dynamic_templates" : [ {
        "date_fields" : {
          "match" : "*",
          "match_mapping_type" : "date",
          "mapping" : { "type" : "date", "doc_values" : true }
        }
      } ],
      "properties" : {
        "@timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
        "file" : { "type" : "string" },
        "message" : { "type" : "string" },
        "geolocation" : { "type" : "string" }
      }
    }
  }
}
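For completeness, I install the template with the _template API, roughly like this (a sketch; the template name logstash, the host, and the file name are assumptions, not exact values from my setup):

# Register the mapping template so it applies to newly created logstash-* indices
curl -XPUT 'http://localhost:9200/_template/logstash' -d @logstash-template.json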
Although the @timestamp field is defined with "doc_values" : true,
I'm getting a memory exception because it is loaded into fielddata:

[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [633785548/604.4mb]
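For reference, the per-field fielddata usage can be inspected with the cat API (a sketch; the host is an assumption):

# Show per-node fielddata memory currently held for the @timestamp field
curl -XGET 'http://localhost:9200/_cat/fielddata?v&fields=@timestamp'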
Note:
I know I can increase the heap or add more nodes to the cluster, but to my mind this is a design issue: this field should not be indexed in memory.
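One more thing worth checking, since templates only apply to indices created after the template is installed: whether the live mapping of an existing index actually picked up doc_values. A minimal sketch (the daily index name and host are hypothetical):

# Verify that an existing daily index actually carries doc_values for @timestamp
curl -XGET 'http://localhost:9200/logstash-2015.11.01/_mapping?pretty'

# Stopgap: evict loaded fielddata without restarting the node
curl -XPOST 'http://localhost:9200/logstash-*/_cache/clear?fielddata=true'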