Unable to create elasticsearch mappings for an index created by the logstash JDBC input plugin

Time: 2018-06-11 13:54:32

Tags: java elasticsearch logstash kibana elastic-stack

I am trying to create mappings for an elasticsearch index. When I create the index with the query below, I am able to apply the mappings.

Please find the query for creating the index in elasticsearch:

PUT /documents_test8
{
   "settings" : {
      "analysis" : {
         "analyzer" : {
            "filename_search" : {
               "tokenizer" : "filename",
               "filter" : ["lowercase"]
            },
            "filename_index" : {
               "tokenizer" : "filename",
               "filter" : ["lowercase","edge_ngram"]
            }
         },
         "tokenizer" : {
            "filename" : {
               "pattern" : "[^\\p{L}\\d]+",
               "type" : "pattern"
            }
         },
         "filter" : {
            "edge_ngram" : {
               "side" : "front",
               "max_gram" : 20,
               "min_gram" : 1,
               "type" : "edgeNGram"
            }
         }
      }
   },
   "mappings" : {
      "doc" : {
         "properties" : {
            "filename" : {
               "type" : "text",
               "search_analyzer" : "filename_search",
               "index_analyzer" : "filename_index"
            }
         }
      }
   }
}

For the index created above, I can index a document with the request below and then search it successfully:

PUT index/profile/1
{
   "firstname" : "Karthik",
   "lastname" : "AS",
   "address" : "4/167, SouthExtn, shanmuga nagar, NA",
   "Skill" : "Java, JEE, ReactJS, ActiveMQ, ElasticSearch",
   "filename" : "My_second_file_created_at_2012.01.13.pdf"
}

Please find the mappings details below:

[mappings — screenshot not captured]
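For illustration, a search that exercises the filename analyzers above might look like the sketch below (the query term is an assumption; any word or edge of a word from the filename should match, since the index analyzer emits lowercased edge n-grams):

GET documents_test8/_search
{
   "query" : {
      "match" : {
         "filename" : "second"
      }
   }
}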

But in the real scenario, I create the index in elasticsearch through the logstash JDBC input plugin. The index does get created, but as soon as logstash creates it, default mappings are also generated for that index (for all fields). After that I am unable to apply my mappings; running the create-index query above fails with:

index [documents_test9/P07B6_6mRqmH9IP-UaCjrw] already exists

If I try to delete the index and apply the mappings again, I get a mapping error:

Failed to parse mapping [doc]: No handler for type [string] declared on field [filename]

Not sure how to apply mappings when the index is created through the logstash JDBC input plugin.
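As a first diagnostic step, the default mappings that logstash generated can be inspected before deleting anything; a minimal check, using the index name taken from the error message above:

GET documents_test9/_mapping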

1 Answer:

Answer 0 (score: 1)

If I understand the question correctly, you can use an index template with a wildcard, so that any new index whose name matches the wildcard will use the given template by default.

With the template below, any index you create whose name matches documents* (for example documents1, documents_test8, and so on) will get the given settings and mappings applied by default.

PUT _template/documents
{
  "template": "documents*",
   "settings" : {
      "analysis" : {
         "analyzer" : {
            "filename_search" : {
               "tokenizer" : "filename",
               "filter" : ["lowercase"]
            },
            "filename_index" : {
               "tokenizer" : "filename",
               "filter" : ["lowercase","edge_ngram"]
            }
         },
         "tokenizer" : {
            "filename" : {
               "pattern" : "[^\\p{L}\\d]+",
               "type" : "pattern"
            }
         },
         "filter" : {
            "edge_ngram" : {
               "side" : "front",
               "max_gram" : 20,
               "min_gram" : 1,
               "type" : "edgeNGram"
            }
         }
      }
   },
   "mappings" : {
      "doc" : {
         "properties" : {
            "filename" : {
               "type" : "text",
               "search_analyzer" : "filename_search",
               "index_analyzer" : "filename_index"
            }
         }
      }
   }
}
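For the template to apply, the index that logstash writes to must match the documents* pattern. A minimal sketch of the corresponding elasticsearch output section in the logstash pipeline (the host and index name are assumptions; adjust them to your setup):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # index name matches the "documents*" template pattern
    index => "documents_test9"
    # keep logstash from installing its own default template
    manage_template => false
  }
}

Note that index templates only affect indices created after the template is in place, so an index that logstash has already created (such as documents_test9 above) has to be deleted or reindexed before the new settings and mappings take effect.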