I have created the following index:
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "blocks": {
      "read_only_allow_delete": false,
      "read_only": false
    },
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 30
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "autocomplete_filter"
          ]
        }
      }
    }
  },
  "mappings": {
    "movie": {
      "properties": {
        "title": {
          "type": "text"
        },
        "actors": {
          "type": "nested",
          "include_in_all": true,
          "properties": {
            "name": {
              "type": "text",
              "analyzer": "autocomplete",
              "search_analyzer": "standard"
            },
            "age": {
              "type": "long",
              "index": "false"
            }
          }
        }
      }
    }
  }
}
I have inserted the following data via the _bulk endpoint:
{"index":{"_index":"movies","_type":"movie","_id":1}}
{"title":"Ocean's 11", "actors":[{"name":"Brad Pitt","age":54}, {"name":"George Clooney","age":56}, {"name":"Julia Roberts","age":50}, {"name":"Andy Garcia","age":61}]}
{"index":{"_index":"movies","_type":"movie","_id":2}}
{"title":"Usual suspects", "actors":[{"name":"Kevin Spacey","age":58}, {"name":"Benicio del Toro","age":50}]}
{"index":{"_index":"movies","_type":"movie","_id":3}}
{"title":"Fight club", "actors":[{"name":"Brad Pitt","age":54}, {"name":"Edward Norton","age":48}, {"name":"Helena Bonham Carter","age":51}, {"name":"Jared Leto","age":46}]}
{"index":{"_index":"movies","_type":"movie","_id":24}}
{"title":"Fight club", "actors":[{"name":"Brad Garrett","age":57}, {"name":"Ben Stiller","age":52}, {"name":"Robin Williams","age":63}]}
Now I want to search the index by actor name. For example, when I search for brad, every movie I get back has an actor named Brad, which is fine.
But when I search for rad p, I only want movies featuring Brad Pitt, not Brad Garrett, yet Brad Garrett is returned as well. This is my search query:
{
  "query": {
    "nested": {
      "path": "actors",
      "query": {
        "match": {
          "actors.name": {
            "query": "rad p",
            "analyzer": "standard"
          }
        }
      },
      "inner_hits": {}
    }
  }
}
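To see why Brad Garrett is matched, it helps to replay both sides of the analysis by hand. The following is a rough sketch (not the actual Elasticsearch analysis chain; whitespace splitting stands in for the standard tokenizer): the index side runs standard tokenizer → lowercase → 3–30 character ngram filter, while the query above uses only the standard analyzer, so "rad p" is split into "rad" and "p".

```python
# Sketch: simulate why "rad p" matches "Brad Garrett".
# Index side: split on whitespace (approximates the standard tokenizer),
# lowercase, then expand each token into 3..30-character ngrams.
# Query side: standard analyzer only (no ngram expansion).

def ngrams(token, lo=3, hi=30):
    """All substrings of `token` with length between lo and hi."""
    return {token[i:j]
            for i in range(len(token))
            for j in range(i + lo, min(i + hi, len(token)) + 1)}

def index_tokens(text):
    out = set()
    for tok in text.lower().split():   # tokenizer + lowercase filter
        out |= ngrams(tok)             # ngram token filter
    return out

query_tokens = "rad p".lower().split()   # ['rad', 'p'] -- split at the space!
garrett = index_tokens("Brad Garrett")

# 'rad' is a 3-gram of 'brad', and match has OR semantics by default,
# so one overlapping term is enough for a hit:
print([t for t in query_tokens if t in garrett])  # → ['rad']
```

Since the 3-gram "rad" is produced from "brad" at index time, any Brad matches the query, regardless of the surname.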
The endpoint I am calling is
/movies/movie/_search?pretty
My question is: how do I implement the above correctly?
Thanks.
BTW, the Elasticsearch version is 6.1.0.
Answer 0 (score: 0)
This is because the standard tokenizer splits the input into tokens on whitespace and punctuation, so Brad Pitt becomes brad and pitt, and you will therefore never have a rad p token.
What you need to do is change the tokenizer to (e.g.) keyword, so that the full input is treated as a single token to which the ngram filter can then be applied.
Or, even simpler, just use an ngram tokenizer instead of a token filter.
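The first suggestion can be sketched in the same simulated fashion (again, an approximation, not the real analysis chain): with a keyword tokenizer the whole lowercased input is one token, so the ngram filter is free to emit grams that span the space.

```python
# Sketch of the keyword-tokenizer approach: the entire (lowercased) input
# is a single token, so ngrams may cross the word boundary.

def ngrams(token, lo=3, hi=30):
    return {token[i:j]
            for i in range(len(token))
            for j in range(i + lo, min(i + hi, len(token)) + 1)}

pitt    = ngrams("Brad Pitt".lower())      # keyword tokenizer: one token
garrett = ngrams("Brad Garrett".lower())

print("rad p" in pitt)      # True  -- the gram spans the space
print("rad p" in garrett)   # False -- no such substring exists
```

With grams like "rad p" in the index, the query side must keep "rad p" intact as well (e.g. a keyword search analyzer) rather than splitting it with the standard analyzer.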
Answer 1 (score: 0)
As Val said, you have to use the nGram tokenizer to do this. I also had to change my search query to:
{
  "query": {
    "nested": {
      "path": "searchable",
      "query": {
        "bool": {
          "must": {
            "match": {
              "searchable.searchKeyword": {
                "query": "%1$s"
              }
            }
          }
        }
      },
      "inner_hits": {}
    }
  }
}
My new index settings with the nGram tokenizer:
{
  "number_of_shards": 1,
  "number_of_replicas": 0,
  "blocks": {
    "read_only_allow_delete": false,
    "read_only": false
  },
  "analysis": {
    "analyzer": {
      "autocomplete": {
        "tokenizer": "search_tokenizer",
        "filter": [
          "lowercase",
          "asciifolding"
        ]
      }
    },
    "tokenizer": {
      "search_tokenizer": {
        "type": "ngram",
        "token_chars": [
          "letter",
          "digit",
          "whitespace",
          "punctuation",
          "symbol"
        ],
        "min_gram": 3,
        "max_gram": 30
      }
    }
  }
}
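The key point in this config is that "whitespace" is listed in token_chars, so the ngram tokenizer does not break the input at spaces and emits grams straight from the full string. A rough Python sketch of that behavior (an approximation of the tokenizer, not Elasticsearch itself):

```python
# Sketch of the ngram tokenizer above: with "whitespace" in token_chars
# the input is not split at spaces, so grams like "rad p" are emitted
# directly (after lowercasing).

def ngram_tokenize(text, lo=3, hi=30):
    text = text.lower()                     # lowercase filter
    return {text[i:j]
            for i in range(len(text))
            for j in range(i + lo, min(i + hi, len(text)) + 1)}

tokens = ngram_tokenize("Brad Pitt")
print("rad p" in tokens)                          # gram spans the space
print(sorted(t for t in tokens if t.startswith("rad")))
```

Since "rad p" now exists as an indexed token for "Brad Pitt" but not for "Brad Garrett", the match query above returns only the intended documents.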