The icu_tokenizer in Elasticsearch seems to split a single word into several segments when it hits accented characters (e.g. Č), and it also returns strange numeric tokens. Example:
GET /_analyze?text=OBČERSTVENÍ&tokenizer=icu_tokenizer
returns
"tokens": [
{
"token": "OB",
"start_offset": 0,
"end_offset": 2,
"type": "<ALPHANUM>",
"position": 1
},
{
"token": "268",
"start_offset": 4,
"end_offset": 7,
"type": "<NUM>",
"position": 2
},
{
"token": "ERSTVEN",
"start_offset": 8,
"end_offset": 15,
"type": "<ALPHANUM>",
"position": 3
}
]
}
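For completeness, the same analysis can presumably also be requested with the text in a JSON request body instead of the URL query string (which avoids any URL encoding of the accented characters); something along these lines:

GET /_analyze
{
  "tokenizer": "icu_tokenizer",
  "text": "OBČERSTVENÍ"
}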
I don't know Czech, but a quick Google search suggests that OBČERSTVENÍ is a single word. Is there a way to configure Elasticsearch so that it handles Czech correctly?
I have already tried using the icu_normalizer as shown below, but it didn't help:
PUT /my_index_cz
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "char_filter": ["icu_normalizer"],
          "tokenizer": "icu_tokenizer"
        }
      }
    }
  }
}
GET /my_index_cz/_analyze?text=OBČERSTVENÍ&analyzer=my_analyzer
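For reference, a rough equivalent of that test with the text in the request body, plus a comparison against the built-in czech language analyzer (assuming that analyzer is what a Czech-specific setup would be compared to), would look something like:

GET /my_index_cz/_analyze
{
  "analyzer": "my_analyzer",
  "text": "OBČERSTVENÍ"
}

GET /my_index_cz/_analyze
{
  "analyzer": "czech",
  "text": "OBČERSTVENÍ"
}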