Extracting keywords (multi-word) from text using Elasticsearch

Date: 2015-11-07 09:05:53

Tags: elasticsearch

I have an index full of keywords, and based on those keywords I want to extract keywords from an input text.

Below is a sample of the keyword index. Note that a keyword can also consist of multiple words; essentially they are unique tokens.

{
  "hits": {
    "total": 2000,
    "hits": [
      {
        "id": 1,
        "keyword": "thousand eyes"
      },
      {
        "id": 2,
        "keyword": "facebook"
      },
      {
        "id": 3,
        "keyword": "superdoc"
      },
      {
        "id": 4,
        "keyword": "quora"
      },
      {
        "id": 5,
        "keyword": "your story"
      },
      {
        "id": 6,
        "keyword": "Surgery"
      },
      {
        "id": 7,
        "keyword": "lending club"
      },
      {
        "id": 8,
        "keyword": "ad roll"
      },
      {
        "id": 9,
        "keyword": "the honest company"
      },
      {
        "id": 10,
        "keyword": "Draft kings"
      }
    ]
  }
}

Now, if my input text is "I saw the news of lending club on facebook, your story and quora", the output of the search should be ["lending club", "facebook", "your story", "quora"]. The search should also be case-insensitive.

1 answer:

Answer 0 (score: 7)

There is one way to do this: you have to index your data as keywords and search using shingles:

See this reproduction:

First, we'll create two custom analyzers: keyword and shingles:

PUT test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer_keyword": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [
            "asciifolding",
            "lowercase"
          ]
        },
        "my_analyzer_shingle": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "asciifolding",
            "lowercase",
            "shingle"
          ]
        }
      }
    }
  },
  "mappings": {
    "your_type": {
      "properties": {
        "keyword": {
          "type": "string",
          "index_analyzer": "my_analyzer_keyword",
          "search_analyzer": "my_analyzer_shingle"
        }
      }
    }
  }
}
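
To sanity-check the keyword analyzer, you can run it against one of the sample values with the _analyze API (the query-parameter form shown here is the 1.x style that matches this mapping; newer versions take the parameters in a JSON body instead). It should emit the whole string as a single lowercased token, draft kings:

GET /test/_analyze?analyzer=my_analyzer_keyword&text=Draft kings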

Now let's create some sample data with what you provided:

POST /test/your_type/1
{
  "id": 1,
  "keyword": "thousand eyes"
}
POST /test/your_type/2
{
  "id": 2,
  "keyword": "facebook"
}
POST /test/your_type/3
{
  "id": 3,
  "keyword": "superdoc"
}
POST /test/your_type/4
{
  "id": 4,
  "keyword": "quora"
}
POST /test/your_type/5
{
  "id": 5,
  "keyword": "your story"
}
POST /test/your_type/6
{
  "id": 6,
  "keyword": "Surgery"
}
POST /test/your_type/7
{
  "id": 7,
  "keyword": "lending club"
}
POST /test/your_type/8
{
  "id": 8,
  "keyword": "ad roll"
}
POST /test/your_type/9
{
  "id": 9,
  "keyword": "the honest company"
}
POST /test/your_type/10
{
  "id": 10,
  "keyword": "Draft kings"
}
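
If you run the search right after indexing, keep in mind that documents only become searchable after a refresh (which happens automatically about once per second); you can force one explicitly:

POST /test/_refresh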

And finally the query to run the search:

POST /test/your_type/_search
{
  "query": {
    "match": {
      "keyword": "I saw the news of lending club on facebook, your story and quora"
    }
  }
}

And these are the results:

{
  "took": 6,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 4,
    "max_score": 0.009332742,
    "hits": [
      {
        "_index": "test",
        "_type": "your_type",
        "_id": "2",
        "_score": 0.009332742,
        "_source": {
          "id": 2,
          "keyword": "facebook"
        }
      },
      {
        "_index": "test",
        "_type": "your_type",
        "_id": "7",
        "_score": 0.009332742,
        "_source": {
          "id": 7,
          "keyword": "lending club"
        }
      },
      {
        "_index": "test",
        "_type": "your_type",
        "_id": "4",
        "_score": 0.009207102,
        "_source": {
          "id": 4,
          "keyword": "quora"
        }
      },
      {
        "_index": "test",
        "_type": "your_type",
        "_id": "5",
        "_score": 0.0014755741,
        "_source": {
          "id": 5,
          "keyword": "your story"
        }
      }
    ]
  }
}
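
Note that _search returns only 10 hits by default, so with a large keyword index you may want to raise size to make sure every matching keyword comes back, for example:

POST /test/your_type/_search
{
  "size": 100,
  "query": {
    "match": {
      "keyword": "I saw the news of lending club on facebook, your story and quora"
    }
  }
}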

So what does it do behind the scenes?

  1. It indexes your documents as whole keywords (it emits the whole string as a single token). I've also added the asciifolding filter so it normalizes letters (i.e. é becomes e) and the lowercase filter (case-insensitive search). So, for example, Draft kings is indexed as draft kings.
  2. Now the search analyzer uses the same logic, except that its tokenizer emits word tokens and builds shingles (combinations of tokens) from them, which will match the keywords you indexed in the first step (see the _analyze sketch below).
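
As a rough sketch of step 2, you can ask the shingle analyzer to tokenize a fragment of the input sentence (again using the 1.x query-parameter form of _analyze):

GET /test/_analyze?analyzer=my_analyzer_shingle&text=news of lending club

Among the emitted tokens you will find lending club, which is exactly the single token that document 7 was indexed under, so the match query finds it.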