How do I build an elasticsearch query that filters on the maximum value of a field?

Asked: 2015-07-20 01:10:04

Tags: elasticsearch

I want to be able to query on text, but also retrieve only the results that have the maximum value of a certain integer field in the data. I've read the documentation on aggregations and filters, but I'm not sure what I'm looking for.

For example, I have some duplicate data indexed that is identical except for one integer field; let's call this field lastseen.

So, as an example, put this data into elasticsearch:

  # these two are the same except for the "lastseen" field
  curl -XPOST localhost:9200/myindex/myobject -d '{
    "field1": "dinner carrot potato broccoli",
    "field2": "something here",
    "lastseen": 1000
  }'

  curl -XPOST localhost:9200/myindex/myobject -d '{
    "field1": "dinner carrot potato broccoli",
    "field2": "something here",
    "lastseen": 100
  }'

  # and these two are the same except for the "lastseen" field
  curl -XPOST localhost:9200/myindex/myobject -d '{
    "field1": "fish chicken something",
    "field2": "dinner",
    "lastseen": 2000
  }'

  curl -XPOST localhost:9200/myindex/myobject -d '{
    "field1": "fish chicken something",
    "field2": "dinner",
    "lastseen": 200
  }'

If I query for "dinner":

  curl -XPOST localhost:9200/myindex/_search -d '{
    "query": {
      "query_string": {
        "query": "dinner"
      }
    }
  }'

I get 4 results back. I'd like a filter so that I get only two results: just the items with the maximum lastseen field.

This is obviously not right, but hopefully it gives you an idea of what I'm after:

{
    "query": {
        "query_string": {
            "query": "dinner"
        }
    },
    "filter": {
        "max": "lastseen"
    }
}

The results would look something like:

"hits": [
      {
        ...
        "_source": {
          "field1": "dinner carrot potato broccoli",
          "field2": "something here",
          "lastseen": 1000
        }
      },
      {
        ...
        "_source": {
          "field1": "fish chicken something",
          "field2": "dinner",
          "lastseen": 2000
        }
      } 
   ]

Update 1: I tried creating a mapping that excluded lastseen from being indexed. That didn't work; I still get all 4 results.

curl -XPOST localhost:9200/myindex -d '{  
    "mappings": {
      "myobject": {
        "properties": {
          "lastseen": {
            "type": "long",
            "store": "yes",
            "include_in_all": false
          }
        }
      }
    }
}'

Update 2: I tried the aggregation approach listed here to de-duplicate. It didn't work, but more importantly, I couldn't see any way to combine it with a keyword search.

1 Answer:

Answer 0 (score: 4)

Not ideal, but I think it will do what you need.

Change the mapping of the field1 field, assuming this is the field you use to decide that documents are "duplicates", like this:

PUT /lastseen
{
  "mappings": {
    "test": {
      "properties": {
        "field1": {
          "type": "string",
          "fields": {
            "raw": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        },
        "field2": {
          "type": "string"
        },
        "lastseen": {
          "type": "long"
        }
      }
    }
  }
}

Meaning, you add a .raw sub-field that is not_analyzed, so it gets indexed exactly as it is, without being analyzed and split into terms. This is what makes it possible to detect which documents are duplicates.
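As a quick illustration (a sketch, assuming the mapping above and the sample documents from the question), a term query against the raw sub-field only matches the complete, unanalyzed value:

```json
GET /lastseen/test/_search
{
  "query": {
    "term": {
      "field1.raw": "dinner carrot potato broccoli"
    }
  }
}
```

The same term query on field1.raw with just "dinner" would match nothing, because the raw value is the entire original string; that exact-match behavior is what lets a terms aggregation group duplicates together.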

Then, you use a terms aggregation on field1.raw (to group the duplicates) with a top_hits sub-aggregation, so that you get a single document for each field1 value:

GET /lastseen/test/_search
{
  "size": 0,
  "query": {
    "query_string": {
      "query": "dinner"
    }
  },
  "aggs": {
    "field1_unique": {
      "terms": {
        "field": "field1.raw",
        "size": 2
      },
      "aggs": {
        "first_one": {
          "top_hits": {
            "size": 1,
            "sort": [{"lastseen": {"order":"desc"}}]
          }
        }
      }
    }
  }
}

Also, the single document that top_hits returns is the one with the highest lastseen value (produced by "sort": [{"lastseen": {"order":"desc"}}]).
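If you also want the buckets themselves ordered by recency rather than by document count, a max sub-aggregation can drive the terms ordering (a sketch along the same lines, not part of the original answer):

```json
GET /lastseen/test/_search
{
  "size": 0,
  "query": {
    "query_string": {
      "query": "dinner"
    }
  },
  "aggs": {
    "field1_unique": {
      "terms": {
        "field": "field1.raw",
        "size": 2,
        "order": { "max_lastseen": "desc" }
      },
      "aggs": {
        "max_lastseen": {
          "max": { "field": "lastseen" }
        },
        "first_one": {
          "top_hits": {
            "size": 1,
            "sort": [{ "lastseen": { "order": "desc" } }]
          }
        }
      }
    }
  }
}
```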

The results you get back look like this (under aggregations, not hits):

   ...
   "aggregations": {
      "field1_unique": {
         "doc_count_error_upper_bound": 0,
         "sum_other_doc_count": 0,
         "buckets": [
            {
               "key": "dinner carrot potato broccoli",
               "doc_count": 2,
               "first_one": {
                  "hits": {
                     "total": 2,
                     "max_score": null,
                     "hits": [
                        {
                           "_index": "lastseen",
                           "_type": "test",
                           "_id": "AU60ZObtjKWeJgeyudI-",
                           "_score": null,
                           "_source": {
                              "field1": "dinner carrot potato broccoli",
                              "field2": "something here",
                              "lastseen": 1000
                           },
                           "sort": [
                              1000
                           ]
                        }
                     ]
                  }
               }
            },
            {
               "key": "fish chicken something",
               "doc_count": 2,
               "first_one": {
                  "hits": {
                     "total": 2,
                     "max_score": null,
                     "hits": [
                        {
                           "_index": "lastseen",
                           "_type": "test",
                           "_id": "AU60ZObtjKWeJgeyudJA",
                           "_score": null,
                           "_source": {
                              "field1": "fish chicken something",
                              "field2": "dinner",
                              "lastseen": 2000
                           },
                           "sort": [
                              2000
                           ]
                        }
                     ]
                  }
               }
            }
         ]
      }
   }
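One caveat: the "size": 2 on the terms aggregation fits the example data, which has exactly two distinct field1 values. With more distinct values you would need to raise it, for example (the 50 here is an assumed figure; pick one that covers your data):

```json
"terms": {
  "field": "field1.raw",
  "size": 50
}
```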