In short: if I have a top_hits aggregation per store bucket, how do I sum a specific value across the resulting structures?
Details:
I have a number of records, several per store. I want the sum of the latest record of every store.
To get the latest record per store, I created the following aggregation:
"latest_quantity_per_store": {
"aggs": {
"latest_quantity": {
"top_hits": {
"sort": [
{
"datetime": "desc"
},
{
"quantity": "asc"
}
],
"_source": {
"includes": [
"quantity"
]
},
"size": 1
}
}
},
"terms": {
"field": "store",
"size": 10000
}
}
Suppose I have two stores, each with two quantities at two different timestamps. This is the result of that aggregation:
"latest_quantity_per_store": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "01",
"doc_count": 2,
"latest_quantity": {
"hits": {
"total": 2,
"max_score": null,
"hits": [
{
"_index": "inventory-local",
"_type": "doc",
"_id": "O6wFD2UBG8e7nvSU8dYg",
"_score": null,
"_source": {
"quantity": 6
},
"sort": [
1532476800000,
6
]
}
]
}
}
},
{
"key": "02",
"doc_count": 2,
"latest_quantity": {
"hits": {
"total": 2,
"max_score": null,
"hits": [
{
"_index": "inventory-local",
"_type": "doc",
"_id": "pLUFD2UBHBuSGcoH0ZT4",
"_score": null,
"_source": {
"quantity": 11
},
"sort": [
1532476800000,
11
]
}
]
}
}
}
]
}
I now want an aggregation in Elasticsearch that sums over these buckets. For the sample data, that would be the sum of 6 and 11. I tried the following aggregation:
"latest_quantity": {
"sum_bucket": {
"buckets_path": "latest_quantity_per_store>latest_quantity>hits>hits>_source>quantity"
}
}
But it results in this error:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "No aggregation [hits] found for path [latest_quantity_per_store>latest_quantity>hits>hits>_source>quantity]"
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": "inventory-local",
"node": "3z5CqmmAQ-yT2sUCb69DzA",
"reason": {
"type": "illegal_argument_exception",
"reason": "No aggregation [hits] found for path [latest_quantity_per_store>latest_quantity>hits>hits>_source>quantity]"
}
}
]
},
"status": 400
}
What is the correct aggregation to somehow get the number 17 out of Elasticsearch?
I did something similar for another aggregation, one that uses an avg instead of a top_hits aggregation:
"average_quantity": {
"sum_bucket": {
"buckets_path": "average_quantity_per_store>average_quantity"
}
},
"average_quantity_per_store": {
"aggs": {
"average_quantity": {
"avg": {
"field": "quantity"
}
}
},
"terms": {
"field": "store",
"size": 10000
}
}
That works as expected; this is the result:
"average_quantity_per_store": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "01",
"doc_count": 2,
"average_quantity": {
"value": 6
}
},
{
"key": "02",
"doc_count": 2,
"average_quantity": {
"value": 11.5
}
}
]
},
"average_quantity": {
"value": 17.5
}
Answer (score: 2)
There is a way to solve this with a combination of the scripted_metric aggregation and the sum_bucket pipeline aggregation. The scripted_metric aggregation is somewhat involved, but the main idea is that it lets you provide your own bucketing algorithm and emit a single numeric metric from it. A pipeline aggregation's buckets_path must resolve to a single numeric value, which a top_hits aggregation does not produce; that is why your attempt above fails.
In your case, what you want to do is figure out the latest quantity for each store and then sum those per-store quantities. The solution looks like this; I'll explain some of the details below:
POST inventory-local/_search
{
"size": 0,
"aggs": {
"bystore": {
"terms": {
"field": "store.keyword",
"size": 10000
},
"aggs": {
"latest_quantity": {
"scripted_metric": {
"init_script": "params._agg.quantities = new TreeMap()",
"map_script": "params._agg.quantities.put(doc.datetime.date, [doc.datetime.date.millis, doc.quantity.value])",
"combine_script": "return params._agg.quantities.lastEntry().getValue()",
"reduce_script": "def maxkey = 0; def qty = 0; for (a in params._aggs) {def currentKey = a[0]; if (currentKey > maxkey) {maxkey = currentKey; qty = a[1]} } return qty;"
}
}
}
},
"sum_latest_quantities": {
"sum_bucket": {
"buckets_path": "bystore>latest_quantity.value"
}
}
}
}
Note that for this to work, you need to set script.painless.regex.enabled: true in your elasticsearch.yml configuration file.
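For reference, that is a single line in each node's config file (a minimal sketch; this is a static node setting, so the node must be restarted for it to take effect):
# elasticsearch.yml
script.painless.regex.enabled: true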
The init_script creates a TreeMap on each shard.
The map_script fills the TreeMap on each shard with a mapping of dates to quantities. The value we put into the map is a pair containing the timestamp (in milliseconds) and the quantity; we need that timestamp later in the reduce_script.
The combine_script simply takes the last value of the TreeMap, since that is the latest quantity on the given shard.
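To make the per-shard flow concrete, here is a hypothetical trace for store 01 (the older entry with quantity 4 is invented for illustration; only the latest entry, quantity 6 at timestamp 1532476800000, appears in your sample output):
// TreeMap on one shard after the map phase, keys sorted ascending by date:
//   2018-07-24T00:00:00Z -> [1532390400000, 4]   // hypothetical older record
//   2018-07-25T00:00:00Z -> [1532476800000, 6]   // the latest record
// combine_script returns lastEntry().getValue(), i.e. [1532476800000, 6]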
Most of the work happens in the reduce_script. We iterate over the latest quantities coming from each shard and return the one with the most recent timestamp.
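For readability, here is the same reduce_script reformatted with comments (the logic is identical to the one-liner in the request above):
// params._aggs holds one [timestamp, quantity] pair per shard,
// i.e. the values returned by combine_script.
def maxkey = 0;              // most recent timestamp seen so far
def qty = 0;                 // quantity belonging to that timestamp
for (a in params._aggs) {
  def currentKey = a[0];     // timestamp in millis
  if (currentKey > maxkey) {
    maxkey = currentKey;     // found a newer entry
    qty = a[1];              // remember its quantity
  }
}
return qty;                  // latest quantity across all shards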
At this point, we have the latest quantity for each store. All that is left to do is to sum those per-store quantities with the sum_bucket pipeline aggregation. That yields the result of 17.
The response looks like this:
"aggregations": {
"bystore": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "01",
"doc_count": 2,
"latest_quantity": {
"value": 6
}
},
{
"key": "02",
"doc_count": 2,
"latest_quantity": {
"value": 11
}
}
]
},
"sum_latest_quantities": {
"value": 17
}
}