Elasticsearch average aggregation

Time: 2018-01-29 01:01:15

Tags: elasticsearch kibana

Here is the query I run in Kibana:

GET sample/_search
{
  "query": {
    "match_all": {}
  }
}

For privacy reasons I have replaced the lat and lon values with *; those numbers do not make any difference anyway. This is the result I receive:

{
  "took": 53,
  "timed_out": false,
  "num_reduce_phases": 3,
  "_shards": {
    "total": 1293,
    "successful": 1293,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 10937405,
    "max_score": 1,
    "hits": [
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-55-1470693602000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 0,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 55,
          "sampled_on": 1470693602000
        }
      },
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-56-1470693602000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 7.436,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 56,
          "sampled_on": 1470693602000
        }
      },
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-57-1470693602000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 148.538,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 57,
          "sampled_on": 1470693602000
        }
      },
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-59-1470693602000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 0.196,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 59,
          "sampled_on": 1470693602000
        }
      },
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-63-1470693708000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 31.3,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 63,
          "sampled_on": 1470693708000
        }
      },
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-65-1470693708000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 25.6,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 65,
          "sampled_on": 1470693708000
        }
      },
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-62-1470693708000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 0.255,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 62,
          "sampled_on": 1470693708000
        }
      },
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-63-1470693809000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 31.3,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 63,
          "sampled_on": 1470693809000
        }
      },
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-64-1470693809000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 25.9,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 64,
          "sampled_on": 1470693809000
        }
      },
      {
        "_index": "sample_2016-08-08",
        "_type": "sample",
        "_id": "18-26-65-1470693809000",
        "_score": 1,
        "_source": {
          "type": "a",
          "company": 18,
          "value": 25.6,
          "equipment": 26,
          "state": "",
          "location": {
            "lat": *,
            "lon": *
          },
          "sensor": 65,
          "sampled_on": 1470693809000
        }
      }
    ]
  }
}

" sampled_on"代表时间。每个"传感器"发送一个新的"值"喜欢每隔几秒钟。我想收到以下结果,我希望它在1个查询中: 对于每个"传感器"显示平均值"值"它从开始到现在以30分钟的间隔报告。 这是一个例子:

Sensor 1

oldest one ...
...
January 14, 2009
2:30 PM - 3:00 PM
average value: 5
...
February 18, 2012
3:10 AM - 3:40 AM, average value: 4
...
newest one

Sensor 2

oldest one ...
...
January 1, 2011
5:30 PM - 6:00 PM
average value: 5
...
February 3, 2012
7:20 AM - 7:50 AM
average value: 4
...
newest one

Thanks for your help!

1 answer:

Answer 0 (score: 0)

This is a sample query I built in Kibana and then copied the request out of, rather than a hand-trimmed request. That means some parts of it are unnecessary, and I had to rename a few fields, because I ran a similar request against my own records.

{
  "size": 0,
  "query": {
    "match_all": {}
  },
  "_source": {
    "excludes": []
  },
  "aggs": {
    "2": {
      "terms": {
        "field": "sensor",
        "size": 500,
        "order": {
          "1": "desc"
        }
      },
      "aggs": {
        "1": {
          "avg": {
            "field": "value"
          }
        }
      }
    }
  }
}

Note that 'aggs.2.terms.size' is set to 500, which may cut off some results if you have more sensors than that.
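
The query above averages "value" per "sensor" over the whole time range; it does not yet split those averages into the 30-minute windows the question asks for. A minimal sketch of one way to add that, assuming "sampled_on" is mapped as a date field, is to nest a date_histogram between the terms and avg aggregations. The aggregation names per_sensor, per_30m and avg_value here are placeholders of my own, and on Elasticsearch 7.x and later the interval parameter is spelled fixed_interval instead of interval:

GET sample/_search
{
  "size": 0,
  "query": {
    "match_all": {}
  },
  "aggs": {
    "per_sensor": {
      "terms": {
        "field": "sensor",
        "size": 500
      },
      "aggs": {
        "per_30m": {
          "date_histogram": {
            "field": "sampled_on",
            "interval": "30m"
          },
          "aggs": {
            "avg_value": {
              "avg": {
                "field": "value"
              }
            }
          }
        }
      }
    }
  }
}

With this shape, each bucket under per_30m should carry a key marking the start of its 30-minute window and an avg_value holding the average reading for that sensor within that window.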