My question is: how can I get jq to produce tabular output, substituting 0 for absent values?

The input to jq is the following Elasticsearch JSON response:
{"aggregations": {
"overall": {
"buckets": [
{
"key": "2018-01-18T00:00:00.000Z-2018-01-25T19:33:16.010Z",
"from_as_string": "2018-01-18T00:00:00.000Z",
"to": 1516908796010,
"to_as_string": "2018-01-25T19:33:16.010Z",
"doc_count": 155569,
"agg_per_name": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "ASSET-DD583",
"doc_count": 3016,
"totalMaxUptime_perDays": {
"buckets": [
{
"key_as_string": "2018-01-22T00:00:00.000Z",
"key": 1516579200000,
"doc_count": 161,
"totalMaxUptime": {
"value": 77598
}
},
{
"key_as_string": "2018-01-23T00:00:00.000Z",
"key": 1516665600000,
"doc_count": 251,
"totalMaxUptime": {
"value": 80789
}
},
{
"key_as_string": "2018-01-24T00:00:00.000Z",
"key": 1516752000000,
"doc_count": 192,
"totalMaxUptime": {
"value": 56885
}
},
{
"key_as_string": "2018-01-25T00:00:00.000Z",
"key": 1516838400000,
"doc_count": 2088,
"totalMaxUptime": {
"value": 7392705
}
}
]
}
},
{
"key": "ASSET-DD568",
"doc_count": 2990,
"totalMaxUptime_perDays": {
"buckets": [
{
"key_as_string": "2018-01-18T00:00:00.000Z",
"key": 1516233600000,
"doc_count": 106,
"totalMaxUptime": {
"value": 31241
}
},
{
"key_as_string": "2018-01-19T00:00:00.000Z",
"key": 1516320000000,
"doc_count": 241,
"totalMaxUptime": {
"value": 2952565
}
},
{
"key_as_string": "2018-01-20T00:00:00.000Z",
"key": 1516406400000,
"doc_count": 326,
"totalMaxUptime": {
"value": 2698235
}
},
{
"key_as_string": "2018-01-21T00:00:00.000Z",
"key": 1516492800000,
"doc_count": 214,
"totalMaxUptime": {
"value": 85436
}
},
{
"key_as_string": "2018-01-22T00:00:00.000Z",
"key": 1516579200000,
"doc_count": 279,
"totalMaxUptime": {
"value": 83201
}
},
{
"key_as_string": "2018-01-23T00:00:00.000Z",
"key": 1516665600000,
"doc_count": 50,
"totalMaxUptime": {
"value": 96467
}
},
{
"key_as_string": "2018-01-24T00:00:00.000Z",
"key": 1516752000000,
"doc_count": 5,
"totalMaxUptime": {
"value": 903
}
},
{
"key_as_string": "2018-01-25T00:00:00.000Z",
"key": 1516838400000,
"doc_count": 1769,
"totalMaxUptime": {
"value": 12337946
}
}
]
}
},
{
"key": "ASSET-42631",
"doc_count": 2899,
"totalMaxUptime_perDays": {
"buckets": [
{
"key_as_string": "2018-01-18T00:00:00.000Z",
"key": 1516233600000,
"doc_count": 132,
"totalMaxUptime": {
"value": 39054
}
},
{
"key_as_string": "2018-01-19T00:00:00.000Z",
"key": 1516320000000,
"doc_count": 172,
"totalMaxUptime": {
"value": 47634
}
},
{
"key_as_string": "2018-01-20T00:00:00.000Z",
"key": 1516406400000,
"doc_count": 214,
"totalMaxUptime": {
"value": 68264
}
},
{
"key_as_string": "2018-01-21T00:00:00.000Z",
"key": 1516492800000,
"doc_count": 220,
"totalMaxUptime": {
"value": 66243
}
},
{
"key_as_string": "2018-01-25T00:00:00.000Z",
"key": 1516838400000,
"doc_count": 128,
"totalMaxUptime": {
"value": 47660
}
}
]
}
}
]
}
}
]
}
}
}
This JSON has a few inherent properties worth noting.

For the given sample, the desired jq output is a table with the dates from key_as_string running horizontally (in this case from 2018-01-18 to 2018-01-25) and all the asset keys (i.e. ASSET-DD583, ASSET-DD568, etc.) running vertically. For each corresponding date the table is filled with totalMaxUptime.value; if a date is absent from the results, the value 0 should be used instead:
XXXXXXXXXXX, 2018-01-18, 2018-01-19, 2018-01-20, 2018-01-21, 2018-01-22, 2018-01-23, 2018-01-24, 2018-01-25
ASSET-DD583, 0, 0, 0, 0, 77598, 80789, 56885, 7392705
ASSET-DD568, 31241, 2952565, 2698235, 85436, 83201, 96467, 903, 12337946
ASSET-42631, 39054, 47634, 68264, 66243, 0, 0, 0, 47660
Edit 1:

This is how far I have gotten:

jq '.aggregations.overall.buckets[0].agg_per_name.buckets[] | .key + ", " + (.totalMaxUptime_perDays.buckets[] | .key_as_string + ", " + (.totalMaxUptime.value | tostring))' input.json | sed 's/"//g' | sed 's/T00:00:00.000Z//g' > uptime.csv

which produces this output:
ASSET-DD583, 2018-01-22, 77598
ASSET-DD583, 2018-01-23, 80789
ASSET-DD583, 2018-01-24, 56885
ASSET-DD583, 2018-01-25, 7392705
...............
Answer 0 (score: 4)
In the following I have used @tsv so that the output is easier to view as a table, but you may wish to use @csv instead.

The tricky part here is getting the 0s into the right places. Creating a JSON "dictionary" (i.e. a JSON object) makes this simple. Here, normalize takes advantage of the fact that jq preserves the order in which keys are added to an object.
def dates:
["2018-01-18", "2018-01-19", "2018-01-20", "2018-01-21", "2018-01-22", "2018-01-23", "2018-01-24", "2018-01-25"];
def normalize:
. as $in
| reduce dates[] as $k ({}; .[$k] = ($in[$k] // 0));
(["Asset"] + dates),
(.aggregations.overall.buckets[].agg_per_name.buckets[]
| .key as $asset
| .totalMaxUptime_perDays.buckets
| map( { (.key_as_string | sub("T.*";"") ): .totalMaxUptime.value } )
| add
| normalize
| [$asset] + [.[]]
)
| @tsv
You may wish to modify the above so that dates is computed from the data.

Output:
Asset 2018-01-18 2018-01-19 2018-01-20 2018-01-21 2018-01-22 2018-01-23 2018-01-24 2018-01-25
ASSET-DD583 0 0 0 0 77598 80789 56885 7392705
ASSET-DD568 31241 2952565 2698235 85436 83201 96467 903 12337946
ASSET-42631 39054 47634 68264 66243 0 0 0 47660
Edit: added parentheses around $in[$k] // 0.
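As a cross-check of the zero-filling step, the same dictionary-based normalization can be sketched in Python. The buckets variable below is a hypothetical, simplified form of the per-asset data (date -> totalMaxUptime.value), not the raw Elasticsearch structure:

```python
dates = ["2018-01-18", "2018-01-19", "2018-01-20", "2018-01-21",
         "2018-01-22", "2018-01-23", "2018-01-24", "2018-01-25"]

# Hypothetical, simplified per-asset data: date -> totalMaxUptime.value
buckets = {
    "ASSET-DD583": {"2018-01-22": 77598, "2018-01-23": 80789,
                    "2018-01-24": 56885, "2018-01-25": 7392705},
}

def normalize(values):
    # Mirror of the jq `normalize` filter: walk the fixed date list
    # and substitute 0 for any date missing from the dictionary.
    return [values.get(d, 0) for d in dates]

for asset, values in buckets.items():
    print("\t".join([asset] + [str(v) for v in normalize(values)]))
```

Because the date list is walked in order, the 0s land in exactly the columns where the asset has no bucket, matching the table above.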
Answer 1 (score: 1)
A partial solution to your problem.

If you use @csv, you can put the values of an array on the same line. For example, say you have:
{
"a": [1,2,3],
"b": [
{
"x": 10
},
{
"x": 20
},
{
"x": 30
}
]
}
To get 1,2,3 you would use jq -r '.a | @csv'

To get 10,20,30 you would use jq -r '[.b[].x] | @csv'

Hope this helps!
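For comparison, the same idea can be sketched in Python with the example object above (for numeric values, joining with commas gives the same rows that @csv would emit):

```python
data = {"a": [1, 2, 3], "b": [{"x": 10}, {"x": 20}, {"x": 30}]}

# like  .a | @csv
row1 = ",".join(str(v) for v in data["a"])
# like  [.b[].x] | @csv  -- collect each .x into one array, then join
row2 = ",".join(str(d["x"]) for d in data["b"])

print(row1)  # 1,2,3
print(row2)  # 10,20,30
```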
Answer 2 (score: 0)
Try the following:

jq -r '.aggregations.overall.buckets[0].agg_per_name.buckets[] |
  .key + ", " + (.totalMaxUptime_perDays.buckets[] |
  .key_as_string + ", " + (.totalMaxUptime.value | tostring))' input.json | column -t -s,