Convert JSON to CSV using jq or any other bash command

Time: 2020-01-19 02:44:00

Tags: json csv jq

I have to parse a fairly large JSON file to CSV in the shortest time possible. I have the following sample JSON file:

{
    "status": "success",
    "data": {
        "candles": [
            ["2015-12-28T09:15:00+0530", 1386.4, 1388, 1381.05, 1385.1, 788],
            ["2015-12-28T09:16:00+0530", 1385.1, 1389.1, 1383.85, 1385.5, 609],
            ["2015-12-28T09:17:00+0530", 1385.5, 1387, 1385.5, 1385.7, 212],
            ["2015-12-28T09:18:00+0530", 1387, 1387.95, 1385.3, 1387.95, 1208],
            ["2015-12-28T09:19:00+0530", 1387, 1387.55, 1385.6, 1386.25, 716],
            ["2015-12-28T09:20:00+0530", 1386.95, 1389.95, 1386.95, 1389, 727],
            ["2015-12-28T09:21:00+0530", 1389, 1392.95, 1389, 1392.95, 291],
            ["2015-12-28T09:22:00+0530", 1392.95, 1393, 1392, 1392.95, 180],
            ["2015-12-28T09:23:00+0530", 1392.95, 1393, 1392, 1392.15, 1869],
            ["2016-01-01T13:22:00+0530", 1386.4, 1388, 1381.05, 1385.1, 788],
            ["2016-01-01T13:23:00+0530", 1385.1, 1389.1, 1383.85, 1385.5, 613],
            ["2016-01-01T13:24:00+0530", 1385.5, 1387, 1385.5, 1385.7, 212],
            ["2016-01-01T13:25:00+0530", 1387, 1387.95, 1385.3, 1387.95, 1208],
            ["2016-01-01T13:26:00+0530", 1387, 1387.55, 1385.6, 1386.25, 716],
            ["2016-01-01T13:27:00+0530", 1386.95, 1389.95, 1386.95, 1389, 727],
            ["2016-01-01T13:28:00+0530", 1389, 1392.95, 1389, 1392.95, 291],
            ["2016-01-01T13:29:00+0530", 1392.95, 1393, 1392, 1392.95, 180],
            ["2016-01-01T13:30:00+0530", 1392.95, 1393, 1392, 1392.15, 1869]
        ]
    }
}

The data in the above file (say data.json) has to be filtered, and the candle data must be saved to a CSV file (say output.csv). I am unable to get the .data.candles data into the CSV file. The expected output is:

2015-12-28T09:15:00+0530,1386.4,1388.0,1381.05,1385.1,788
2015-12-28T09:16:00+0530,1385.1,1389.1,1383.85,1385.5,609
2015-12-28T09:17:00+0530,1385.5,1387.0,1385.5,1385.7,212
2015-12-28T09:18:00+0530,1387.0,1387.95,1385.3,1387.95,1208
2015-12-28T09:19:00+0530,1387.0,1387.55,1385.6,1386.25,716
2015-12-28T09:20:00+0530,1386.95,1389.95,1386.95,1389.0,727
2015-12-28T09:21:00+0530,1389.0,1392.95,1389.0,1392.95,291
2015-12-28T09:22:00+0530,1392.95,1393.0,1392.0,1392.95,180
2015-12-28T09:23:00+0530,1392.95,1393.0,1392.0,1392.15,1869
2016-01-01T13:22:00+0530,1386.4,1388.0,1381.05,1385.1,788
2016-01-01T13:23:00+0530,1385.1,1389.1,1383.85,1385.5,613
2016-01-01T13:24:00+0530,1385.5,1387.0,1385.5,1385.7,212
2016-01-01T13:25:00+0530,1387.0,1387.95,1385.3,1387.95,1208
2016-01-01T13:26:00+0530,1387.0,1387.55,1385.6,1386.25,716
2016-01-01T13:27:00+0530,1386.95,1389.95,1386.95,1389.0,727
2016-01-01T13:28:00+0530,1389.0,1392.95,1389.0,1392.95,291
2016-01-01T13:29:00+0530,1392.95,1393.0,1392.0,1392.95,180
2016-01-01T13:30:00+0530,1392.95,1393.0,1392.0,1392.15,1869

I can do this in Python, but for speed reasons I have to do it through jq.

Any help here would be appreciated.

2 Answers:

Answer 0 (score: 0)

One way:

$ jq -r '.data.candles[] | @csv' data.json
"2015-12-28T09:15:00+0530",1386.4,1388,1381.05,1385.1,788
"2015-12-28T09:16:00+0530",1385.1,1389.1,1383.85,1385.5,609
"2015-12-28T09:17:00+0530",1385.5,1387,1385.5,1385.7,212
"2015-12-28T09:18:00+0530",1387,1387.95,1385.3,1387.95,1208
"2015-12-28T09:19:00+0530",1387,1387.55,1385.6,1386.25,716
"2015-12-28T09:20:00+0530",1386.95,1389.95,1386.95,1389,727
"2015-12-28T09:21:00+0530",1389,1392.95,1389,1392.95,291
"2015-12-28T09:22:00+0530",1392.95,1393,1392,1392.95,180
"2015-12-28T09:23:00+0530",1392.95,1393,1392,1392.15,1869
"2016-01-01T13:22:00+0530",1386.4,1388,1381.05,1385.1,788
"2016-01-01T13:23:00+0530",1385.1,1389.1,1383.85,1385.5,613
"2016-01-01T13:24:00+0530",1385.5,1387,1385.5,1385.7,212
"2016-01-01T13:25:00+0530",1387,1387.95,1385.3,1387.95,1208
"2016-01-01T13:26:00+0530",1387,1387.55,1385.6,1386.25,716
"2016-01-01T13:27:00+0530",1386.95,1389.95,1386.95,1389,727
"2016-01-01T13:28:00+0530",1389,1392.95,1389,1392.95,291
"2016-01-01T13:29:00+0530",1392.95,1393,1392,1392.95,180
"2016-01-01T13:30:00+0530",1392.95,1393,1392,1392.15,1869

It simply sends each element of the candles array, one at a time, to the @csv output formatter, and prints the raw result instead of treating each output line as a JSON string (the -r option).

To save the result to a file from your script, just use output redirection as usual:

$ jq -r '.data.candles[] | @csv' data.json > output.csv
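
If you want the output without the double quotes around the timestamp, exactly as in the expected output in the question, one alternative (a sketch, not part of this answer) is to join the fields yourself instead of using @csv. Note that jq then renders numbers as it stores them, so you get 1388 rather than 1388.0:

$ jq -r '.data.candles[] | map(tostring) | join(",")' data.json > output.csv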

Answer 1 (score: 0)

"I have to parse a fairly large JSON file to CSV in the shortest time possible."

If the file has the format you show, you may well find that a simple sed command is much faster than jq. And since the resulting format is .csv rather than .json, none of the JSON validation that jq provides applies here anyway. An example using sed for this is:

$ sed -n 's/^\s*[[]\([^]]*\).*$/\1/;s/"//g;s/,\s/,/gp' sample.json
2015-12-28T09:15:00+0530,1386.4,1388,1381.05,1385.1,788
2015-12-28T09:16:00+0530,1385.1,1389.1,1383.85,1385.5,609
2015-12-28T09:17:00+0530,1385.5,1387,1385.5,1385.7,212
2015-12-28T09:18:00+0530,1387,1387.95,1385.3,1387.95,1208
2015-12-28T09:19:00+0530,1387,1387.55,1385.6,1386.25,716
2015-12-28T09:20:00+0530,1386.95,1389.95,1386.95,1389,727
2015-12-28T09:21:00+0530,1389,1392.95,1389,1392.95,291
2015-12-28T09:22:00+0530,1392.95,1393,1392,1392.95,180
2015-12-28T09:23:00+0530,1392.95,1393,1392,1392.15,1869
2016-01-01T13:22:00+0530,1386.4,1388,1381.05,1385.1,788
2016-01-01T13:23:00+0530,1385.1,1389.1,1383.85,1385.5,613
2016-01-01T13:24:00+0530,1385.5,1387,1385.5,1385.7,212
2016-01-01T13:25:00+0530,1387,1387.95,1385.3,1387.95,1208
2016-01-01T13:26:00+0530,1387,1387.55,1385.6,1386.25,716
2016-01-01T13:27:00+0530,1386.95,1389.95,1386.95,1389,727
2016-01-01T13:28:00+0530,1389,1392.95,1389,1392.95,291
2016-01-01T13:29:00+0530,1392.95,1393,1392,1392.95,180
2016-01-01T13:30:00+0530,1392.95,1393,1392,1392.15,1869

sed is a pure stream editor, and you will likely find it an order of magnitude faster than the equivalent operation with jq. Also note that the jq solution in the other answer does not remove the double quotes around the date and time field. Give it a try...
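
To write the sed result straight into output.csv, plain redirection works here as well. As a side note beyond the original answer: the \s class is a GNU sed extension, so with a non-GNU sed the portable spelling is the [[:space:]] bracket class. A sketch of the portable form, run against the question's data.json:

$ sed -n 's/^[[:space:]]*[[]\([^]]*\).*$/\1/;s/"//g;s/,[[:space:]]/,/gp' data.json > output.csv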