Trying to fix a malformed JSON string with Python

Posted: 2019-03-20 21:51:50

Tags: python json apache-spark hadoop pyspark

I'm trying to use some combination of the Python re library and Python slicing to correct malformed JSON strings that Kafka delivers to us on HDFS (Cloudera's Hadoop distribution) in the format below.

Incorrect JSON:

{"json_data":"{"table":"TEST.FUBAR","op_type":"I","op_ts":"2019-03-14 15:33:50.031848","current_ts":"2019-03-14T15:33:57.479002","pos":"1111","after":{"COL1":949494949494949494,"COL2":99,"COL3":2,"COL4":"            99999","COL5":9999999,"COL6":90,"COL7":42478,"COL8":"I","COL9":null,"COL10":"2019-03-14 15:33:49","COL11":null,"COL12":null,"COL13":null,"COL14":"x222263 ","COL15":"2019-03-14 15:33:49","COL16":"x222263 ","COL17":"2019-03-14 15:33:49","COL18":"2020-09-10 00:00:00","COL19":"A","COL20":"A","COL21":0,"COL22":null,"COL23":"2019-03-14 15:33:47","COL24":2,"COL25":2,"COL26":"R","COL27":"2019-03-14 15:33:49","COL28":"  ","COL29":"PBU67H   ","COL30":"            20000","COL31":2,"COL32":null}}"}

Note: the stray double quote near the opening "json_data":"{ and the one near the ending }}"} are actually the only errors that need to be removed (I have tested the string without the extra quotes and it parses).

Valid and correct JSON:

{"json_data":{"table":"TEST.FUBAR","op_type":"I","op_ts":"2019-03-14 15:33:50.031848","current_ts":"2019-03-14T15:33:57.479002","pos":"1111","after":{"COL1":949494949494949494,"COL2":99,"COL3":2,"COL4":"            99999","COL5":9999999,"COL6":90,"COL7":42478,"COL8":"I","COL9":null,"COL10":"2019-03-14 15:33:49","COL11":null,"COL12":null,"COL13":null,"COL14":"x222263 ","COL15":"2019-03-14 15:33:49","COL16":"x222263 ","COL17":"2019-03-14 15:33:49","COL18":"2020-09-10 00:00:00","COL19":"A","COL20":"A","COL21":0,"COL22":null,"COL23":"2019-03-14 15:33:47","COL24":2,"COL25":2,"COL26":"R","COL27":"2019-03-14 15:33:49","COL28":"  ","COL29":"PBU67H   ","COL30":"            20000","COL31":2,"COL32":null}}}

I have 40,000 to 60,000 of these records that I need to read every hour with PySpark, and the infrastructure team has said the format issue on their side still needs to be fixed.

Is there a quick and dirty way in Python to read all of the strings and strip the stray double quotes near the beginning and end?

1 answer:

Answer 0 (score: 0)

For the string provided I would suggest sticking with re. A regular expression such as:

'(?<=:|\})(")(?=\}|\{)'

should do the trick: the unwanted double quotes are exactly the ones preceded by a colon or a closing brace and followed by an opening or closing brace.

import re
import json

string = '{"json_data":"{"table":"TEST.FUBAR","op_type":"I","op_ts":"2019-03-14 15:33:50.031848","current_ts":"2019-03-14T15:33:57.479002","pos":"1111","after":{"COL1":949494949494949494,"COL2":99,"COL3":2,"COL4":"            99999","COL5":9999999,"COL6":90,"COL7":42478,"COL8":"I","COL9":null,"COL10":"2019-03-14 15:33:49","COL11":null,"COL12":null,"COL13":null,"COL14":"x222263 ","COL15":"2019-03-14 15:33:49","COL16":"x222263 ","COL17":"2019-03-14 15:33:49","COL18":"2020-09-10 00:00:00","COL19":"A","COL20":"A","COL21":0,"COL22":null,"COL23":"2019-03-14 15:33:47","COL24":2,"COL25":2,"COL26":"R","COL27":"2019-03-14 15:33:49","COL28":"  ","COL29":"PBU67H   ","COL30":"            20000","COL31":2,"COL32":null}"}}'

# Remove the stray quotes around the nested object (raw string avoids invalid-escape warnings).
trimmed_string = re.sub(r'(?<=:|\})(")(?=\}|\{)', '', string)

data = json.loads(trimmed_string)

print(type(data), data)

Result:

<class 'dict'>  {'json_data': {'table': 'TEST.FUBAR', 'op_type': 'I', 'op_ts': '2019-03-14 15:33:50.031848','current_ts': '2019-03-14T15:33:57.479002', 'pos': '1111', 'after': {'COL1': 949494949494949494, 'COL2': 99, 'COL3': 2, 'COL4': '            99999', 'COL5': 9999999, 'COL6': 90, 'COL7':42478, 'COL8': 'I', 'COL9': None, 'COL10': '2019-03-14 15:33:49', 'COL11': None, 'COL12': None, 'COL13': None, 'COL14': 'x222263 ', 'COL15': '2019-03-14 15:33:49', 'COL16': 'x222263 ', 'COL17': '2019-03-14 15:33:49', 'COL18': '2020-09-10 00:00:00', 'COL19': 'A', 'COL20': 'A', 'COL21': 0, 'COL22': None, 'COL23': '2019-03-14 15:33:47', 'COL24': 2, 'COL25': 2, 'COL26': 'R', 'COL27': '2019-03-14 15:33:49', 'COL28': '  ', 'COL29': 'PBU67H   ', 'COL30': '20000', 'COL31': 2, 'COL32': None}}}
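Since the question mentions reading 40,000 to 60,000 of these records per hour with PySpark, here is a minimal sketch of one way to apply the same cleanup at scale. This is not part of the original answer; the HDFS path, app name, and variable names are placeholders.

import re
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fix_malformed_json").getOrCreate()

# Same pattern as above: a double quote preceded by a colon or closing brace
# and followed by a brace is one of the stray quotes.
PATTERN = r'(?<=:|\})(")(?=\}|\{)'

# Read the raw lines from HDFS (placeholder path), strip the stray quotes,
# then let Spark infer the schema from the repaired JSON strings.
raw = spark.sparkContext.textFile("hdfs:///path/to/kafka/output")  # placeholder
fixed = raw.map(lambda line: re.sub(PATTERN, "", line))
df = spark.read.json(fixed)

df.select("json_data.table", "json_data.op_type", "json_data.after.COL1").show(5)

Passing an RDD of strings to spark.read.json is deprecated in newer Spark releases; reading the files as text and repairing the column before parsing is an alternative, but the idea is the same.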