How to fix decoding of badly formatted JSON

Date: 2019-01-10 03:45:51

Tags: python json python-3.x

Hi everyone. I need help opening and reading a file.

I've got this txt file - https://yadi.sk/i/1TH7_SYfLss0JQ

It is a dictionary:

{"id0": "url0", "id1": "url1", ..., "idn": "urln"}

But it was written into the txt file using json.

#This is how I dump the data into a txt    
json.dump(after,open(os.path.join(os.getcwd(), 'before_log.txt'), 'a')) 

So the file structure is:

{"id0": "url0", "id1": "url1", ..., "idn": "urln"} {"id2": "url2", "id3": "url3", ..., "id4": "url4"} {"id5": "url5", "id6": "url6", ..., "id7": "url7"}

And it's all one string...

I need to open it, check for duplicate ids, delete them, and save the file again.

But json.loads raises ValueError: Extra data.

I tried the approaches from these questions:

How to read line-delimited JSON from large file (line by line)
Python json.loads shows ValueError: Extra data
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 190)

But I still get that error, just in a different place.
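For reference, the "Extra data" error happens because the file contains several JSON objects back to back, while json.loads expects exactly one. A minimal sketch (not from the original post) of parsing such a stream without any string surgery uses json.JSONDecoder.raw_decode, which returns the decoded object plus the index where decoding stopped:

```python
import json

def iter_json_objects(text):
    """Yield each JSON object from a string of concatenated objects."""
    decoder = json.JSONDecoder()
    pos = 0
    while pos < len(text):
        obj, end = decoder.raw_decode(text, pos)
        yield obj
        # skip any whitespace between consecutive objects
        pos = end
        while pos < len(text) and text[pos].isspace():
            pos += 1

# hypothetical sample data in the same shape as the question's file
blob = '{"id0": "url0"}{"id1": "url1"} {"id0": "url0b"}'
print(list(iter_json_objects(blob)))
```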

Right now I can do this:

with open('111111111.txt', 'r') as log:
    before_log = log.read()
before_log = before_log.replace('}{',', ').split(', ')

mu_dic = []
for i in before_log:
    mu_dic.append(i)

This gets rid of the problem of having multiple {}{}{} dictionaries/JSON objects in a row.

Maybe there is a better way to do this?

P.S. This is how the file was made:

json.dump(after,open(os.path.join(os.getcwd(), 'before_log.txt'), 'a'))    

2 answers:

Answer 0 (score: 0)

The basic difference between your file's structure and actual JSON format is the missing commas between objects, and that the objects are not wrapped in [ ]. So the same result can be achieved with the following snippet:

import json

with open('json_file.txt') as f:
    # Read the complete file
    a = f.read()

    # Collapse it into a single-line string
    b = ''.join(a.splitlines())

    # Add a comma after each object
    b = b.replace("}", "},")

    # Wrap in brackets and drop the trailing comma added in the previous step
    b = '[' + b[:-1] + ']'

x = json.loads(b)
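Once x is a list of dicts, the duplicate-id check the question asks for could be sketched like this (an addition, not part of the original answer; it assumes the first occurrence of a key should win):

```python
# Merge a list of dicts, keeping only the first value seen for each key.
# `x` here is hypothetical sample data in the shape produced above.
x = [{"id0": "url0", "id1": "url1"}, {"id1": "url1b", "id2": "url2"}]

merged = {}
for chunk in x:
    for key, value in chunk.items():
        if key not in merged:   # keep the first occurrence, skip duplicates
            merged[key] = value

print(merged)  # {'id0': 'url0', 'id1': 'url1', 'id2': 'url2'}
```

Note that a plain merged.update(chunk) would instead keep the last occurrence of each duplicated key.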

Answer 1 (score: 0)

Your file is 9.5M in size, so it would take you some time to open and debug it manually. So, using the head and tail tools (normally available in any GNU/Linux distribution), you'll see:

# You can use Python as well to read chunks from your file
# and see what it is that's causing the decode problem,
# but I prefer head & tail because they're ready to be used :-D

$> head -c 217 111111111.txt
{"1933252590737725178": "https://instagram.fiev2-1.fna.fbcdn.net/vp/094927bbfd432db6101521c180221485/5CC0EBDD/t51.2885-15/e35/46950935_320097112159700_7380137222718265154_n.jpg?_nc_ht=instagram.fiev2-1.fna.fbcdn.net",

$> tail -c 219 111111111.txt
, "1752899319051523723": "https://instagram.fiev2-1.fna.fbcdn.net/vp/a3f28e0a82a8772c6c64d4b0f264496a/5CCB7236/t51.2885-15/e35/30084016_2051123655168027_7324093741436764160_n.jpg?_nc_ht=instagram.fiev2-1.fna.fbcdn.net"}

$> head -c 294879 111111111.txt | tail -c 12
net"}{"19332

So the first guess is that your file is a set of malformed JSON data, and the best guess is to split the JSON data on }{ for further manipulation.

So, here's an example of how to solve the problem using \n:

import json

input_file = '111111111.txt'
output_file = 'new_file.txt'

data = ''
with open(input_file, mode='r', encoding='utf8') as f_file:
    # this with statement part can be replaced by 
    # using sed under your OS like this example:
    # sed -i 's/}{/}\n{/g' 111111111.txt
    data = f_file.read()
    data = data.replace('}{', '}\n{')


seen, total_keys, to_write = set(), 0, {}
# split the lines of the in memory data
for elm in data.split('\n'):
    # convert the line to a valid Python dict
    converted = json.loads(elm)
    # loop over the keys
    for key, value in converted.items():
        total_keys += 1
        # if the key is not seen then add it for further manipulations
        # else ignore it
        if key not in seen:
            seen.add(key)
            to_write.update({key: value})

# write the dict's keys & values into a new file as a JSON format
with open(output_file, mode='a+', encoding='utf8') as out_file:
    out_file.write(json.dumps(to_write) + '\n')

print(
    'found duplicated key(s): {seen} from {total}'.format(
        seen=total_keys - len(seen),
        total=total_keys
    )
)

Output:

found duplicated key(s): 43836 from 45367

Finally, the output file will be a valid JSON file, with the duplicated keys and their values removed.
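For what it's worth, the root cause could also be avoided at write time: instead of the bare json.dump in append mode shown in the question, each record can be written as one JSON object per line (newline-delimited JSON), which is trivial to parse back. This is a sketch added here, not part of either answer; the helper names are hypothetical:

```python
import json

def append_ndjson(record, path):
    """Append one dict per line so the file stays easy to parse later."""
    with open(path, mode='a', encoding='utf8') as fh:
        fh.write(json.dumps(record) + '\n')

def read_ndjson(path):
    """Read the file back as a list of dicts, one json.loads per line."""
    with open(path, encoding='utf8') as fh:
        return [json.loads(line) for line in fh if line.strip()]
```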