Pandas can't load data, CSV encoding mystery

时间:2016-08-02 18:53:52

标签: python pandas chardet

I'm trying to load a dataset into pandas and can't seem to get past step 1. I'm new to this, so apologies if it's obvious; I've searched previous threads but haven't found an answer. The data is mostly Chinese characters, which may be part of the problem.

The .csv is quite large and can be found here: http://weiboscope.jmsc.hku.hk/datazip/ — I'm working with week 1.

In the code below I've noted the three decoding attempts I made, including an attempt to detect which encoding the file uses:

import pandas
import chardet
import os


#this is what I tried to start
data = pandas.read_csv('week1.csv', encoding="utf-8")

#spits out error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x9a in position 69: invalid start byte

#Code to check encoding -- this spits out ascii
bytes = min(32, os.path.getsize('week1.csv'))
raw = open('week1.csv', 'rb').read(bytes)
chardet.detect(raw)

#so i tried this! it also fails, which isn't that surprising since i don't know how you'd do chinese chars in ascii anyway
data = pandas.read_csv('week1.csv', encoding="ascii")

#spits out error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 0: ordinal not in range(128)

#for god knows what reason this allows me to load data into pandas, but definitely not correct encoding because when I print out first 5 lines its gibberish instead of Chinese chars
data = pandas.read_csv('week1.csv', encoding="latin1")
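
One thing worth noting about the chardet check above: it only looks at the first 32 bytes of the file, so its guess is based on very little evidence. A small sketch of feeding it a larger sample (the result is still only a best guess, not a guarantee of the true encoding):

import os
import chardet

# Sample up to 1 MB instead of 32 bytes so chardet has more text to work with
sample_size = min(1024 * 1024, os.path.getsize('week1.csv'))
with open('week1.csv', 'rb') as f:
    raw = f.read(sample_size)

print(chardet.detect(raw))  # e.g. {'encoding': ..., 'confidence': ..., ...}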

Any help is much appreciated!

EDIT: The answer provided by @Kristof does indeed work, as does the script my coworker put together yesterday:

import pandas as pd

def clean_weiboscope(file, nrows=0):
    res = []
    # errors='ignore' silently drops bytes that can't be decoded as UTF-8
    with open(file, 'r', encoding='utf-8', errors='ignore') as f:
        for i, row in enumerate(f):
            row = row.replace('\n', '')
            if nrows > 0 and i > nrows:
                break
            if i == 0:
                # the header row is split out so it doesn't end up in the data rows
                headers = row.split(',')
            else:
                res.append(tuple(row.split(',')))
    df = pd.DataFrame(res)
    return df

my_df = clean_weiboscope('week1.csv', nrows=0)
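
For what it's worth, since pandas.read_csv accepts an already-opened file object, the same lenient decoding can also be handed to pandas directly. A minimal sketch along those lines (not verified against this particular file):

import pandas as pd

# Let open() do the lenient decoding, then let pandas do the parsing.
# errors='ignore' silently drops undecodable bytes, so some characters may be lost.
with open('week1.csv', 'r', encoding='utf-8', errors='ignore') as f:
    my_df = pd.read_csv(f)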

I'd also like to add, for future searchers, that this is the Weiboscope open data from 2012.

1 Answer:

Answer 0 (score: 2)

It looks like there is indeed something wrong with the input file: there are encoding errors throughout. One workaround is to read the file as raw bytes, decode it as UTF-8 while replacing the invalid bytes, and write the repaired text out to a new file.

Example (source of the chunked-read code):

from functools import partial
import pandas as pd

in_filename = 'week1.csv'
out_filename = 'repaired.csv'

chunksize = 100*1024*1024  # read 100 MB at a time

# Decode as UTF-8 and replace undecodable bytes with the replacement character
with open(in_filename, 'rb') as in_file:
    with open(out_filename, 'w', encoding='utf-8') as out_file:
        for byte_fragment in iter(partial(in_file.read, chunksize), b''):
            out_file.write(byte_fragment.decode(encoding='utf_8', errors='replace'))

# Now read the repaired file into a dataframe
df = pd.read_csv(out_filename)

df.shape
# (4790108, 11)

df.head()
# [sample output]
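
As a side note for anyone reading this later: recent pandas releases (1.3 and newer) expose the same error handling directly through read_csv's encoding_errors parameter, so the separate repair pass can be skipped. A minimal sketch, assuming such a pandas version is installed:

import pandas as pd

# Requires pandas >= 1.3; undecodable bytes are replaced with U+FFFD instead of raising
df = pd.read_csv('week1.csv', encoding='utf-8', encoding_errors='replace')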