I am trying to read a 3 GB file into a Pandas DataFrame with the read_csv function and I get the error: out of memory.
I have 16 GB of RAM on my PC, running Ubuntu 16.04 with Pandas version 0.18. I know I could supply dtypes to ease the load, but my dataset has too many columns, and I want to load it first and then decide on the data types.
UPDATE: This is not a duplicate question. 3 GB should easily fit on a 16 GB machine.
Another UPDATE: here is the traceback.
Traceback (most recent call last):
File "/home/a/Dropbox/Programming/Python/C and d/main.com.py", line 9, in <module>
preprocessing()
File "/home/a/Dropbox/Programming/Python/C and d/main.com.py", line 5, in preprocessing
df = pd.read_csv(filepath_or_buffer = file_path, sep ='\t', low_memory = False)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 498, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 285, in _read
return parser.read()
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 747, in read
ret = self._engine.read(nrows)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 1197, in read
data = self._reader.read(nrows)
File "pandas/parser.pyx", line 769, in pandas.parser.TextReader.read (pandas/parser.c:8011)
File "pandas/parser.pyx", line 857, in pandas.parser.TextReader._read_rows (pandas/parser.c:9140)
File "pandas/parser.pyx", line 1833, in pandas.parser.raise_parser_error (pandas/parser.c:22649)
pandas.parser.CParserError: Error tokenizing data. C error: out of memory
Unfortunately I cannot post the whole data file, but it has about 2.5 million rows and mostly categorical (string) data.
Hopefully final UPDATE:
Here is my original code for reading the data into a DataFrame:
import pandas as pd

def preprocessing():
    file_path = r'/home/a/Downloads/main_query.txt'
    df = pd.read_csv(filepath_or_buffer=file_path, sep='\t', low_memory=False)
The code above produced the error message I posted above.
I then tried removing low_memory=False, and everything worked with only a warning:
sys:1: DtypeWarning: Columns (17,20,23,24,33,44,58,118,134,
135,137,142,145,146,147) have mixed types.
Specify dtype option on import or set low_memory=False.
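Since the goal here is to load everything first and decide on dtypes later, one hedged workaround, a sketch only, is to read every column as a string so pandas skips the mixed-type inference entirely (the path comes from the question; the conversion example column is hypothetical):

import pandas as pd

# Read every column as a string to avoid mixed-type inference.
file_path = '/home/a/Downloads/main_query.txt'
df = pd.read_csv(file_path, sep='\t', dtype=str)

# Convert individual columns to proper dtypes after inspecting them, e.g.:
# df['some_col'] = pd.to_numeric(df['some_col'], errors='coerce')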
Answer 0 (score: 2)
UPDATE: starting from pandas 0.19.0, the read_csv() method supports the categorical dtype directly:

pd.read_csv(filename, dtype={'col1': 'category'})

so you can try using pandas 0.19.0 RC1.
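For illustration, here is a minimal sketch of the categorical approach (the file path is the one from the question; reading every column as a category is an assumption that fits the mostly-string data described above):

import pandas as pd

# Requires pandas >= 0.19.0. Categoricals are typically much smaller
# than object (string) columns when values repeat often.
df = pd.read_csv('/home/a/Downloads/main_query.txt', sep='\t',
                 dtype='category')

# Inspect the true memory footprint, including string contents:
df.info(memory_usage='deep')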
OLD answer:
You can read the CSV in chunks and concatenate each chunk to the resulting DF as you go:
import numpy as np
import pandas as pd

chunksize = 10**5
df = pd.DataFrame()
for chunk in pd.read_csv(filename,
                         dtype={'col1': np.int8, 'col2': np.int32, ...},
                         chunksize=chunksize):
    df = pd.concat([df, chunk], ignore_index=True)
Note: the dtype parameter is not supported with engine='python'.
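As a side note on this pattern: concatenating inside the loop copies the growing DataFrame on every iteration. A common variant (a sketch using the same placeholder filename as above) collects the chunks in a list and concatenates once at the end:

import pandas as pd

chunks = []
for chunk in pd.read_csv(filename, chunksize=10**5):
    chunks.append(chunk)

# A single concat at the end avoids re-copying the accumulated rows
# on every iteration.
df = pd.concat(chunks, ignore_index=True)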
Answer 1 (score: 0)
The question is a duplicate. A few tips (combined in the sketch after this list):
- Check the real memory usage with df.info(memory_usage='deep') or df.memory_usage(deep=True); otherwise pandas underestimates the memory usage of DataFrames containing strings.
- Use the categorical dtype for string columns: pd.read_csv(..., dtype={'foo': 'category', 'bar': 'category', ...})
- Read only the columns you need: usecols=['foo', 'bar', 'baz']
- Read only a sample of the rows: nrows=1e5 (see also skiprows=...)
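Putting those tips together, a hedged sketch (the column names 'foo', 'bar', 'baz' are the answer's placeholders; the path and the 100,000-row sample size are assumptions):

import pandas as pd

df = pd.read_csv('/home/a/Downloads/main_query.txt', sep='\t',
                 usecols=['foo', 'bar', 'baz'],                 # only needed columns
                 dtype={'foo': 'category', 'bar': 'category'},  # compact string storage
                 nrows=100000)                                  # sample of rows

# deep=True accounts for the actual string contents:
df.info(memory_usage='deep')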