I am trying to read a large CSV file (over 100 GB).
I found that I can use the chunksize option:
%%time
import pandas as pd

filename = "../code/csv/file.csv"
lines_number = sum(1 for line in open(filename))  # total line count, for progress tracking
lines_in_chunk = 100  # I don't know what size is better
counter = 0
completed = 0
reader = pd.read_csv(filename, chunksize=lines_in_chunk)
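(lines_number, counter and completed are not used in the snippet above; presumably they were meant for progress reporting. Below is a minimal sketch of that idea, under that assumption. Note that looping over the reader consumes it, so this is separate from the concat step that follows.)

import pandas as pd

filename = "../code/csv/file.csv"
lines_number = sum(1 for line in open(filename))   # total lines, including the header
lines_in_chunk = 100

reader = pd.read_csv(filename, chunksize=lines_in_chunk)
counter = 0
for chunk in reader:
    counter += len(chunk)                           # rows read so far
    completed = 100 * counter / lines_number        # approximate percent done
    print(f"{completed:.1f}% read", end="\r")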
This part is fast enough.
The problem is the concatenation:
%%time
df = pd.concat(reader, ignore_index=True)
This has been running for 4 hours and still hasn't finished, and memory usage keeps growing.
Is there a faster and more memory-efficient way to concatenate this reader into a single DataFrame?
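For reference, my understanding of what pd.concat(reader, ignore_index=True) effectively does, which would explain the memory growth (a rough sketch, not the actual pandas internals):

import pandas as pd

filename = "../code/csv/file.csv"
reader = pd.read_csv(filename, chunksize=100)

# Every chunk is pulled from the reader and kept as its own DataFrame,
# then copied into one combined frame, so the whole file (plus that
# final copy) has to fit in memory at once.
chunks = [chunk for chunk in reader]
df = pd.concat(chunks, ignore_index=True)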