I am reading a large 25 GB CSV file into a pandas.DataFrame. My machine specs are:
Reading the file takes a very long time, around 20 minutes. Is there anything I can do better on the code side?
*Note: I need the entire DataFrame, because I am joining (merging) it with another one.
Answer 0: (score: 1)
You can use dask.dataframe:
import dask.dataframe as dd             # dask's parallel, out-of-core DataFrame
df = dd.read_csv('filename.csv')        # lazy read: data is only loaded when computed
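Since the whole frame is only needed for a merge, dask can keep the join lazy as well and only materialize the final result. Below is a minimal sketch, assuming a second file 'other.csv' and a shared 'key' column (both names are placeholders, not from the original post):

import dask.dataframe as dd

df = dd.read_csv('filename.csv')        # lazy: nothing is loaded yet
other = dd.read_csv('other.csv')        # placeholder: the second table to join against
merged = df.merge(other, on='key')      # placeholder key column; the join is still lazy
result = merged.compute()               # materialize the joined result as a pandas DataFrame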
Alternatively, you can read the file in chunks:
import pandas as pd

def chunk_processing(chunk):            # function applied to each chunk
    ## Do Something                     # your processing code here
    return chunk

chunk_list = []                         # list to collect the processed chunks
chunksize = 10 ** 6                     # number of rows per chunk
for chunk in pd.read_csv('filename.csv', chunksize=chunksize):  # iterate over the csv in chunks
    processed_chunk = chunk_processing(chunk)   # process each chunk
    chunk_list.append(processed_chunk)          # collect the result
df_concat = pd.concat(chunk_list)       # concatenate the chunks into one DataFrame
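One usage note: pd.concat still builds the full DataFrame in memory, so chunking helps most when chunk_processing shrinks each chunk before it is collected. A minimal sketch of such a function, assuming a hypothetical 'value' column you can filter on (the column name and condition are placeholders):

def chunk_processing(chunk):
    # hypothetical example: keep only the rows needed for the later merge
    return chunk[chunk['value'] > 0]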