I want to sort the values in a CSV file by timestamp and print them to another file, but for files with many rows Python runs out of memory while reading the file. Is there anything I can do to make this more efficient, or should I use something other than csv.DictReader?
import csv, sys
import datetime
from pathlib import Path

localPath = "C:/MyPath/"

# data variables
dataDir = localPath + "data/"
dataExtension = ".dat"
pathlistData = Path(dataDir).glob('**/*' + dataExtension)

# Generated filename as date, Format: YYYY-DDDTHH
generatedDataDir = localPath + "result/"
#generatedExtension = ".dat"
errorlog = 'errorlog.csv'

fieldnames = ['TimeStamp', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R']
for dataPath in pathlistData:
    # stores our data in a dictionary
    dataDictionary = {}
    dataFileName = str(dataPath).replace('\\', '/')
    newFilePathString = dataFileName.replace(dataDir, generatedDataDir)

    with open(dataPath, 'r') as readFile:
        print("Reading data from " + dataFileName)
        keysAsDate = []
        reader = csv.DictReader(readFile, fieldnames=fieldnames)
        for row in reader:
            try:
                timestamp = row['TimeStamp']
                # create a key based on the timestamp
                timestampKey = datetime.datetime.strptime(timestamp[0:16], "%Y-%jT%H:%M:%S")
                # save this key as a date, used later for sorting
                keysAsDate.append(timestampKey)
                # save the row data in a dictionary
                dataDictionary[timestampKey] = row
            except csv.Error as e:
                sys.exit('file %s, line %d: %s' % (errorlog, reader.line_num, e))

    # sort the keys (the with-block already closed the file, no explicit close() needed)
    keysAsDate.sort()

    with open(newFilePathString, 'w') as writeFile:
        writer = csv.DictWriter(writeFile, fieldnames=fieldnames, lineterminator='\n')
        print("Writing data to " + newFilePathString)
        # loop over the sorted keys
        for idx in range(len(keysAsDate)):
            # get the row from our data dictionary
            writeRow = dataDictionary[keysAsDate[idx]]
            writer.writerow(writeRow)
            if idx % 30000 == 0:
                print("Writing to new file: " + str(int(idx / len(keysAsDate) * 100)) + "%")
    print("Finished writing to file: " + newFilePathString)
Update: I used pandas to split the large file into smaller chunks that I can sort individually. This does not yet solve the problem of misplaced values if I simply append the chunk files one after another.
import pandas as pd

# chunk_size, chunk_shift and timestamp_size are assumed to be defined elsewhere
for dataPath in pathlistData:
    dataFileName = str(dataPath).replace('\\', '/')
    #newFilePathString = dataFileName.replace(dataDir, generatedDataDir)
    print("Reading data from " + dataFileName)
    # divide our large data frame into smaller data frame chunks
    # so we can sort the content in memory
    for df_chunk in pd.read_csv(dataFileName, header=None, chunksize=chunk_size, names=fieldnames):
        # sort each chunk by timestamp (once per chunk is enough)
        sortedChunk = df_chunk.sort_values(['TimeStamp'], ascending=True)
        # name the temp file after the first and last timestamp in the chunk
        # note: the last chunk may contain fewer than chunk_size rows
        firstTimeStampInChunk = sortedChunk[0:1]['TimeStamp']
        #print("first: " + str(firstTimeStampInChunk))
        lastTimeStampInChunk = sortedChunk[chunk_size-1:chunk_size]['TimeStamp']
        #print("last: " + str(lastTimeStampInChunk))
        timestampStr = str(firstTimeStampInChunk)[chunk_shift:timestamp_size+chunk_shift] + str(lastTimeStampInChunk)[chunk_shift:timestamp_size+chunk_shift]
        tempFilePathString = str(timestampStr + dataExtension).replace(':', '_').replace('\\', '/')
        sortedChunk.to_csv('temp/' + tempFilePathString, header=None, index=False)

# data variables
tempDataDir = localPath + "temp/"
tempPathlistData = Path(tempDataDir).glob('**/*' + dataExtension)
tempPathList = list(tempPathlistData)
My algorithm idea (no code yet) for fixing the misplaced values is:
Step 1 - Split into smaller chunks, where chunk_size = the maximum number of rows I can handle, divided by 2.
Step 2 - Loop forward through the files in order, merging two files at a time, sorting them together, then splitting them again so that no file exceeds chunk_size.
Step 3 - Loop backward, merging two files at a time, sorting them together, then splitting them again so that no file exceeds chunk_size.
Step 4 - Now all misplaced low values should have propagated to the lowest chunks and all misplaced high values to the highest chunks. Append the files in order!
Drawback: the time complexity is not attractive at all; if I'm not mistaken it is basically O(N^2). A rough sketch of the idea follows below.
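A minimal sketch of that pairwise merge-sort-split idea, assuming two chunk files fit in memory at a time; mergeAndResplit and the output file name are hypothetical, and chunk_size, fieldnames, tempPathList, generatedDataDir and dataExtension are the variables defined earlier:

import pandas as pd

def mergeAndResplit(pathA, pathB):
    # hypothetical helper: merge two neighbouring chunk files and sort them together...
    combined = pd.concat([
        pd.read_csv(pathA, header=None, names=fieldnames),
        pd.read_csv(pathB, header=None, names=fieldnames),
    ]).sort_values(['TimeStamp'], ascending=True)
    # ...then split them again so that neither file exceeds chunk_size
    combined.iloc[:chunk_size].to_csv(pathA, header=False, index=False)
    combined.iloc[chunk_size:].to_csv(pathB, header=False, index=False)

chunkPaths = sorted(tempPathList)  # chunk files, ordered by their timestamp-based names

# Step 2: forward pass
for i in range(len(chunkPaths) - 1):
    mergeAndResplit(chunkPaths[i], chunkPaths[i + 1])

# Step 3: backward pass
for i in range(len(chunkPaths) - 1, 0, -1):
    mergeAndResplit(chunkPaths[i - 1], chunkPaths[i])

# Step 4: append the chunk files in order into one output file (name assumed)
with open(generatedDataDir + "sorted" + dataExtension, 'w') as outFile:
    for path in chunkPaths:
        with open(path, 'r') as part:
            outFile.write(part.read())

Each pass touches every pair of neighbouring files, which is where the roughly O(N^2) behaviour comes from.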
Answer 0 (score: 1)
Try the pandas CSV reader, which is very efficient (https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html). You can easily convert between pandas DataFrames and dictionaries with https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_dict.html.
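For a file that does fit in memory, a minimal sketch of that suggestion, reusing dataFileName, fieldnames and newFilePathString from the question:

import pandas as pd

# read, sort by the timestamp column and write back out
df = pd.read_csv(dataFileName, header=None, names=fieldnames)
df = df.sort_values('TimeStamp', ascending=True)
df.to_csv(newFilePathString, header=False, index=False)

# or convert to a dictionary keyed by row index, if a dict is more convenient
dataDictionary = df.to_dict(orient='index')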
Answer 1 (score: 0)
You explained that in-memory sorting does not work for you because the file is larger than the available memory. There are at least two ways to solve this; both rely on doing more file I/O, and rough sketches of both follow after this list.
- Make one pass over the file, calling tell() for each record and keeping only the timestamp and file offset in memory. Sort those offsets by timestamp. Then, while iterating over the sorted tuples, repeatedly call seek() to random-read each record and append it to the output file.
- Use /usr/bin/sort for an external merge sort. Windows users can get coreutils GNU sort from https://git-scm.com/download/. Invoke it with the subprocess module.
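A minimal sketch of the first approach, assuming the same file layout as in the question (timestamp in the first comma-separated field) and reusing the strptime format from there:

import datetime

offsets = []  # only (timestampKey, byte offset) pairs are kept in memory
with open(dataFileName, 'r') as readFile:
    while True:
        offset = readFile.tell()          # position of the record we are about to read
        line = readFile.readline()
        if not line:
            break
        timestamp = line.split(',', 1)[0]
        timestampKey = datetime.datetime.strptime(timestamp[0:16], "%Y-%jT%H:%M:%S")
        offsets.append((timestampKey, offset))

offsets.sort()  # sort by timestamp

with open(dataFileName, 'r') as readFile, open(newFilePathString, 'w') as writeFile:
    for _, offset in offsets:
        readFile.seek(offset)             # random read of one record
        writeFile.write(readFile.readline())

And a sketch of the second approach via subprocess, assuming GNU sort is on the PATH; with the YYYY-DDDTHH:MM:SS timestamps a plain lexicographic sort on the first field is already chronological:

import subprocess

# -t, : fields are comma-separated; -k1,1 : sort on the first field (the timestamp)
subprocess.run(["sort", "-t", ",", "-k1,1", "-o", newFilePathString, dataFileName], check=True)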