MemoryError in Python when processing a large file line by line

Asked: 2017-02-24 09:56:09

Tags: python memory io out-of-memory

I am trying to concatenate model output files. The model run was broken into 5 parts, and each output file corresponds to one of those partial runs; because of the way the software writes its output, each file starts labelling the timesteps from 0 again. I have written some code to:

1) concatenate all the output files together
2) edit the merged file to re-label all the timesteps, starting at 0 and increasing by a fixed increment for each one.

The aim is to load this single file into my visualisation software in one go, rather than opening 5 separate windows.

So far, my code raises a MemoryError because of the large files I am working with.

I have some ideas for how I might get rid of it, but I am not sure which would work and/or whether they would slow things to a crawl.

Code so far:

import os
import time

start_time = time.time()

#create new txt file in same folder as python script

open("domain.txt","w").close()


"""create concatenated document of all tecplot output files"""
#look into file number 1

for folder in range(1,6,1): 
    folder = str(folder)
    for name in os.listdir(folder):
        if "domain" in name:
            with open(folder+'/'+name) as file_content_list:
                start = ""
                for line in file_content_list:
                    start = start + line  # + '\n'
                with open('domain.txt','a') as f:
                    f.write(start)
              #  print start

#identify file with "domain" in name
#extract contents
#append to the end of the new document with "domain" in folder level above
#once completed, add 1 to the file number previously searched and do again
#keep going until no more files with a higher number exist

""" replace the old timesteps with new timesteps """
#open folder named domain.txt
#Look for lines:
##ZONE T="0.000000000000e+00s", N=87715, E=173528, F=FEPOINT, ET=QUADRILATERAL
##STRANDID=1, SOLUTIONTIME=0.000000000000e+00
# if they are found edits them, otherwise copy the line without alteration

with open("domain.txt", "r") as combined_output:
    start = ""
    start_timestep = 0
    time_increment = 3.154e10
    for line in combined_output:
        if "ZONE" in line:
            start = start + 'ZONE T="' + str(start_timestep) + 's", N=87715, E=173528, F=FEPOINT, ET=QUADRILATERAL' + '\n'
        elif "STRANDID" in line:
            start = start + 'STRANDID=1, SOLUTIONTIME=' + str(start_timestep) + '\n'
            start_timestep = start_timestep + time_increment
        else:
            start = start + line

    with open('domain_final.txt','w') as f:
        f.write(start)

end_time = time.time()
print('runtime : ', end_time - start_time)

os.remove("domain.txt")

So far, I get the memory error at the concatenation stage.

To improve on this I could:

1) Try to do the corrections on the fly as I read each file in, but since it is already failing to get through an entire file, I don't think that would make much difference other than to the computation time.

2) Load all the files into an array, create a check function, and run that function over the array:

Something like:

def do_correction(line):
    if "ZONE" in line:
        return 'ZONE T="' + str(start_timestep) + 's", N=87715, E=173528, F=FEPOINT, ET=QUADRILATERAL' + '\n'
    elif "STRANDID" in line:
        return 'STRANDID=1, SOLUTIONTIME=' + str(start_timestep) + '\n'
    else:
        return line
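
For illustration, running that function over an in-memory list might look like the sketch below (the all_lines name is hypothetical). Two caveats: everything is still held in memory at once, so this alone would not avoid the MemoryError, and do_correction as written never advances start_timestep, which would still need handling:

# Hypothetical sketch of idea 2: all_lines is assumed to already hold
# every line of the concatenated files - which is itself what exhausts
# the memory, so this does not fix the underlying problem.
corrected = [do_correction(line) for line in all_lines]

with open('domain_final.txt', 'w') as f:
    f.writelines(corrected)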

3) Keep it as it is, and have Python indicate when it is running out of memory, so it can write out to file at that stage. Does anyone know if that is possible?
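
For what it's worth on point 3: CPython does raise a catchable MemoryError when an allocation fails, so a sketch along the lines below is possible, but it is fragile - the operating system may kill the process outright before Python ever gets to raise the exception, so it is not a reliable strategy:

# Hedged sketch of idea 3: flush the accumulated buffer whenever the
# string concatenation fails with MemoryError, then keep going.
# Unreliable: the OS may kill the process before this ever triggers.
with open('domain.txt') as combined_output, open('domain_final.txt', 'w') as out:
    start = ""
    for line in combined_output:
        try:
            start = start + line
        except MemoryError:
            out.write(start)  # write out what has accumulated so far
            start = line      # restart the buffer from the current line
    out.write(start)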

Thanks for your help.

1 Answer:

Answer 0 (score: 2)

There is no need to read the entire contents of each file into memory before writing to the output file. Large files will just consume, possibly all of, the available memory.

Simply read and write one line at a time. Also, open the output file only once, and choose a name for it that will not itself be picked up and treated as an input file; otherwise you risk concatenating the output file onto itself (not a problem yet, but it could become one if you also processed files from the current directory) - assuming loading it did not already consume all the memory.

import os.path

with open('output.txt', 'w') as outfile:
    for folder in range(1, 6):
        for name in os.listdir(str(folder)):
            if "domain" in name:
                with open(os.path.join(str(folder), name)) as file_content_list:
                    for line in file_content_list:
                        # perform corrections/modifications to line here
                        outfile.write(line)

Now you can process the data in a line-oriented manner - simply modify each line before writing it to the output file.
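
Putting the pieces together with the question's own relabelling logic, a minimal end-to-end sketch might look like this (the N/E zone values and the 3.154e10 increment are taken verbatim from the question's code, and are assumed to be the same for every zone, as that code also assumes). Because only one line is ever held in memory, the intermediate domain.txt file and the final os.remove are no longer needed:

import os.path

start_timestep = 0
time_increment = 3.154e10

with open('domain_final.txt', 'w') as outfile:
    for folder in range(1, 6):
        for name in os.listdir(str(folder)):
            if "domain" in name:
                with open(os.path.join(str(folder), name)) as infile:
                    for line in infile:
                        # relabel the two header lines; copy everything else as-is
                        if "ZONE" in line:
                            line = ('ZONE T="' + str(start_timestep)
                                    + 's", N=87715, E=173528, F=FEPOINT, ET=QUADRILATERAL\n')
                        elif "STRANDID" in line:
                            line = 'STRANDID=1, SOLUTIONTIME=' + str(start_timestep) + '\n'
                            start_timestep = start_timestep + time_increment
                        outfile.write(line)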