Issue with Python's garbage collector?

Asked: 2012-09-17 00:55:02

Tags: python numpy h5py

I have a simple program that reads a large file containing a few million rows, parses each row (numpy array), converts it into an array of doubles (python array), and later writes it into an hdf5 file. I repeat this loop for multiple days. After reading each file, I delete all the objects and call the garbage collector. When I run the program, the first day is parsed without any error, but on the second day I get a MemoryError. I monitored my program's memory usage: during the first day of parsing it is around 1.5 GB, and when the first day's parsing finishes it drops to 50 MB. Then, when the second day starts and I try to read the lines from the file, I get a MemoryError. Following is the output of the program.

source file extracted at C:\rfadump\au\2012.08.07.txt
parsing started
current time: 2012-09-16 22:40:16.829000
500000 lines parsed
1000000 lines parsed
1500000 lines parsed
2000000 lines parsed
2500000 lines parsed
3000000 lines parsed
3500000 lines parsed
4000000 lines parsed
4500000 lines parsed
5000000 lines parsed
parsing done.
end time is 2012-09-16 23:34:19.931000
total time elapsed 0:54:03.102000
repacking file
done
> s:\users\aaj\projects\pythonhf\rfadumptohdf.py(132)generateFiles()
-> while single_date <= self.end_date:
(Pdb) c
*** 2012-08-08 ***
source file extracted at C:\rfadump\au\2012.08.08.txt
caught an exception while generating file for day 2012-08-08.
Traceback (most recent call last):
  File "rfaDumpToHDF.py", line 175, in generateFile
    lines = self.rawfile.read().split('|\n')
MemoryError

I am quite sure that the Windows task manager shows this process using about 50 MB. It looks like Python's garbage collector or memory manager is not calculating the free memory correctly: there should be plenty of free memory, but it thinks there is not enough.

Any ideas?

EDIT

Adding my code here:

I will put parts of my code below. I am new to Python, so please pardon my coding style.

Module 1

def generateFile(self, current_date):
    try:
        print "*** %s ***" % current_date.strftime("%Y-%m-%d")
        weekday=current_date.weekday()
        if weekday >= 5:
            print "skipping weekend"
            return
        self.taqdb = taqDB(self.index, self.offset)
        cache_filename = os.path.join(self.cache_dir,current_date.strftime("%Y.%m.%d.h5"))
        outputFile = config.hdf5.filePath(self.index, date=current_date)
        print "cache file: ", cache_filename
        print "output file: ", outputFile

        tempdir = "C:\\rfadump\\"+self.region+"\\"  
        input_filename = tempdir + filename
        print "source file extracted at %s " % input_filename

        ## universe
        reader = rfaTextToTAQ.rfaTextToTAQ(self.tickobj)  ## PARSER
        count = 0
        self.rawfile = open(input_filename, 'r')
        lines = self.rawfile.read().split('|\n')
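        # ^ line 175 of the traceback: read() builds the whole file as one
        #   giant string, and split() then builds a second full copy of it
        #   as a list of records.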
        total_lines = len(lines)
        self.rawfile.close()
        del self.rawfile
        print "parsing started"
        start_time = dt.datetime.now()
        print "current time: %s" % start_time
        #while(len(lines) > 0):
        while(count < total_lines):
            #line = lines.pop(0) ## This slows down processing
            result = reader.parseline(lines[count]+"|")
            count += 1
            if(count % 500000 == 0):
                print "%d lines parsed" %(count)
            if(result == None):
                continue
            ric, timestamp, quotes, trades, levelsUpdated, tradeupdate = result
            if(len(levelsUpdated) == 0 and tradeupdate == False):
                continue
            self.taqdb.insert(result)

        ## write to hdf5 TODO
        writer = h5Writer.h5Writer(cache_filename, self.tickobj)
        writer.write(self.taqdb.groups)
        writer.close()

        del lines
        del self.taqdb, self.tickobj
        ##########################################################
        print "parsing done."
        end_time = dt.datetime.now()
        print "end time is %s" % end_time
        print "total time elapsed %s" % (end_time - start_time)

        defragger = hdf.HDF5Defragmenter()
        defragger.Defrag(cache_filename,outputFile)
        del defragger
        print "done"
        gc.collect(2)
    except:
        print "cought an exception while generating file for day %s." % current_date.strftime("%Y-%m-%d")
        tb = traceback.format_exc()
        print tb
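For reference, the traceback points at lines = self.rawfile.read().split('|\n'), which materializes the entire multi-gigabyte file as one string and then a second full copy as a list of records. Below is a minimal sketch of a streaming alternative, assuming records really are terminated by '|\n'; the helper name and chunk size are illustrative rather than taken from the original code:

def iter_records(path, delimiter='|\n', chunk_size=1 << 20):
    # Stream the file in ~1 MB chunks and yield one record at a time,
    # keeping peak memory near one chunk instead of the whole file.
    buf = ''
    with open(path, 'r') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buf += chunk
            parts = buf.split(delimiter)
            buf = parts.pop()  # hold back the trailing partial record
            for record in parts:
                yield record
    if buf:
        yield buf  # the final record may lack the delimiter

The parsing loop in generateFile would then read roughly:

for count, line in enumerate(iter_records(input_filename), 1):
    result = reader.parseline(line + "|")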

Module 2 - taqdb - stores the parsed data in an array

class taqDB:
  def __init__(self, index, offset):
    self.index = index
    self.tickcfg = config.hdf5.getTickConfig(index)
    self.offset = offset
    self.groups = {}

  def getGroup(self,ric):
    if (self.groups.has_key(ric) == False):
        self.groups[ric] = {}
    return self.groups[ric]

  def getOrderbookArray(self, ric, group):
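    # Returns None for INDEX products, a fresh template book from the
    # tick config when the group is new or empty, and otherwise a 1-row
    # np.array holding the most recently stored order book.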
    datasetname = orderBookName
    prodtype = self.tickcfg.getProdType(ric)
    if(prodtype == ProdType.INDEX):
        return
    orderbookArrayShape = self.tickcfg.getOrderBookArrayShape(prodtype)
    if(group.has_key(datasetname) == False):
        group[datasetname] = array.array("d")
        orderbookArray = self.tickcfg.getOrderBookArray(prodtype)
        return orderbookArray
    else:
        orderbookArray = group[datasetname]
        if(len(orderbookArray) == 0):
            return self.tickcfg.getOrderBookArray(prodtype)
        lastOrderbook = orderbookArray[-orderbookArrayShape[1]:]
        return np.array([lastOrderbook])

  def addToDataset(self, group, datasetname, timestamp, arr):
    if(group.has_key(datasetname) == False):
        group[datasetname] = array.array("d")
    arr[0,0]=timestamp
    a1 = group[datasetname]
    a1.extend(arr[0])

  def addToOrderBook(self, group, timestamp, arr):
    self.addToDataset(group, orderBookName, timestamp, arr)

  def insert(self, data):
    ric, timestamp, quotes, trades, levelsUpdated, tradeupdate = data
    delta = dt.timedelta(hours=timestamp.hour,minutes=timestamp.minute, seconds=timestamp.second, microseconds=(timestamp.microsecond/1000))
    timestamp = float(str(delta.seconds)+'.'+str(delta.microseconds)) + self.offset
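    # NB: str(delta.microseconds) is not zero-padded, so e.g. 1 second +
    # 5 microseconds becomes "1.5" rather than 1.000005; and
    # timestamp.microsecond/1000 above is Python 2 integer division.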
    ## write to array
    group = self.getGroup(ric)

    orderbookUpdate = False
    orderbookArray = self.getOrderbookArray(ric, group)
    nonzero = quotes.nonzero()
    orderbookArray[nonzero] = quotes[nonzero] 
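    # NB: np.any(nonzero) tests the index values themselves, so an update
    # touching only index 0 is treated as "no update"; len(nonzero[0]) > 0
    # is probably the intended check.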
    if(np.any(nonzero)):
        self.addToDataset(group, orderBookName, timestamp, orderbookArray)
    if(tradeupdate == True):
        self.addToDataset(group, tradeName, timestamp, trades)

Thanks.

1 Answer:

Answer 0 (score: 5)

The only reliable way to free memory is to terminate the process.

So if your main program spawns a worker process that does the bulk of the work (the work for one day), then when that worker process completes, the memory it used will be freed:

import multiprocessing as mp
import datetime as dt

def work(date):
    # Do most of the memory-intensive work here
    ...

while single_date <= self.end_date:
    proc = mp.Process(target=work, args=(single_date,))
    proc.start()
    proc.join()
    single_date += dt.timedelta(days=1)  # advance to the next day
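An equivalent formulation, offered as a sketch of my own rather than part of the original answer, uses a multiprocessing.Pool whose single worker is recycled after every task (the maxtasksperchild argument requires Python 2.7+); dates_to_process stands in for a list of the days to run:

import multiprocessing as mp

# One worker, discarded and replaced after each task, so every day's
# memory is returned to the OS when that day's process exits.
pool = mp.Pool(processes=1, maxtasksperchild=1)
pool.map(work, dates_to_process)
pool.close()
pool.join()

Either way, the parent process never performs the huge allocations itself. That matters because the likely cause of the second-day MemoryError is fragmentation of a 32-bit process's address space, which runs out long before physical RAM does; a fresh child process starts each day with a clean address space.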