Speeding up binary file operations in Python

Date: 2013-12-19 19:29:11

Tags: python, file-io, binary

I am using Python to read a large amount of data and split it out into several files, and I am looking for a way to speed up the code I already have. The incoming numbers are little-endian 32-bit floats. I have run a few tests.

The first test completes in 8 minutes:

f = open('filename','rb')
#file_out is a list of many open writing files 'wb'
chunk = True  # prime the loop condition; f.read() returns b'' at EOF
while chunk:
    for i in range(self.num_files):
        chunk = f.read(4)
        file_out[i].write(chunk)

That speed is acceptable, but when I try to add a few operations, things slow down dramatically to 56 minutes:

file_old = [0] * self.num_files  # previous value for each output file
f = open('filename','rb')
#file_out is a list of many open writing files 'wb'
chunk = True  # prime the loop condition; f.read() returns b'' at EOF
while chunk:
    for i in range(self.num_files):
        chunk = f.read(4)
        num_chunk = numpy.fromstring(chunk, dtype = numpy.float32)

        file_out[i].write(num_chunk-file_old[i])
        file_old[i] = num_chunk

I ran cProfile on the code above against a shortened sample. The results were:

write = 3.457

numpy fromstring = 2.274

read = 1.370

How can I speed this up?

2 answers:

Answer 0 (score: 1)

I was able to find a much faster way of reading the data using numpy.fromfile. I wrote a quick little test script, shown below:

from os.path import join
import numpy
import struct
from time import time


def main():

    #Set the path name and filename
    folder = join("Tone_Tests","1khz_10ns_0907153323")
    fn = join(folder,"Channel1.raw32")


    #Test 1
    start = time()
    f = open(fn,'rb')
    array = read_fromstring(f)
    f.close()
    print "Test fromString = ",time()-start
    del array

    #Test 2
    start = time()
    f = open(fn,'rb')
    array = read_struct(f)
    f.close()
    print "Test fromStruct = ",time()-start
    del array

    #Test 3
    start = time()
    f = open(fn,'rb')
    array = read_fromfile(f)
    f.close()
    print "Test fromfile = ",time()-start
    del array


def read_fromstring(f):
    #Use Numpy fromstring, read each 4 bytes, convert, store in list
    data = []

    chunk = f.read(4)

    while chunk:
        num_chunk = numpy.fromstring(chunk, dtype = 'float32')
        data.append(num_chunk)

        chunk = f.read(4)

    return numpy.array(data)

def read_struct(f):
    #Same as the fromstring version, but using the struct module.
    data = []

    chunk = f.read(4)

    while chunk:
        num_chunk = struct.unpack('<f',chunk)
        data.append(num_chunk)

        chunk = f.read(4)

    return numpy.array(data)

def read_fromfile(f):
    return numpy.fromfile(f, dtype = 'float32', count = -1)

The timing output at the terminal was:

Test fromString =  4.43499994278
Test fromStruct =  2.42199993134
Test fromfile =  0.00399994850159

Running python -m cProfile -s time filename.py > profile.txt shows the times to be:

 ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    1.456    1.456    4.272    4.272 Read_Data_tester.py:42(read_fromstring)
        1    1.162    1.162    2.369    2.369 Read_Data_tester.py:56(read_struct)
        1    0.000    0.000    0.005    0.005 Read_Data_tester.py:70(read_fromfile)
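
As a rough sketch of how this could be plugged back into the question's splitting-and-differencing loop (assuming the input interleaves the streams round-robin, as the question's read loop implies; num_files, filename and the channel_%d.out names are placeholders):

import numpy

num_files = 4          # placeholder: number of output streams
filename = 'filename'  # placeholder: interleaved little-endian float32 input

# Read everything in one call instead of 4 bytes at a time.
data = numpy.fromfile(filename, dtype='<f4')

# Drop any trailing partial frame, then view the data as (n_frames, num_files):
# column i holds the samples destined for file_out[i].
n_frames = data.size // num_files
frames = data[:n_frames * num_files].reshape(n_frames, num_files)

# Difference from the previous value in each stream, with 0 as the initial
# "old" value, mirroring file_old = [0] * self.num_files in the question.
diffs = numpy.diff(frames, axis=0)
diffs = numpy.vstack([frames[:1], diffs])

# Write each column out as raw float32, one file per stream.
for i in range(num_files):
    diffs[:, i].astype('<f4').tofile('channel_%d.out' % i)

This way the whole file is read in one call and the differencing is vectorised, so the Python-level loop runs once per output file instead of once per 4-byte sample.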

Answer 1 (score: -2)

I think you could use threads (with the threading module).

That would let you run functions in parallel with your main code, so you could start one a quarter of the way through the file, another halfway through, and so on. That way each one only has to deal with a quarter of the data, so it should only take a quarter of the time.

(I say should because there is some overhead, so it won't be quite that fast.)
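
A rough sketch of what that idea might look like with the threading module (filename, num_threads and the per-chunk work are placeholders; note that for CPU-bound work CPython's global interpreter lock limits how much real speedup plain threads can give):

import os
import threading

filename = 'filename'   # placeholder: raw little-endian float32 input
num_threads = 4         # placeholder: number of worker threads

def process_range(start, size):
    # Each worker opens its own file handle, seeks to its slice,
    # and processes only that byte range (the actual work is a placeholder).
    with open(filename, 'rb') as f:
        f.seek(start)
        data = f.read(size)
        # ... convert / write `data` here ...

total = os.path.getsize(filename)
step = (total // num_threads // 4) * 4   # keep slices aligned to 4-byte samples

threads = []
for n in range(num_threads):
    start = n * step
    size = step if n < num_threads - 1 else total - start
    t = threading.Thread(target=process_range, args=(start, size))
    t.start()
    threads.append(t)

for t in threads:
    t.join()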