More efficient Python input/output

时间:2017-04-07 23:34:51

标签: python loops numpy input processing-efficiency

I need to process more than 10 million spectral data entries. The data is structured as follows: there are roughly 1000 .fits files (.fits is a data-storage format), each file contains roughly 600-1000 spectra, and each spectrum has roughly 4500 elements (so each file yields a ~1000 * 4500 matrix). This means that if I loop over 10 million entries, each spectrum will be read roughly 10 times (or each file will be opened roughly 10,000 times). Although the same spectrum is read roughly 10 times, the reads are not redundant, because each time I extract a different slice of that spectrum.

I have a catalog file that contains all the information I need, such as the coordinates x, y, the radius r, the strength s, and so on. The catalog also contains the information for locating which file I will be reading (identified by n1 and n2) and which spectrum in that file I will use (identified by n3).
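For concreteness, a matching pair of lines might look like this (the values here are hypothetical; the real column layout is whatever spectra_ID.dat and catalog.txt actually use, read positionally as in the code below):

spectra_ID.dat:  3586 55181 413          (n1, n2, n3)
catalog.txt:     150.234 2.567 1.5 18.2  (x, y, r, s)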

My current code is:

import numpy as np
from itertools import izip
import fitsio

x = []
y = []
r = []
s = []
n1 = []
n2 = []
n3 = []
with open('spectra_ID.dat') as file_ID, open('catalog.txt') as file_c:
    for line1, line2 in izip(file_ID,file_c):
        parts1 = line1.split()
        parts2 = line2.split()
        n1.append(parts1[0])
        n2.append(parts1[1])
        n3.append(int(parts1[2]))   # spectrum index; must be an int to be usable as a slice bound
        x.append(float(parts2[0]))         
        y.append(float(parts2[1]))        
        r.append(float(parts2[2]))
        s.append(float(parts2[3]))  

def data_analysis(idx_start,idx_end):  #### loop over 10 million entries
    data_stru = np.zeros((idx_end-idx_start), dtype=[('spec','f4',(200)),('x','f8'),('y','f8'),('r','f8'),('s','f8')])

    for i in xrange(idx_start,idx_end):
        filename = "../../../data/" + str(n1[i]) + "/spPlate-" + str(n1[i]) + "-" + str(n2[i]) + ".fits"
        fits_spectra = fitsio.FITS(filename)
        fluxx = fits_spectra[0][n3[i]-1:n3[i],0:4000]  #### returns a 2-D array with a single row
        flux = fluxx[0]
        hdu = fits_spectra[0].read_header()
        wave_start = hdu['CRVAL1']
        logwave = wave_start + 0.0001 * np.arange(4000)
        wavegrid = np.power(10,logwave)

    ##### After I read the flux and the wavegrid, then I can do my following analysis.

    ##### save data to data_stru

    ##### Reading is the most time-consuming part of this code, my later analysis is not time consuming.

The problem is that the files are too big to load into memory all at once, and my catalog is not structured so that all entries that open the same file are grouped together. I wonder if anyone can offer some ideas for splitting the big loop into two loops: 1) first loop over the files, so we can avoid opening/reading the same file over and over, and 2) loop over the entries that will use the same file.

1 answer:

Answer 0 (score: 1):

If I understand your code correctly, n1 and n2 determine which file to open. So why not just lexsort on them? You can then use itertools.groupby to group records that share the same n1 and n2. Here is a scaled-down proof of concept:

import itertools
import numpy as np

n1 = np.random.randint(0, 3, (10,))
n2 = np.random.randint(0, 3, (10,))
mockdata = np.arange(10)+100

s = np.lexsort((n2, n1))

for k, g in itertools.groupby(zip(s, n1[s], n2[s]), lambda x: x[1:]):
    # groupby groups the items i of its first argument
    # (zip(...) in this case) by the result of applying the
    # optional second argument (here the lambda) to i.
    # We use the lambda expression to strip si from the
    # tuple (si, n1si, n2si) that zip produces, because otherwise
    # equal (n1si, n2si) pairs would still be treated as different
    # due to the distinct si's, and no grouping would occur.
    # Putting si in there in the first place is necessary so that
    # we can reference the other records of the corresponding row
    # in the inner loop.
    print(k)
    for si, n1s, n2s in g:
        # si can be used to access the corresponding other records
        print(si, mockdata[si])

This prints something like:

(0, 1)
4 104
(0, 2)
0 100
2 102
6 106
(1, 0)
1 101
(2, 0)
8 108
9 109
(2, 1)
3 103
5 105
7 107

You may want to include n3 in the lexsort, but not in the grouping, so that you can process each file's contents in order.
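Applied to the original code, the restructuring might look like the sketch below. It is an untested sketch, assuming the lists n1, n2, n3 (and x, y, r, s) from the question have already been filled, that n3 is a 1-based spectrum index as in the original loop, and that fitsio is available. The sort index is named order rather than s to avoid clobbering the strength column s:

import itertools
import numpy as np
import fitsio

# Convert the catalog columns to arrays so fancy indexing works.
n1 = np.asarray(n1)
n2 = np.asarray(n2)
n3 = np.asarray(n3, dtype=int)

# Primary sort on n1, then n2, so all entries that touch the same
# file become adjacent; n3 is the least significant key, so spectra
# are also visited in order within each file.
order = np.lexsort((n3, n2, n1))

for (f1, f2), group in itertools.groupby(zip(order, n1[order], n2[order]),
                                         lambda t: t[1:]):
    filename = "../../../data/" + str(f1) + "/spPlate-" + str(f1) + "-" + str(f2) + ".fits"
    fits_spectra = fitsio.FITS(filename)          # opened once per file
    hdu = fits_spectra[0].read_header()
    wave_start = hdu['CRVAL1']
    logwave = wave_start + 0.0001 * np.arange(4000)
    wavegrid = np.power(10, logwave)              # one wavelength grid per file
    for idx, _, _ in group:
        # idx is the catalog row; x[idx], y[idx], r[idx], s[idx]
        # give the matching catalog values.
        flux = fits_spectra[0][n3[idx]-1:n3[idx], 0:4000][0]
        # ... per-spectrum analysis goes here, saving into data_stru ...
    fits_spectra.close()

This opens each of the ~1000 files once instead of ~10,000 times, while the per-entry analysis still sees the same flux slices as before.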