Batch reading/writing from text files to HDF5 in Python

Date: 2017-08-10 13:53:34

Tags: python numpy hdf5

The goal is to feed large datasets to TensorFlow. I arrived at the implementation below. However, although HDF5 I/O should be very fast, my implementation is slow. Is that because I am not using the chunking feature? I can't seem to get the chunk dimensions right if I treat the chunk size as a third dimension, like (4096, 7, 1000) with a chunk size of 1000?
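For reference: for a 2-D dataset the HDF5 chunk shape is itself 2-D rather than an extra axis. A minimal sketch of passing an explicit chunk shape to h5py, with an illustrative file name:

import h5py

with h5py.File('example.h5', 'w') as f:  # 'example.h5' is illustrative
    # each chunk holds 1000 rows x 7 columns; the dataset grows along axis 0
    dset = f.create_dataset('data', shape=(0, 7), maxshape=(None, 7),
                            chunks=(1000, 7), dtype='float32')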

Note that I could simplify the code below by finding a solution that uses a single generator. However, I think the data/label combination is common enough to be useful to others.

I use the following function to create two generators, one for the data and one for the corresponding labels.

import io
import numpy as np

def read_chunks(file, dim, batch_size):
    # start with an empty (0, dim) array so no uninitialized row ends up in the batch
    chunk = np.empty((0, dim))
    current_size = 0
    # read the input file line by line
    for line in file:
        # parse one line and stack it onto the chunk
        chunk = np.vstack((chunk, np.genfromtxt(io.BytesIO(line.encode()))))
        current_size += 1
        # yield once the chunk reaches the batch size
        if current_size == batch_size:
            yield chunk
            # reset counter and buffer (any trailing partial chunk is discarded)
            current_size = 0
            chunk = np.empty((0, dim))
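As a minimal usage sketch (the file name here is made up), the generator yields one (batch_size, dim) array per batch:

# hypothetical driver: pull one 4096-line batch from a text file
with open('points.txt', 'r') as f:  # 'points.txt' is illustrative
    gen = read_chunks(f, dim=7, batch_size=4096)
    batch = next(gen)  # ndarray of shape (4096, 7)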

I then move the data and labels produced by these generators into HDF5 as follows.

import os
import h5py

def write_h5(data_gen, label_gen, out_file, batch_size, h5_batch_size, data_dtype, label_dtype):
    # remove any existing file
    if os.path.isfile(out_file):
        os.remove(out_file)
    with h5py.File(out_file, 'a') as f:
        # create a resizable dataset and label set in the same file;
        # start with zero rows so the first batch is written at offset 0,
        # and store h5_batch_size rows per HDF5 chunk
        # (data_dim and label_dim are the module-level constants below)
        d = f.create_dataset('data', (0, data_dim), maxshape=(None, data_dim),
                             chunks=(h5_batch_size, data_dim), dtype=data_dtype)
        l = f.create_dataset('label', (0, label_dim), maxshape=(None, label_dim),
                             chunks=(h5_batch_size, label_dim), dtype=label_dtype)
        # use the generators to fill both sets one batch at a time
        for data in data_gen:
            d.resize(d.shape[0] + batch_size, axis=0)
            d[-batch_size:] = data
            l.resize(l.shape[0] + batch_size, axis=0)
            l[-batch_size:] = next(label_gen)

With the following constants, I tie the two functions together:

batch_size = 4096
h5_batch_size = 1000
data_dim = 7 #[NUM_POINT, 9]
label_dim = 1 #[NUM_POINT]
data_dtype = 'float32'
label_dtype = 'uint8'

for data_file, label_file in data_label_files:
    print(data_file)
    with open(data_file, 'r') as data_f, open(label_file, 'r') as label_f:
        data_gen = read_chunks(data_f, dim=data_dim, batch_size=batch_size)
        label_gen = read_chunks(label_f, dim=label_dim, batch_size=batch_size)
        out_file = data_file[:-4] + '.h5'
        write_h5(data_gen, label_gen, out_file, batch_size, h5_batch_size, data_dtype, label_dtype)

1 Answer:

Answer 0 (score: 3)

The problem is not that HDF5 is slow. The problem is that you are reading a single line at a time in a Python loop, calling genfromtxt() once per line! That function is meant to read entire files. On top of that, you use the `array = vstack((array, newstuff))` anti-pattern in the same loop.

In short, your performance problem starts here:

    chunk = np.vstack((chunk, np.genfromtxt(io.BytesIO(line.encode()))))

You should read the entire file at once. If you cannot do that, read a large part of it at a time (you can set a maximum number of lines per read, e.g. one million).
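A minimal sketch of that advice, under the same assumptions as the question's code (whitespace-separated text, one sample per line); the file names, block size, and helper name are only illustrative:

import itertools
import numpy as np
import h5py

MAX_LINES = 1000000  # upper bound on lines parsed per call

def read_block(file, max_lines=MAX_LINES):
    # slice off up to max_lines lines and parse them in ONE genfromtxt call
    lines = list(itertools.islice(file, max_lines))
    if not lines:
        return None
    return np.atleast_2d(np.genfromtxt(lines))

with open('data.txt') as f, h5py.File('data.h5', 'w') as h5:
    dset = None
    while True:
        block = read_block(f)
        if block is None:
            break
        if dset is None:
            # create the dataset from the first block, resizable along axis 0
            dset = h5.create_dataset('data', data=block,
                                     maxshape=(None, block.shape[1]))
        else:
            # append each subsequent block in one resize + one slice assignment
            dset.resize(dset.shape[0] + block.shape[0], axis=0)
            dset[-block.shape[0]:] = block

Parsing up to a million lines per genfromtxt() call amortizes the per-call parsing overhead, and appending block-sized slices keeps the number of HDF5 resize operations small.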