How to append data to an existing dataset in an HDF5 file with h5py

Date: 2017-11-02 10:23:34

Tags: python numpy deep-learning hdf5 h5py

I am looking for a way to append data to an existing dataset inside an .h5 file using Python (h5py).

A brief overview of my project: I am trying to train a CNN on medical image data. Because of the large amount of data and the heavy memory usage during the conversion to NumPy arrays, I need to split the conversion into several chunks: load and preprocess the first 100 medical images and save the NumPy arrays to an HDF5 file; then load the next 100 images and append them to the existing .h5 file, and so on.

At the moment I store the first 100 converted NumPy arrays as follows:

import h5py
from LoadIPV import LoadIPV

X_train_data, Y_train_data, X_test_data, Y_test_data = LoadIPV()

with h5py.File('./PreprocessedData.h5', 'w') as hf:
    # maxshape=(None, ...) leaves axis 0 unlimited so the datasets can be resized later
    hf.create_dataset("X_train", data=X_train_data, maxshape=(None, 512, 512, 9))
    hf.create_dataset("X_test", data=X_test_data, maxshape=(None, 512, 512, 9))
    hf.create_dataset("Y_train", data=Y_train_data, maxshape=(None, 512, 512, 1))
    hf.create_dataset("Y_test", data=Y_test_data, maxshape=(None, 512, 512, 1))

As can be seen, the converted NumPy arrays are split into four different "groups" that are stored in the four HDF5 datasets X_train, X_test, Y_train and Y_test. The LoadIPV() function performs the preprocessing of the medical image data.
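Note that maxshape=(None, 512, 512, 9) is what makes axis 0 resizable later on; when maxshape is passed, h5py switches the dataset to chunked storage automatically. A quick sanity check (a minimal sketch, assuming the file above has already been written):

import h5py

# Inspect the freshly created dataset (read-only mode)
with h5py.File('./PreprocessedData.h5', 'r') as hf:
    ds = hf["X_train"]
    print(ds.shape)     # (100, 512, 512, 9) after the first batch
    print(ds.maxshape)  # (None, 512, 512, 9): axis 0 is unlimited
    print(ds.chunks)    # a chunk shape chosen by h5py, since maxshape requires chunking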

My question is the following: I would like to store the next 100 NumPy arrays in the same .h5 file, appended to the existing datasets. That is, I want to append the next 100 arrays to, e.g., the existing X_train dataset of shape [100, 512, 512, 9], so that X_train becomes [200, 512, 512, 9]. The same should apply to the other three datasets X_test, Y_train and Y_test.

Thanks a lot in advance for your help!

2 Answers:

Answer 0 (score: 22)

I have found a solution that seems to work!

Have a look at this: incremental writes to hdf5 with h5py

To append data to a specific dataset, one first has to resize the dataset along the relevant axis and then write the new data at the end of the "old" array.

Consequently, the solution looks like this:

with h5py.File('./PreprocessedData.h5', 'a') as hf:
    # Grow each dataset along axis 0, then write the new batch into the freshly added rows
    hf["X_train"].resize(hf["X_train"].shape[0] + X_train_data.shape[0], axis=0)
    hf["X_train"][-X_train_data.shape[0]:] = X_train_data

    hf["X_test"].resize(hf["X_test"].shape[0] + X_test_data.shape[0], axis=0)
    hf["X_test"][-X_test_data.shape[0]:] = X_test_data

    hf["Y_train"].resize(hf["Y_train"].shape[0] + Y_train_data.shape[0], axis=0)
    hf["Y_train"][-Y_train_data.shape[0]:] = Y_train_data

    hf["Y_test"].resize(hf["Y_test"].shape[0] + Y_test_data.shape[0], axis=0)
    hf["Y_test"][-Y_test_data.shape[0]:] = Y_test_data

Answer 1 (score: 4)

@Midas.Inc's answer works great. Just to provide a minimal working example for those who are interested:

import numpy as np
import h5py

f = h5py.File('MyDataset.h5', 'a')
for i in range(10):

  # Data to be appended
  new_data = np.ones(shape=(100,64,64)) * i
  new_label = np.ones(shape=(100,1)) * (i+1)

  if i == 0:
    # Create the dataset at first
    f.create_dataset('data', data=new_data, compression="gzip", chunks=True, maxshape=(None,64,64))
    f.create_dataset('label', data=new_label, compression="gzip", chunks=True, maxshape=(None,1)) 
  else:
    # Append new data to it
    f['data'].resize((f['data'].shape[0] + new_data.shape[0]), axis=0)
    f['data'][-new_data.shape[0]:] = new_data

    f['label'].resize((f['label'].shape[0] + new_label.shape[0]), axis=0)
    f['label'][-new_label.shape[0]:] = new_label

  print("I am on iteration {} and 'data' chunk has shape:{}".format(i,f['data'].shape))

f.close()

Code output:

#I am on iteration 0 and 'data' chunk has shape:(100, 64, 64)
#I am on iteration 1 and 'data' chunk has shape:(200, 64, 64)
#I am on iteration 2 and 'data' chunk has shape:(300, 64, 64)
#I am on iteration 3 and 'data' chunk has shape:(400, 64, 64)
#I am on iteration 4 and 'data' chunk has shape:(500, 64, 64)
#I am on iteration 5 and 'data' chunk has shape:(600, 64, 64)
#I am on iteration 6 and 'data' chunk has shape:(700, 64, 64)
#I am on iteration 7 and 'data' chunk has shape:(800, 64, 64)
#I am on iteration 8 and 'data' chunk has shape:(900, 64, 64)
#I am on iteration 9 and 'data' chunk has shape:(1000, 64, 64)
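
To confirm that every batch landed in the file, the datasets can be read back after the loop (a small sketch based on the example above):

import h5py

# Re-open the file and check the final shapes
with h5py.File('MyDataset.h5', 'r') as f:
    print(f['data'].shape)   # (1000, 64, 64)
    print(f['label'].shape)  # (1000, 1)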