Writing to HDF5 and shuffling big data

Date: 2017-01-25 10:50:37

Tags: python memory hdf5 h5py

I have downloaded Caltech101. Its structure is:

#Caltech101 dir
    #class1 dir
        #images of class1 jpgs
    #class2 dir
        #images of class2 jpgs
    ...
    #class100 dir
        #images of class100 jpgs

My problem is that I cannot keep in memory two np arrays x and y of shapes (9144, 240, 180, 3) and (9144,). So my solution was to allocate an h5py dataset, load the data in 2 chunks, and write them to the file one after the other. Precisely:

from __future__ import print_function
import os
import glob
from scipy.misc import imread, imresize
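# note: scipy.misc.imread and imresize were removed in later SciPy releases;
# with a modern SciPy, use e.g. imageio and Pillow instead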
from sklearn.utils import shuffle
import numpy as np
import h5py
from time import time


def load_chunk(images_dset, labels_dset, chunk_of_classes, counter, type_key, prev_chunk_length):
    # getting images and processing
    xtmp = []
    ytmp = []
    for label in chunk_of_classes:
        img_list = sorted(glob.glob(os.path.join(dir_name, label, "*.jpg")))
        for img in img_list:
            img = imread(img, mode='RGB')
            img = imresize(img, (240, 180))
            xtmp.append(img)
            ytmp.append(label)
        print(label, 'done')

    x = np.concatenate([arr[np.newaxis] for arr in xtmp])
    y = np.array(ytmp, dtype=type_key)
    print('x: ', type(x), np.shape(x), 'y: ', type(y), np.shape(y))

    # writing to dataset
    a = time()
    images_dset[prev_chunk_length:prev_chunk_length+x.shape[0], :, :, :] = x
    print(labels_dset.shape)
    print(y.shape, y.shape[0])
    print(type(y), y.dtype)
    print(prev_chunk_length)
    labels_dset[prev_chunk_length:prev_chunk_length+y.shape[0]] = y
    b = time()
    print('Chunk', counter, 'written in', b-a, 'seconds')
    return prev_chunk_length+x.shape[0]


def write_to_file(remove_DS_Store):
    if os.path.isfile('caltech101.h5'):
        print('File exists already')
        return
    else:
        # the name of each dir is the name of a class
        classes = os.listdir(dir_name)
        if remove_DS_Store:
            classes.pop(0)  # removes .DS_Store - may not be present on other systems

        # need the dtype of y in order to initialize h5 dataset
        s = ''
        key_type_y = s.join(['S', str(len(max(classes, key=len)))])
        classes = np.array(classes, dtype=key_type_y)

        # number of chunks in which the dataset must be divided
        nb_chunks = 2
        nb_chunks_loaded = 0
        prev_chunk_length = 0
        # open file and allocating a dataset
        f = h5py.File('caltech101.h5', 'a')
        imgs = f.create_dataset('images', shape=(9144, 240, 180, 3), dtype='uint8')
        labels = f.create_dataset('labels', shape=(9144,), dtype=key_type_y)
        for class_sublist in np.array_split(classes, nb_chunks):
            # loading chunk by chunk in a function to avoid memory overhead
            prev_chunk_length = load_chunk(imgs, labels, class_sublist, nb_chunks_loaded, key_type_y, prev_chunk_length)
            nb_chunks_loaded += 1
        f.close()
        print('Images and labels saved to \'caltech101.h5\'')
    return

dir_name = '../Datasets/Caltech101'
write_to_file(remove_DS_Store=True)

This works really well, and reading is actually fast enough too. The problem is that I need to shuffle the dataset.

Observations:

  • Shuffling the dataset objects directly: obviously very slow because they live on disk.

  • Creating an array of shuffled indices and using advanced numpy indexing: this means slower reading from the file (see the sketch after this list).

  • Shuffling before writing to the file would be nice. Problem: each time I only have about half of the dataset in memory, so I would get an improper shuffling.

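To make the second observation concrete, here is a minimal sketch (not from the question; the batch size of 32 is made up) of reading shuffled mini-batches through index arrays. h5py's fancy indexing requires the indices in increasing order, so each batch has to be sorted before the read and un-sorted afterwards, and every batch becomes one scattered read from disk:

import numpy as np
import h5py

with h5py.File('caltech101.h5', 'r') as f:
    perm = np.random.permutation(f['images'].shape[0])
    batch = perm[:32]          # hypothetical batch size
    order = np.argsort(batch)  # h5py needs increasing indices
    x = f['images'][batch[order]][np.argsort(order)]  # restore shuffled order
    y = f['labels'][batch[order]][np.argsort(order)]
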
Can you think of a way to shuffle before writing? I am also open to solutions that rethink the writing process, as long as they don't use a lot of memory.

1 answer:

Answer 0: (score: 2)

You can shuffle the file paths before reading the image data.

Instead of shuffling the image data in memory, create a list of all file paths that belong to the dataset. Then shuffle that list of file paths. Now you can create the HDF5 database just as before.

For example, you can use glob to create the list of files to shuffle:

import glob
import random

files = glob.glob('../Datasets/Caltech101/*/*.jpg')
random.shuffle(files)  # shuffles the list in place and returns None

Then you can retrieve the class label and image name from each path:

import os

for file_path in files:
    label = os.path.basename(os.path.dirname(file_path))
    image_id = os.path.splitext(os.path.basename(file_path))[0]
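
To then keep the question's two-chunk writing scheme, a minimal sketch (assuming the question's imports, preallocated h5py datasets, and imread/imresize preprocessing; files is the shuffled list from above) splits the shuffled paths instead of the classes, so each written chunk is already a random sample of the whole dataset:

import numpy as np

for chunk_paths in np.array_split(np.array(files), 2):
    xtmp, ytmp = [], []
    for file_path in chunk_paths:
        label = os.path.basename(os.path.dirname(file_path))
        xtmp.append(imresize(imread(file_path, mode='RGB'), (240, 180)))
        ytmp.append(label)
    # write xtmp/ytmp into the preallocated 'images'/'labels' datasets
    # exactly as load_chunk() does, advancing the write offset by len(xtmp)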