How to read multiple .mat files (too large to fit in memory) with the TensorFlow Dataset API

Asked: 2019-09-19 14:51:45

Tags: python tensorflow pickle tensorflow-datasets

I have about 550K samples, each of size 200x50x1. The dataset is about 57 GB.

I want to train a network on this dataset, but I am unable to read it.

import h5py
import numpy as np
import tensorflow as tf

batch_size = 8

def _read_py_function(filename, labels_slice):
    with h5py.File(filename, 'r') as f:
        data_slice = np.asarray(f['feats'])
        print(data_slice.shape)
    return data_slice, labels_slice

placeholder_files = tf.placeholder(tf.string, [None])
placeholder_labels = tf.placeholder(tf.int32, [None])

dataset = tf.data.Dataset.from_tensor_slices((placeholder_files, placeholder_labels))
dataset = dataset.map(
    lambda filename, label: tuple(tf.py_func(
        _read_py_function, [filename, label], [tf.uint8, tf.int32])))

dataset = dataset.shuffle(buffer_size=50000)
dataset = dataset.batch(batch_size)

iterator = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
data_X, data_y = iterator.get_next()
data_y = tf.cast(data_y, tf.int32)

net = conv_layer(inputs=data_X, num_outputs=8, kernel_size=3, stride=2, scope='rcl_0')
net = pool_layer(inputs=net, kernel_size=2, scope='pl_0')

net = dropout_layer(inputs=net, scope='dl_0')

net = flatten_layer(inputs=net, scope='flatten_0')
net = dense_layer(inputs=net, num_outputs=256, scope='dense_0')
net = dense_layer(inputs=net, num_outputs=64, scope='dense_1')
out = dense_layer(inputs=net, num_outputs=10, scope='dense_2')

Then I use:

sess.run(train_iterator, feed_dict={placeholder_files: filenames, placeholder_labels: ytrain})
try:
    while True:
        _, loss, acc = sess.run([train_op, loss_op, accuracy_op])
        train_loss += loss
        train_accuracy += acc
except tf.errors.OutOfRangeError:
    pass
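(Note: train_iterator is not defined in the snippet; with tf.data.Iterator.from_structure it is presumably the initializer op returned by make_initializer. A minimal sketch of that wiring, with the surrounding names assumed, would be:)

# Assumed wiring: a from_structure iterator is (re)bound to a concrete
# dataset through make_initializer, which yields an initializer op.
train_iterator = iterator.make_initializer(dataset)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Running the initializer op feeds the placeholders and rewinds the dataset.
    sess.run(train_iterator, feed_dict={placeholder_files: filenames,
                                        placeholder_labels: ytrain})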

But I get an error even before running the session:

Traceback (most recent call last):
  File "SFCC-trial-134.py", line 297, in <module>
    net = rcnn_layer(inputs=data_X,num_outputs=8, kernel_size=3, stride=2, scope='rcl_0')
  File "SFCC-trial-134.py", line 123, in rcnn_layer
    reuse=False)
  File "SFCC-trial-134.py", line 109, in conv_layer
    reuse         = reuse
  File "/home/priyam.jain/tensorflow-gpu-python3/lib/python3.5/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 183, in func_with_args
    return func(*args, **current_args)
  File "/home/priyam.jain/tensorflow-gpu-python3/lib/python3.5/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1154, in convolution2d
    conv_dims=2)
  File "/home/priyam.jain/tensorflow-gpu-python3/lib/python3.5/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 183, in func_with_args
    return func(*args, **current_args)
  File "/home/priyam.jain/tensorflow-gpu-python3/lib/python3.5/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1025, in convolution
    (conv_dims + 2, input_rank))
TypeError: %d format: a number is required, not NoneType

I do intend to use TFRecords, but I am finding it hard to create them; I could not find a good post explaining how to create them for my dataset.
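(For reference, a minimal sketch of writing and reading such TFRecords; the feature keys 'feats' and 'label', the float32 dtype, the helper names, and the output file name are assumptions, not from the original post:)

import h5py
import numpy as np
import tensorflow as tf

def write_tfrecords(mat_files, labels, out_path):
    # One serialized Example per .mat file: raw float32 bytes plus an int label.
    with tf.python_io.TFRecordWriter(out_path) as writer:
        for filename, label in zip(mat_files, labels):
            with h5py.File(filename, 'r') as f:
                feats = np.asarray(f['feats'], dtype=np.float32)
            example = tf.train.Example(features=tf.train.Features(feature={
                'feats': tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[feats.tobytes()])),
                'label': tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[int(label)])),
            }))
            writer.write(example.SerializeToString())

def parse_example(serialized):
    # Restore the fixed 200x50x1 shape so downstream layers see a static rank.
    parsed = tf.parse_single_example(serialized, {
        'feats': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([], tf.int64),
    })
    feats = tf.reshape(tf.decode_raw(parsed['feats'], tf.float32), [200, 50, 1])
    return feats, tf.cast(parsed['label'], tf.int32)

dataset = tf.data.TFRecordDataset(['train.tfrecord']).map(parse_example)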

conv_layer is defined as follows:

def conv_layer(inputs, num_outputs, kernel_size, stride, normalizer_fn=None, activation_fn=nn.relu, trainable=True, scope='noname', reuse=False):

    net = slim.conv2d(inputs = inputs,
        num_outputs   = num_outputs,
        kernel_size   = kernel_size,
        stride        = stride,
        normalizer_fn = normalizer_fn,
        activation_fn = activation_fn,
        trainable     = trainable,
        scope         = scope,
        reuse         = reuse
        )
    return net

1 Answer:

Answer 0 (score: 0):

Do not pass tf.py_func inside the map function. You can read the image files by passing the function name directly inside map. I am showing only the relevant part of the code.

def _read_py_function(filename, label):
    return tf.zeros((224, 224, 3), dtype=tf.float32), tf.ones((1,), dtype=tf.int32)

dataset = dataset.map(lambda filename, label: _read_py_function(filename, label))

The other change is that your iterator will expect only float inputs, so you have to change the tf.uint8 output type to float.
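(If you do keep tf.py_func, a common fix for the NoneType error in the traceback, assumed here rather than taken from the answer, is to restore the static shape: py_func outputs have unknown rank, so slim.conv2d cannot fill in the %d in its rank-check message. A sketch of both changes together, with the 200x50x1 shape taken from the question:)

def _read_py_function(filename, labels_slice):
    with h5py.File(filename, 'r') as f:
        data_slice = np.asarray(f['feats'], dtype=np.float32)
    return data_slice, labels_slice

def _read_wrapper(filename, label):
    data, label = tf.py_func(
        _read_py_function, [filename, label], [tf.float32, tf.int32])
    # py_func loses the static shape; without this, slim.conv2d raises
    # "TypeError: %d format: a number is required, not NoneType".
    data.set_shape([200, 50, 1])
    label.set_shape([])
    return data, label

dataset = dataset.map(_read_wrapper)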