Efficient way to load an .npz file with a TensorFlow iterator

Date: 2018-08-28 14:35:22

Tags: python tensorflow neural-network deep-learning conv-neural-network

I have a large .npz numpy training file and I would like to read it more efficiently. I tried to follow the approach from the TensorFlow documentation (https://www.tensorflow.org/guide/datasets#consuming_numpy_arrays):

"As an alternative, you can define the Dataset in terms of tf.placeholder() tensors, and feed the NumPy arrays when you initialize an Iterator over the dataset."
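For reference, the pattern the guide describes looks roughly like this (a minimal sketch, assuming TensorFlow 1.x; the array names, shapes, and batch size below are made up for illustration):

import numpy as np
import tensorflow as tf

# Toy stand-ins for the arrays stored in a large .npz file (hypothetical shapes).
features = np.random.rand(1000, 32).astype(np.float32)
labels = np.random.randint(0, 10, size=1000).astype(np.int32)

# Placeholders keep the arrays out of the serialized graph; the data is
# fed exactly once, when the iterator is initialized.
features_ph = tf.placeholder(features.dtype, features.shape)
labels_ph = tf.placeholder(labels.dtype, labels.shape)

dataset = tf.data.Dataset.from_tensor_slices((features_ph, labels_ph)).batch(32)
iterator = dataset.make_initializable_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer,
             feed_dict={features_ph: features, labels_ph: labels})
    batch_features, batch_labels = sess.run(next_batch)  # first batch of 32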

However, after implementing the iterator, my model consumes more than twice as much memory as before. Do you have any clue what might be going wrong here?

import numpy as np
import tensorflow as tf

# IMGSIZE, n_landmark, BATCH_SIZE, and args are defined elsewhere in the script.

def model(batch_size):
    x = tf.placeholder(tf.float32, [None, IMGSIZE, IMGSIZE, 1])
    y = tf.placeholder(tf.float32, [None, n_landmark * 2])
    z = tf.placeholder(tf.int32, [None])

    # Build the dataset from the placeholders: the arrays are only fed
    # when the iterator is initialized, not embedded in the graph.
    dataset = tf.data.Dataset.from_tensor_slices((x, y, z)).batch(batch_size)
    iter_ = dataset.make_initializable_iterator()
    InputImage, GroundTruth, GroundTruth_Em = iter_.get_next()

    Ret_dict = {}
    Ret_dict['x'] = x
    Ret_dict['y'] = y
    Ret_dict['z'] = z
    Ret_dict['iterator'] = iter_

    Conv1a = tf.layers.conv2d(InputImage, 64, 3, 1, ..)
    (...)
    return Ret_dict

def main():
    trainSet = np.load(args.datasetDir)
    Xtrain = trainSet['Image']
    Ytrain = trainSet['Label_1']
    Ytrain_em = trainSet['Label_2']

    with tf.Session() as sess:
        my_model = model(BATCH_SIZE)
        Saver = tf.train.Saver()
        Saver.restore(sess, args.pretrainedModel)

        # Initialize the iterator and run the optimizer, feeding the
        # full training arrays through the placeholders.
        sess.run(
            [my_model['Optimizer'], my_model['iterator'].initializer],
            feed_dict={my_model['x']: Xtrain,
                       my_model['y']: Ytrain,
                       my_model['z']: Ytrain_em})

0 Answers:

No answers yet.