GPU runs out of memory when training a convolutional neural network on TensorFlow

Asked: 2018-08-20 15:32:19

Tags: python tensorflow deep-learning conv-neural-network batching

I am training a convolutional neural network on about 9000 images (300x500) on a GTX 1080 Ti with TensorFlow 1.9, but I run out of memory every time. I get a warning that an allocation exceeds 10% of system memory, and a few minutes later the process is killed. My code is below.

import tensorflow as tf
from os import listdir

train_path = '/media/NewVolume/colorizer/img/train/'  
col_train_path = '/media/NewVolume/colorizer/img/colored/train/'
val_path = '/media/NewVolume/colorizer/img/val/'
col_val_path = '/media/NewVolume/colorizer/img/colored/val/'

def load_image(image_file):
    image = tf.read_file(image_file)
    image = tf.image.decode_jpeg(image)
    return image

train_dataset = []
col_train_dataset = []
val_dataset = []
col_val_dataset = []

for i in listdir(train_path): 
    train_dataset.append(load_image(train_path + i))
    col_train_dataset.append(load_image(col_train_path + i))

for i in listdir(val_path): 
    val_dataset.append(load_image(val_path + i))
    col_val_dataset.append(load_image(col_val_path + i))

train_dataset = tf.stack(train_dataset)
col_train_dataset = tf.stack(col_train_dataset)
val_dataset = tf.stack(val_dataset)
col_val_dataset = tf.stack(col_val_dataset)

input1 = tf.placeholder(tf.float32, [None, 300, 500, 1])
color = tf.placeholder(tf.float32, [None, 300, 500, 3])

#MODEL

conv1 = tf.layers.conv2d(inputs = input1, filters = 8, kernel_size=[5, 5], activation=tf.nn.relu, padding = 'same')
pool1 = tf.layers.max_pooling2d(inputs = conv1, pool_size=[2, 2], strides=2)
conv2 = tf.layers.conv2d(inputs = pool1, filters = 16, kernel_size=[5, 5], activation=tf.nn.relu, padding = 'same')
pool2 = tf.layers.max_pooling2d(inputs = conv2, pool_size=[2, 2], strides=2)
conv3 = tf.layers.conv2d(inputs = pool2, filters = 32, kernel_size=[5, 5], activation=tf.nn.relu, padding = 'same')
pool3 = tf.layers.max_pooling2d(inputs = conv3, pool_size=[2, 2], strides=2)

flat = tf.layers.flatten(inputs = pool3)
dense = tf.layers.dense(flat, 2432, activation = tf.nn.relu)
reshaped = tf.reshape(dense, [tf.shape(dense)[0],38, 64, 1])

conv_trans1 = tf.layers.conv2d_transpose(inputs = reshaped, filters = 32, kernel_size=[5, 5], activation=tf.nn.relu, padding = 'same')
upsample1 = tf.image.resize_nearest_neighbor(conv_trans1, (2*tf.shape(conv_trans1)[1],2*tf.shape(conv_trans1)[2]))

conv_trans2 = tf.layers.conv2d_transpose(inputs = upsample1, filters = 16, kernel_size=[5, 5], activation=tf.nn.relu, padding = 'same')
upsample2 = tf.image.resize_nearest_neighbor(conv_trans2, (2*tf.shape(conv_trans2)[1],2*tf.shape(conv_trans2)[2]))
conv_trans3 = tf.layers.conv2d_transpose(inputs = upsample2, filters = 8, kernel_size=[5, 5], activation=tf.nn.relu, padding = 'same')
upsample3 = tf.image.resize_nearest_neighbor(conv_trans3, (2*tf.shape(conv_trans3)[1],2*tf.shape(conv_trans3)[2]))

conv_trans4 = tf.layers.conv2d_transpose(inputs = upsample3, filters = 3, kernel_size=[5, 5], activation=tf.nn.relu, padding = 'same')

reshaped2 = tf.reshape(dense, [tf.shape(conv_trans4)[0],300,500,3])

#TRAINING

loss = tf.losses.mean_squared_error(color, reshaped2)
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)

EPOCHS = 10
BATCH_SIZE = 3

dataset = tf.data.Dataset.from_tensor_slices((train_dataset,col_train_dataset)).repeat().batch(BATCH_SIZE)
iterator = dataset.make_one_shot_iterator()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(EPOCHS):
        x,y=iterator.get_next()
        _, loss_value = sess.run([train_step, loss],feed_dict={input1:x.eval(session=sess),color:y.eval(session=sess)})
        print("Iter: {}, Loss: {:.4f}".format(i, loss_value))

1 Answer:

Answer 0 (score: 1):

I think your problem is in the following code.

def load_image(image_file):
    image = tf.read_file(image_file)
    image = tf.image.decode_jpeg(image)
    return image

...

for i in listdir(train_path): 
    train_dataset.append(load_image(train_path + i))
    col_train_dataset.append(load_image(col_train_path + i))

You are trying to use TF tensor operations as if they were regular code. What you actually end up with are nodes on the graph that are only evaluated inside a session. In this case you are trying to load every single image from your training and validation datasets into GPU memory (since the session runs on the GPU), and my guess is that your images take up more memory than your GPU has.
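To make this concrete, here is a minimal standalone sketch (the file name is hypothetical) showing that load_image only adds decode ops to the graph; no pixels are read until a session evaluates the node:

import tensorflow as tf

def load_image(image_file):
    image = tf.read_file(image_file)
    image = tf.image.decode_jpeg(image)
    return image

img = load_image('/media/NewVolume/colorizer/img/train/example.jpg')  # hypothetical file
print(img)  # prints a symbolic tensor, something like: Tensor("DecodeJpeg:0", shape=(?, ?, ?), dtype=uint8)

with tf.Session() as sess:
    pixels = sess.run(img)  # only now is the JPEG actually read and decoded

Building ~9000 such nodes and stacking them with tf.stack means the first sess.run has to materialize every image at once.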

There are several ways to fix this. You can make the load_image operation part of the graph and feed the image file names of each batch in your training loop. You can build a proper input pipeline that handles the file names, batching, and loading of the file data inside the graph. Or you can use an external library to load the images into numpy arrays and feed those arrays into the graph.
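A minimal sketch of the input-pipeline option, assuming every image is exactly 300x500 and reusing the paths from the question; only one batch of decoded images is resident at a time, and the model would be built directly on x and y instead of on placeholders:

import tensorflow as tf
from os import listdir

train_path = '/media/NewVolume/colorizer/img/train/'
col_train_path = '/media/NewVolume/colorizer/img/colored/train/'

# Keep only the file names in host memory; decoding happens per batch inside the graph.
names = listdir(train_path)
gray_files = [train_path + n for n in names]
col_files = [col_train_path + n for n in names]

def parse(gray_file, col_file):
    gray = tf.image.decode_jpeg(tf.read_file(gray_file), channels=1)
    col = tf.image.decode_jpeg(tf.read_file(col_file), channels=3)
    gray = tf.cast(tf.reshape(gray, [300, 500, 1]), tf.float32)  # assumes fixed 300x500 images
    col = tf.cast(tf.reshape(col, [300, 500, 3]), tf.float32)
    return gray, col

BATCH_SIZE = 3
dataset = (tf.data.Dataset.from_tensor_slices((gray_files, col_files))
           .map(parse)
           .repeat()
           .batch(BATCH_SIZE))
x, y = dataset.make_one_shot_iterator().get_next()

Note that x and y are created once, before the session; calling iterator.get_next() inside the training loop (as in the question) keeps adding new nodes to the graph on every iteration.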