How to deal with GPU memory not being released (Keras)?

Date: 2016-10-03 17:44:26

Tags: keras

import gc
import time

import numpy as np

train_generator = train_generator()
test_generator = test_generator()
print "...training model"
history = []
for i in xrange(NB_EPOCHS):
    start_time = time.time()
    # Accumulate the 5 loss values (total + hg1..hg4) as numpy arrays;
    # a plain Python list would be extended by +=, not summed element-wise.
    avg_train_loss = np.zeros(5)
    avg_test_loss = np.zeros(5)
    for j in xrange(NB_TRAIN_ITERATIONS):
        (batch_data, batch_heatmaps) = train_generator.next()
        loss = model.train_on_batch(batch_data, batch_heatmaps)
        avg_train_loss += np.array(loss)
    avg_train_loss /= NB_TRAIN_ITERATIONS
    for j in xrange(NB_TEST_ITERATIONS):
        (batch_data, batch_heatmaps) = test_generator.next()
        loss = model.test_on_batch(batch_data, batch_heatmaps)
        avg_test_loss += np.array(loss)
    avg_test_loss /= NB_TEST_ITERATIONS
    end_time = time.time()
    print "[Epoch %d]" % (i + 1)
    print "Time spent: %.2f seconds" % (end_time - start_time)
    print "Total train loss : %.16f | Total validation loss : %.16f" % (avg_train_loss[0], avg_test_loss[0])
    print "  hg1 train loss : %.16f |   hg1 validation loss : %.16f" % (avg_train_loss[1], avg_test_loss[1])
    print "  hg2 train loss : %.16f |   hg2 validation loss : %.16f" % (avg_train_loss[2], avg_test_loss[2])
    print "  hg3 train loss : %.16f |   hg3 validation loss : %.16f" % (avg_train_loss[3], avg_test_loss[3])
    print "  hg4 train loss : %.16f |   hg4 validation loss : %.16f" % (avg_train_loss[4], avg_test_loss[4])
    history.append([avg_train_loss, avg_test_loss])
    gc.collect()  # force a garbage collection after each epoch
np.save(PATH_HISTORY, history)
model.save(PATH_MODEL)
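
As a diagnostic, it can help to log the GPU's actual memory usage at the end of each epoch (next to the gc.collect() call) to confirm that usage really grows epoch over epoch rather than just sitting at the backend's initial allocation. A minimal sketch, assuming the nvidia-smi CLI is on PATH; the helper name gpu_memory_used_mb is mine, not part of the code above:

import subprocess

def gpu_memory_used_mb():
    # Query used GPU memory in MiB via nvidia-smi (first device only).
    output = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"])
    return int(output.decode().split()[0])

Printing this once per epoch, e.g. print "GPU MiB used: %d" % gpu_memory_used_mb(), gives a per-epoch memory curve that separates a genuine leak from one-time preallocation.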

The above is my code. As you can see, I call gc.collect() after every epoch, yet my GPU runs out of memory on the 3rd epoch. I would have thought that the memory used for the 1st epoch would be freed by the time it reaches the 3rd epoch, especially since this is a large architecture and each epoch takes a long time to run, but that was not the case. Can anyone tell me what I did wrong? I am using a Pascal Titan X with 12GB of memory. I have tried both the TensorFlow and Theano backends.
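
One caveat worth checking before concluding that memory leaks per epoch: with the TensorFlow backend, TF by default preallocates nearly all of the GPU's memory at startup, so seeing ~12GB in use does not by itself mean each epoch consumes more. A minimal sketch of switching to on-demand allocation, assuming the Session-era TensorFlow API that Keras 1.x exposes via keras.backend:

import tensorflow as tf
from keras import backend as K

# Grow the GPU allocation on demand instead of grabbing it all up front,
# so that real per-epoch growth becomes visible.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))

This has to run before the model is built; with allow_growth enabled, an actual out-of-memory at epoch 3 would then point at something genuinely accumulating across epochs rather than at preallocation.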

0 Answers:

No answers yet.