Freeing GPU memory during cross-validation

Posted: 2016-12-07 07:24:11

Tags: neural-network gpu theano keras cross-validation

I'm trying to run cross-validation on an image-classification network, using Keras with the Theano backend and scikit-learn's KFold to split the data. Training runs fine for 3 folds, but then I get an out-of-memory error on the GPU.

I'm not doing anything to free GPU memory at the end of each fold. Can anyone tell me whether it is possible to clear the GPU memory before starting a new fold?
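For reference, my loop looks roughly like this (a minimal, self-contained sketch; build_model and the tiny network inside it are placeholders for my actual classifier):

import numpy as np
from sklearn.model_selection import KFold
from keras.models import Sequential
from keras.layers import Dense, Flatten

def build_model():
    # placeholder for the real image-classification network
    model = Sequential([Flatten(input_shape=(32, 32, 3)),
                        Dense(10, activation='softmax')])
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    return model

data = np.random.rand(100, 32, 32, 3)
labels = np.eye(10)[np.random.randint(0, 10, size=100)]  # one-hot labels

kfold = KFold(n_splits=10, shuffle=True)
for train, test in kfold.split(data, labels):
    # a brand-new model is built and compiled on every fold;
    # nothing here frees the GPU memory held by the previous one
    model = build_model()
    model.fit(data[train], labels[train], epochs=1)
    model.evaluate(data[test], labels[test])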

1 answer:

Answer 0 (score: 0)

I ran into the same problem recently. What follows is not a perfect solution, because it doesn't truly clear the memory.

My suggestion, however, is to create and compile the model once, save its initial weights, and then reload those weights at the start of each fold.

Something like the code below:

from sklearn.model_selection import KFold
import numpy as np
from keras.applications import VGG16

# We create our model only once
def create_model():
    model_vgg16_conv = VGG16(weights='imagenet', include_top=True)

    model_vgg16_conv.compile(optimizer="adam", loss="mean_squared_error")
    return model_vgg16_conv, model_vgg16_conv.get_weights()

# we re-initialize the model multiple times
def init_weight(same_old_model, first_weights):
    # start each fold from the saved initial weights
    weights = first_weights
    ## uncomment the line below to reshuffle the weights themselves, so they are not exactly the same between folds
    # weights = [np.random.permutation(x.flat).reshape(x.shape) for x in first_weights]

    same_old_model.set_weights(weights)


model_vgg16_conv, weights = create_model()


# we create random data compliant with the VGG16 input shape and the 1000 ImageNet labels
data = np.random.randint(0, 255, size=(100, 224, 224, 3))
labels = np.random.randint(0, 2, size=(100, 1000))  # the upper bound is exclusive, so this yields 0s and 1s

cvscores = []
kfold = KFold(n_splits=10, shuffle=True)
for train, test in kfold.split(data, labels):
    print("Initializing Weights...")
    ## instead of creating a new model, we just reset its weights
    init_weight(model_vgg16_conv, weights)

    # fit as usual, but using the split that came from KFold
    model_vgg16_conv.fit(data[train], labels[train], epochs=2)

    scores = model_vgg16_conv.evaluate(data[test], labels[test])

    # evaluation: with no extra metrics compiled, evaluate() returns the loss as a scalar
    print("%s: %.4f" % (model_vgg16_conv.metrics_names[0], scores))
    cvscores.append(scores)

print("%.2f (+/- %.2f)" % (np.mean(cvscores), np.std(cvscores)))
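The key point is that the model is built and compiled exactly once, outside the KFold loop. With the Theano backend, each compiled model holds its parameters on the GPU, so constructing a fresh model per fold keeps allocating new buffers while the old ones are only released whenever Python happens to garbage-collect them; reusing one model and calling set_weights() keeps the footprint constant. One caveat: set_weights() restores the layer weights but not the optimizer's internal state (e.g. Adam's moment estimates), so folds after the first do not start from a perfectly identical training state.

If you genuinely need the memory released between folds (this goes beyond the approach above), a common workaround is to run each fold in its own process, since the driver reclaims all GPU memory when a process exits. A rough sketch, assuming the same random data as above:

import multiprocessing as mp
import numpy as np
from sklearn.model_selection import KFold

def run_fold(train, test, data, labels, queue):
    # importing Keras inside the child keeps all GPU state local to this
    # process, and it is released automatically when the process exits
    from keras.applications import VGG16
    model = VGG16(weights='imagenet', include_top=True)
    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(data[train], labels[train], epochs=2)
    queue.put(model.evaluate(data[test], labels[test]))

if __name__ == '__main__':
    data = np.random.randint(0, 255, size=(100, 224, 224, 3))
    labels = np.random.randint(0, 2, size=(100, 1000))
    queue = mp.Queue()
    cvscores = []
    for train, test in KFold(n_splits=10, shuffle=True).split(data):
        p = mp.Process(target=run_fold, args=(train, test, data, labels, queue))
        p.start()
        cvscores.append(queue.get())  # collect this fold's score
        p.join()
    print("%.4f (+/- %.4f)" % (np.mean(cvscores), np.std(cvscores)))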