Memory leak with Keras and TensorFlow 1.8.0

Asked: 2018-05-14 13:13:22

Tags: python-3.x tensorflow memory-leaks keras deep-learning

I need help minimizing a memory leak in the code below.

I am using Keras with TensorFlow 1.8.0 and Python 3.6.

As the program runs, its memory usage gradually grows into the gigabytes. I need help.

I am classifying images with a VGG16 network, and I cannot localize what is causing the memory leak.

Is this a TensorFlow bug, or does Python struggle with this kind of workload?

The code is:

import fnmatch
import operator
import os
import random
import time

import cv2
import numpy as np
from keras import applications
from keras.preprocessing.image import load_img, img_to_array

class_labels = ['cc','','cc','xx']

image = load_img(img_path, target_size=target_size)

image_arr = img_to_array(image) # convert from PIL Image to NumPy array
image_arr /= 255

image_arr = np.expand_dims(image_arr, axis=0)

model = applications.VGG16(include_top=False, weights='imagenet')  

bottleneck_features = model.predict(image_arr) 

model = create_top_model("softmax", bottleneck_features.shape[1:])

model.load_weights("res/_top_model_weights.h5")
numpy_horizontal_concat = cv2.imread(img_path)
xxx=1
path ="/home/dataset/test"
listOfFiles = os.listdir(path)
random.shuffle(listOfFiles)
pattern = "*.jpg"
model = applications.VGG16(include_top=False, weights='imagenet')

for entry in listOfFiles:
    if fnmatch.fnmatch(entry, pattern):
        image = load_img(path+"/"+ entry, target_size=target_size)
        start_time = time.time()

        image_arr = img_to_array(image)  # convert from PIL Image to NumPy array
        image_arr /= 255

        image_arr = np.expand_dims(image_arr, axis=0)

        bottleneck_features = model.predict(image_arr)

        model2 = create_top_model("softmax", bottleneck_features.shape[1:])

        model2.load_weights("res/_top_model_weights.h5")


        predicted = model2.predict(bottleneck_features)
        decoded_predictions = dict(zip(class_labels, predicted[0]))
        decoded_predictions = sorted(decoded_predictions.items(), key=operator.itemgetter(1), reverse=True)
        elapsed_time = time.time() - start_time

        print()
        count = 1
        for key, value in decoded_predictions[:5]:
            print("{}. {}: {:8f}%".format(count, key, value * 100))
            print("time:  " , time.strftime("%H:%M:%S", time.gmtime(elapsed_time)) , "  - " , elapsed_time)
            count += 1

        #OPENCV concat test
        #numpy_horizontal_concat = np.concatenate((mat_image,numpy_horizontal_concat), axis=0)

        hide_img = True
        model2=""
        predicted=""
        image_arr=""
        image=""
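One way to narrow down where the growth comes from, independent of TensorFlow, is the standard-library `tracemalloc` module, which reports the allocation sites holding the most memory. A minimal, self-contained sketch (the `leaky` list here is a hypothetical stand-in for whatever accumulates per iteration):

```python
import tracemalloc

tracemalloc.start()

leaky = []
for i in range(1000):
    # Stand-in for per-iteration allocations that are never released
    leaky.append(bytearray(1024))

snapshot = tracemalloc.take_snapshot()
top_stat = snapshot.statistics("lineno")[0]
print(top_stat)  # the biggest allocation site, with file name and line number
```

Note that allocations made inside TensorFlow's C++ runtime will not appear here; if `tracemalloc` shows nothing growing on the Python side, the leak is in the native runtime.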

1 Answer:

Answer 0 (score: 12)

In your for loop, you build a new model with loaded weights on every iteration. Each model is built inside the TensorFlow session, and you never reset that session. So the session accumulates model after model without ever deleting any of them.
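The mechanism can be illustrated with a toy stand-in for the session graph: a global registry that keeps a reference to everything built in it, so rebinding a local variable (as the question's `model2=""` lines do) frees nothing:

```python
# Toy stand-in for TensorFlow's default graph: a global registry that
# holds a reference to every model ever built in it.
GLOBAL_GRAPH = []

class ToyModel:
    def __init__(self, name):
        self.name = name
        GLOBAL_GRAPH.append(self)  # the graph keeps this object alive

for i in range(3):
    model2 = ToyModel("top_model_%d" % i)
    model2 = ""  # rebinds the local name only; the graph still holds the object

print(len(GLOBAL_GRAPH))  # 3 -- every model built is still alive
```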

There are two possible solutions:

  1. Refactor your code so that the model is built and its weights loaded only once. This will also make your code much faster.
  2. Reset the session:

I strongly recommend the first solution, but if that is not feasible:

    from keras import backend as K
    K.clear_session()
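Solution 1 amounts to hoisting the model construction out of the loop. A structural sketch, with hypothetical stand-in functions in place of the Keras calls (in the real code these would be `applications.VGG16(...)` and `create_top_model(...)` plus `load_weights(...)`):

```python
# Hypothetical stand-ins so the loop structure is runnable here;
# the real code would build and load the Keras models instead.
def build_feature_extractor():
    return lambda image_arr: [sum(image_arr)]   # stand-in for model.predict

def build_top_model():
    return lambda features: [0.1, 0.7, 0.2]    # stand-in for model2.predict

# Build and load both models ONCE, before the loop.
feature_extractor = build_feature_extractor()
top_model = build_top_model()

predictions = []
for entry in ["a.jpg", "b.jpg", "c.jpg"]:
    image_arr = [1, 2, 3]                    # stand-in for the preprocessed image
    features = feature_extractor(image_arr)
    predictions.append(top_model(features))  # no new graph nodes per image

print(len(predictions))  # one prediction per image; the session stays flat
```

Because nothing new is added to the session graph inside the loop, memory stays constant no matter how many images are processed.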