Training this model causes a memory leak

Time: 2019-11-09 15:46:04

Tags: python tensorflow keras

I have been training a model, and watching htop I can see memory usage grow on every iteration. Looking around, most answers say the graph must be growing because a new model is loaded each iteration, or because new ops are added, but I do neither of those. Here is a minimal reproducible example.

from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential  # needed: create_model calls Sequential()
from tensorflow.keras.optimizers import Adam
import tensorflow as tf
import numpy as np

#%% Params
OBSERVATION_SPACE_VALUES = 4
ACTION_SPACE_SIZE = 2
LEARNING_RATE = 0.00025/4
DENSE_PARAMS = [256]

class Network():
    def __init__(self, state_size=OBSERVATION_SPACE_VALUES, action_size=ACTION_SPACE_SIZE,
                 learning_rate=LEARNING_RATE, dense_params=DENSE_PARAMS):

        self.state_size = state_size
        self.action_size = action_size
        self.learning_rate = learning_rate
        self.model = self.create_model(dense_params)

    def create_model(self, dense_params=[256]):

        model = Sequential()
        for units in dense_params:
            model.add(Dense(units, activation='relu', input_shape=[self.state_size]))

        model.add(Dense(self.action_size, activation="linear"))
        model.compile(loss="mse", optimizer=Adam(lr=self.learning_rate))
        return model

Agent = Network()

for i in range(10_000):
    state = np.random.rand(Agent.state_size)
    state = np.expand_dims(state, axis=0)
    output = np.random.rand(Agent.action_size)
    output = np.expand_dims(output, axis=0)
    Agent.model.fit(state, output, verbose=True)

Also:

tf.__version__
2.0.0
tf.keras.__version__
2.2.4-tf

1 Answer:

Answer 0 (score: -1)

The problem is the repeated `.fit` calls. To fix this, you can:

  • build a data generator for your data and call `.fit(epochs=10000)` once

  • keep the for loop but call `train_on_batch` instead (doc)
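A minimal sketch of the second suggestion, assuming the same architecture and hyperparameters as the question (the model is built inline so the snippet is self-contained, and the iteration count is reduced for illustration):

```python
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

# Same network as in the question: 4 inputs, one hidden layer, 2 linear outputs.
model = Sequential([
    Dense(256, activation='relu', input_shape=[4]),
    Dense(2, activation='linear'),
])
model.compile(loss='mse', optimizer=Adam(learning_rate=0.00025 / 4))

# train_on_batch runs a single gradient update without the per-call setup
# that .fit performs, so the loop avoids the growth seen with repeated .fit.
for i in range(100):  # 10_000 in the original question
    state = np.random.rand(1, 4)   # one fake state, batch dimension first
    target = np.random.rand(1, 2)  # one fake target output
    loss = model.train_on_batch(state, target)
```

The first suggestion works the same way: wrap the random-sample code in a Python generator that yields `(state, target)` pairs and pass it to a single `model.fit(...)` call, so Keras drives the loop instead of repeated `.fit` invocations.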