I'm writing some code to optimize a neural network architecture, so I have a Python function create_nn(parms) that creates and initializes a Keras model.
However, the problem I'm running into is that after a number of iterations, the models take far longer to train than usual (initially an epoch takes 10 seconds, then after roughly the 14th model (each model trains for 20 epochs) it takes 60 seconds/epoch).
I know this isn't caused by the evolving architecture, because if I restart the script and start from where it ended, it returns to normal speed.
I'm currently running
from keras import backend as K
and then
K.clear_session()
after training each new model.
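The search loop described above can be sketched without any Keras code (a pure-Python sketch; build, train, and clear_session below are hypothetical stand-ins for create_nn, model.fit, and K.clear_session, which are not shown in the question):

```python
def run_search(build, train, param_list, clear_session):
    """Train one model per parameter set, clearing the backend after each."""
    times = []
    for parms in param_list:
        model = build(parms)        # stands in for create_nn(parms)
        times.append(train(model))  # stands in for model.fit(...) over 20 epochs
        clear_session()             # K.clear_session(), called *after* training
    return times
```

The point of the accepted answer below is that the clear_session call belongs before the build step of each trial, not after training.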
Some other details:
For the first 12 models, training time holds roughly steady at 10 seconds/epoch. Then, on the 13th model, training time climbs steadily to 60 seconds/epoch, and hovers around 60 seconds/epoch thereafter.
I'm running Keras with TensorFlow as the backend.
I'm using an Amazon EC2 t2.xlarge instance.
There is plenty of free RAM (7 GB free, with a 5 GB dataset).
I've removed a bunch of layers and parameters, but essentially create_nn
looks like:
from keras.layers import (Input, GaussianNoise, Convolution1D, Activation,
                          Flatten, Dense, BatchNormalization, Dropout)
from keras.models import Model

def create_nn(features, timesteps, number_of_filters):
    inputs = Input(shape=(timesteps, features))
    x = GaussianNoise(stddev=0.005)(inputs)
    # Layer 1.1
    x = Convolution1D(number_of_filters, 3, padding='valid')(x)
    x = Activation('relu')(x)
    x = Flatten()(x)
    x = Dense(10)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Dropout(0.5)(x)
    # Output layer
    outputs = Dense(1, activation='sigmoid')(x)
    model = Model(inputs=inputs, outputs=outputs)
    # Compile and return
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    print('CNN model built successfully.')
    return model
Note that while a Sequential
model would work in this dummy example, the actual use case requires the functional API.
How can I fix this?
Answer 0 (score: 0)
Short answer: you need to call tf.keras.backend.clear_session()
before each new model you create.
This problem only appears when eager execution is turned off.
OK, let's run an experiment with and without clear_session. The code for make_model
is at the end of this answer.
First, let's look at the training time when using clear_session. We'll run this experiment 10 times and print the results:
non_seq_time = [ make_model(clear_session=True) for _ in range(10)]
non sequential
Elapse = 1.06039
Elapse = 1.20795
Elapse = 1.04357
Elapse = 1.03374
Elapse = 1.02445
Elapse = 1.00673
Elapse = 1.01712
Elapse = 1.021
Elapse = 1.17026
Elapse = 1.04961
As you can see, the training time stays constant.
Now let's rerun the experiment without clear_session and look at the training time:
non_seq_time = [ make_model(clear_session=False) for _ in range(10)]
non sequential
Elapse = 1.10954
Elapse = 1.13042
Elapse = 1.12863
Elapse = 1.1772
Elapse = 1.2013
Elapse = 1.31054
Elapse = 1.27734
Elapse = 1.32465
Elapse = 1.32387
Elapse = 1.33252
As you can see, without clear_session the training time keeps growing.
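Why does this happen? With eager execution disabled, every make_model call adds its ops to the same default TensorFlow graph, so each model is built on top of an ever-growing graph, and clear_session replaces that graph with a fresh one. Here is a pure-Python analogy of that mechanism (no TensorFlow required; _default_graph, build_model, and clear_session are illustrative stand-ins, not the real TF internals):

```python
# A toy stand-in for the TF1-style default graph: shared state that
# keeps growing unless it is explicitly reset.
_default_graph = []

def build_model(n_ops=5):
    # Every model built without clearing adds its ops to the SAME graph,
    # so the runtime has more and more to manage on each training step.
    _default_graph.extend(range(n_ops))
    return len(_default_graph)  # total ops now in the graph

def clear_session():
    # tf.keras.backend.clear_session() does the real equivalent of this.
    _default_graph.clear()
```

Building three models in a row without clearing yields graph sizes 5, 10, 15; calling clear_session before each build keeps it at 5 every time.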
# Training time increases - and how to fix it
# Setup and imports
# %tensorflow_version 2.x
import tensorflow as tf
import tensorflow.keras.layers as layers
import tensorflow.keras.models as models
from time import time

# if you comment this out, the problem doesn't happen
# it only happens when eager execution is disabled !!
tf.compat.v1.disable_eager_execution()

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# flatten the 28x28 images to length-784 vectors to match the Input shape below
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Let's build that network
def make_model(activation="relu", hidden=2, units=100, clear_session=False):
    # -----------------------------------
    #   HERE WE CAN TOGGLE CLEAR SESSION
    # -----------------------------------
    if clear_session:
        tf.keras.backend.clear_session()
    start = time()
    inputs = layers.Input(shape=[784])
    x = inputs
    for num in range(hidden):
        x = layers.Dense(units=units, activation=activation)(x)
    outputs = layers.Dense(units=10, activation="softmax")(x)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    results = model.fit(x_train, y_train, validation_data=(x_test, y_test),
                        batch_size=200, verbose=0)
    elapse = time() - start
    print(f"Elapse = {elapse:8.6}")
    return elapse

# Let's try it out and time it
# prime it first
make_model()
print("Use clear session")
non_seq_time = [make_model(clear_session=True) for _ in range(10)]
print("Don't use clear session")
non_seq_time = [make_model(clear_session=False) for _ in range(10)]