Computing the Tensorflow loss (MSE) per iteration and over time

Time: 2020-07-28 20:09:07

Tags: python tensorflow machine-learning neural-network tensorboard

I want to use Tensorboard to plot the mean squared error (y-axis) for every iteration over a given time frame (x-axis), e.g. 5 minutes.

However, I can only plot the MSE per epoch and set a callback to stop after 5 minutes. This does not solve my problem.

I have tried searching the internet for a way to set a maximum number of iterations instead of epochs when calling model.fit, but with no luck. I know that an iteration is the number of batches needed to complete one epoch, but since I want to tune batch_size, I prefer to work in iterations.
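Since model.fit counts in epochs, one workaround is to convert a target iteration count into an epoch count by hand: one epoch contains ceil(n_samples / batch_size) iterations. A minimal sketch (the numbers in the example are hypothetical, not taken from the question's dataset):

```python
import math

def epochs_for_iterations(max_iterations, n_samples, batch_size):
    """Convert a target number of iterations (batches) into an epoch count."""
    iterations_per_epoch = math.ceil(n_samples / batch_size)  # batches per epoch
    return math.ceil(max_iterations / iterations_per_epoch)

# e.g. 10_000 samples with batch_size 32 gives 313 iterations per epoch,
# so 1000 iterations require 4 epochs (hypothetical numbers)
print(epochs_for_iterations(1000, 10_000, 32))  # → 4
```

The resulting value can then be passed as the epochs argument of model.fit, keeping the total number of iterations roughly constant as batch_size changes.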

My code currently looks like this:

import tensorflow as tf
from tensorflow import keras
import tensorflow_addons as tfa  # provides the TimeStopping callback

input_size = len(train_dataset.keys())
output_size = 10
hidden_layer_size = 250
n_epochs = 3

weights_initializer = keras.initializers.GlorotUniform()

#A function that trains and validates the model and returns the MSE
def train_val_model(run_dir, hparams):
    model = keras.models.Sequential([
            #Layer to be used as an entry point into a Network
            keras.layers.InputLayer(input_shape=[len(train_dataset.keys())]),
            #Dense layer 1
            keras.layers.Dense(hidden_layer_size, activation='relu', 
                               kernel_initializer = weights_initializer,
                               name='Layer_1'),
            #Dense layer 2
            keras.layers.Dense(hidden_layer_size, activation='relu', 
                               kernel_initializer = weights_initializer,
                               name='Layer_2'),
            #activation function is linear since we are doing regression
            keras.layers.Dense(output_size, activation='linear', name='Output_layer')
                                ])
    
    #Use the stochastic gradient descent optimizer but change batch_size to get BSG, SGD or MiniSGD
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.0,
                                        nesterov=False)
    
    #Compiling the model
    model.compile(optimizer=optimizer, 
                  loss='mean_squared_error', #Computes the mean of squares of errors between labels and predictions
                  metrics=['mean_squared_error']) #Computes the mean squared error between y_true and y_pred
    
    # initialize TimeStopping callback 
    time_stopping_callback = tfa.callbacks.TimeStopping(seconds=5*60, verbose=1)
    
    #Training the network
    history = model.fit(normed_train_data, train_labels, 
         epochs=n_epochs,
         batch_size=hparams['batch_size'], 
         verbose=1,
         #validation_split=0.2,
         callbacks=[tf.keras.callbacks.TensorBoard(run_dir + "/Keras"), time_stopping_callback])
    
    return history

#train_val_model("logs/sample", {'batch_size': len(normed_train_data)})
train_val_model("logs/sample1", {'batch_size': 1})
%tensorboard --logdir_spec=BSG:logs/sample,SGD:logs/sample1

This results in:

[plot: x-axis: epochs, y-axis: MSE]

The desired output should look like this:

[plot: x-axis: minutes, y-axis: MSE]

2 Answers:

Answer 0 (score: 0)

The reason you cannot do this per iteration is that the loss is only computed at the end of each epoch. If you want to tune the batch size, run a fixed number of epochs for each candidate and evaluate. Start at 16 and increase in powers of 2, and see how much that improves the network. That said, while larger batch sizes are often said to improve performance, it is not that important to focus on this alone. Focus on other parts of the network first.
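The suggested sweep can be sketched as follows. Note that train_and_eval is a hypothetical stand-in for a real training-and-validation run (such as the question's train_val_model) that returns a validation MSE:

```python
def batch_size_sweep(train_and_eval, start=16, stop=512):
    """Try batch sizes in powers of 2 and return the (batch_size, mse) pair
    with the lowest validation MSE."""
    results = {}
    batch_size = start
    while batch_size <= stop:
        results[batch_size] = train_and_eval(batch_size)  # validation MSE
        batch_size *= 2
    return min(results.items(), key=lambda kv: kv[1])

# toy stand-in: pretend MSE improves until batch size 128, then worsens
print(batch_size_sweep(lambda b: abs(b - 128) / 128.0))  # → (128, 0.0)
```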

Answer 1 (score: 0)

The answer is actually quite simple.

tf.keras.callbacks.TensorBoard has an update_freq argument that lets you control when losses and metrics are written to TensorBoard. The default is 'epoch', but you can change it to 'batch', or to an integer if you want to write to TensorBoard every n batches. See the documentation for more information: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/TensorBoard
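For example, to log every 50 batches instead of once per epoch (a sketch; the log directory mirrors the one used in the question, and 50 is an arbitrary choice):

```python
import tensorflow as tf

# update_freq accepts 'epoch' (default), 'batch', or an integer N,
# meaning: write losses and metrics to TensorBoard every N batches
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir="logs/sample1/Keras",
    update_freq=50,
)

# then pass it to model.fit alongside the existing TimeStopping callback:
# model.fit(..., callbacks=[tensorboard_callback, time_stopping_callback])
```

With an integer update_freq and TensorBoard's horizontal axis set to relative or wall time, the per-batch MSE can be viewed against elapsed minutes rather than epochs.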