How to visualize metrics from a callback in TensorBoard?

Time: 2019-09-25 15:36:23

Tags: keras tensorboard

I have a model in Keras in which I use a custom metric implemented as a callback:

class MyMetrics(keras.callbacks.Callback):
    def __init__(self):
        super().__init__()
        self.initial_value = 0
    def on_train_begin(self, logs={}):
        ...
    def on_epoch_end(self, epoch, logs={}):
        # here I calculate my important values
        ...

Now, is there a way to visualize them in TensorBoard? For example, if my metric were something like:

def mymetric(y_true,y_pred):
    return myImportantValues

I could visualize it in TensorBoard with mymodel.compile(..., metrics=[mymetric]).

Is there an equivalent for a metrics callback? I tried creating a function inside the MyMetrics class and passing it to mymodel.compile, but the value never updates.

2 answers:

Answer 0 (score: 0)

You can create an event file containing your custom metrics and visualize it directly in TensorBoard.

This works for TensorFlow 2.0. In this example, the accuracy/metrics are logged from the training history. In your case, you can do the same from the on_epoch_end callback.

import datetime
import tensorflow as tf
current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_log_dir = 'logs/train/' + current_time
train_summary_writer = tf.summary.create_file_writer(train_log_dir)

history = model.fit(x=X, y=y, epochs=100, verbose=1)
for epoch in range(len(history.history['accuracy'])):
    with train_summary_writer.as_default():
        tf.summary.scalar('loss', history.history['loss'][epoch], step=epoch)
        tf.summary.scalar('accuracy', history.history['accuracy'][epoch], step=epoch)
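The same idea can be moved inside the callback itself, so the scalar is written as each epoch finishes rather than after training. A minimal sketch, assuming TensorFlow 2.x; the class name and `compute_my_metric` are illustrative placeholders for whatever value you compute in on_epoch_end:

```python
import tensorflow as tf

class SummaryMetrics(tf.keras.callbacks.Callback):
    """Writes a custom per-epoch value to a TensorBoard event file."""
    def __init__(self, log_dir):
        super().__init__()
        # one writer for the whole training run
        self.writer = tf.summary.create_file_writer(log_dir)

    def on_epoch_end(self, epoch, logs=None):
        # placeholder: replace with your own computation
        value = self.compute_my_metric()
        with self.writer.as_default():
            tf.summary.scalar('my_metric', value, step=epoch)
        self.writer.flush()

    def compute_my_metric(self):
        # stand-in for the "important values" computed in the question
        return 0.5
```

Passing an instance via `callbacks=[SummaryMetrics('logs/train/...')]` to model.fit then produces the same kind of event file as the history-based loop above.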

After executing the script, run:

tensorboard --logdir logs/train

https://www.tensorflow.org/tensorboard/r2/get_started#using_tensorboard_with_other_methods

Answer 1 (score: 0)

You first need to create a custom callback:

import numpy as np
from sklearn.metrics import f1_score
from tensorflow.keras.callbacks import Callback

class CustomLogs(Callback):
    def __init__(self, validation_data=()):
        super(CustomLogs, self).__init__()
        self.X_val, self.y_val = validation_data

    def on_train_begin(self, logs={}):
        # at the start of training, create a list to collect f1_scores
        self.model.f1_scores = []

    def on_epoch_end(self, epoch, logs={}):
        # calculating the micro-averaged f1_score
        val_predict_proba = np.array(self.model.predict(self.X_val))
        val_predict = np.round(val_predict_proba)
        val_targ = self.y_val
        # using scikit-learn's f1_score
        f1 = f1_score(val_targ, val_predict, average='micro')
        # appending the f1_score for every epoch
        self.model.f1_scores.append(f1)
        print('micro_f1_score: ', f1)

# initialize your callback with validation data
customLogs = CustomLogs(validation_data=(X_test, Y_test))

# no change in the compile method
model.compile(optimizer='Adam', loss='CategoricalCrossentropy')

# pass customLogs and validation_data to the fit method
model.fit(X_train,
          Y_train,
          batch_size=32,
          validation_data=(X_test, Y_test),
          callbacks=[customLogs],
          epochs=20)

# after fit, access the collected f1_scores
f1_scores = model.f1_scores

# write the summary for TensorBoard
import tensorflow as tf

log_dir = '/log'
writer = tf.summary.create_file_writer(log_dir)
for idx in range(len(f1_scores)):
    with writer.as_default(step=idx + 1):
        tf.summary.scalar('f1_scores', f1_scores[idx])
writer.flush()

Now launch: tensorboard --logdir /log. You can see the plot of f1_scores under the TensorBoard Scalars tab.
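For reference, the micro-averaged F1 that scikit-learn computes in the callback above can be sketched by hand: micro averaging pools true positives, false positives, and false negatives over all labels before computing a single F1. A minimal sketch, assuming binary indicator targets; `micro_f1` and `threshold` are illustrative names, and thresholding at 0.5 stands in for the np.round step:

```python
import numpy as np

def micro_f1(y_true, y_prob, threshold=0.5):
    """Micro-averaged F1: pool TP/FP/FN across all labels, then compute F1 once."""
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))  # correctly predicted positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # spurious positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # missed positives
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0
```

This makes it clear why micro averaging is insensitive to per-class imbalance in the way macro averaging is not: every prediction contributes to the same global counts.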