TensorFlow 2.0: displaying a progress bar in a custom training loop

Asked: 2019-09-17 09:11:28

Tags: python tensorflow progress-bar

I'm training a CNN for an audio classification task, and I'm using TensorFlow 2.0 RC with a custom training loop (as described in this guide on the official website). Just as with the usual Keras model.fit, I find it very handy to have a nice progress bar.

This is an outline of my training code (I'm using 4 GPUs with a mirrored distribution strategy):

strategy = tf.distribute.MirroredStrategy()

distr_train_dataset = strategy.experimental_distribute_dataset(train_dataset)

if valid_dataset:
    distr_valid_dataset = strategy.experimental_distribute_dataset(valid_dataset)

with strategy.scope():

    model = build_model() # build the model

    optimizer = # define optimizer
    train_loss = # define training loss
    train_mean_loss = # running mean of the training loss
    train_metrics_1 = # AUC-ROC
    train_metrics_2 = # AUC-PR
    val_metrics_1 = # AUC-ROC for validation
    val_metrics_2 = # AUC-PR for validation
    val_loss = # running mean of the validation loss

    # rescale loss
    def compute_loss(labels, predictions):
        per_example_loss = train_loss(labels, predictions)
        return per_example_loss/config.batch_size

    def train_step(batch):
        audio_batch, label_batch = batch
        with tf.GradientTape() as tape:
            logits = model(audio_batch)
            loss = compute_loss(label_batch, logits)
        variables = model.trainable_variables
        grads = tape.gradient(loss, variables)
        optimizer.apply_gradients(zip(grads, variables))

        train_metrics_1.update_state(label_batch, logits)
        train_metrics_2.update_state(label_batch, logits)
        train_mean_loss.update_state(loss)
        return loss

    def valid_step(batch):
        audio_batch, label_batch = batch
        logits = model(audio_batch, training=False)
        loss = compute_loss(label_batch, logits)

        val_metrics_1.update_state(label_batch, logits)
        val_metrics_2.update_state(label_batch, logits)
        val_loss.update_state(loss)
        return loss

    @tf.function
    def distributed_train(dataset):
        num_batches = 0
        for batch in dataset:
            num_batches += 1
            strategy.experimental_run_v2(train_step, args=(batch, ))
            # print progress here
            tf.print('Step', num_batches, '; Loss', train_mean_loss.result(), '; ROC_AUC', train_metrics_1.result(), '; PR_AUC', train_metrics_2.result())
            gc.collect()

    @tf.function
    def distributed_valid(dataset):
        for batch in dataset:
            strategy.experimental_run_v2(valid_step, args=(batch, ))
            gc.collect()

for epoch in range(epochs):
    distributed_train(distr_train_dataset)
    gc.collect()
    train_metrics_1.reset_states()
    train_metrics_2.reset_states()
    train_mean_loss.reset_states()

    if valid_dataset:
        distributed_valid(distr_valid_dataset)
        gc.collect()
        val_metrics_1.reset_states()
        val_metrics_2.reset_states()
        val_loss.reset_states()

Here, train_dataset and valid_dataset are two tf.data.TFRecordDataset objects generated with the usual tf.data input pipeline.

TensorFlow provides a very nice tf.keras.utils.Progbar (it is indeed what you see when training with model.fit). I've had a look at its source code, and it relies on numpy, so I can't use it in place of my tf.print() statement (which executes in graph mode).

How can I implement a similar progress bar in my custom training loop (with the training function running in graph mode)?

And how does model.fit display a progress bar in the first place?

3 Answers:

Answer 0 (score: 4):

A progress bar for a custom training loop can be generated with the following code:

from tensorflow.keras.utils import Progbar
import time 
import numpy as np

metrics_names = ['acc','pr'] 

num_epochs = 5
num_training_samples = 100
batch_size = 10

for i in range(num_epochs):
    print("\nepoch {}/{}".format(i+1,num_epochs))

    pb_i = Progbar(num_training_samples, stateful_metrics=metrics_names)

    for j in range(num_training_samples//batch_size):

        time.sleep(0.3)  # stand-in for the actual training step

        values=[('acc',np.random.random(1)), ('pr',np.random.random(1))]

        pb_i.add(batch_size, values=values)

Output:

epoch 1/5
100/100 [==============================] - 3s 30ms/step - acc: 0.2169 - pr: 0.9011

epoch 2/5
100/100 [==============================] - 3s 30ms/step - acc: 0.7815 - pr: 0.4900

epoch 3/5
100/100 [==============================] - 3s 30ms/step - acc: 0.8003 - pr: 0.9292

epoch 4/5
100/100 [==============================] - 3s 30ms/step - acc: 0.8280 - pr: 0.9113

epoch 5/5
100/100 [==============================] - 3s 30ms/step - acc: 0.8497 - pr: 0.1929
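A note on the snippet above: metric names passed via stateful_metrics are displayed with the latest value reported, whereas any other value handed to the bar is averaged over the steps seen so far (which is usually what you want for a running loss).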

Answer 1 (score: 2):

@Shubham Malaviya's answer is perfect.

I just want to extend it further for the case where you are iterating over a tf.data.Dataset. This code is also based on this answer.

import tensorflow as tf
import numpy as np
import time 

# From https://www.tensorflow.org/guide/data#reading_input_data
(images_train, labels_train), (images_test, labels_test) = tf.keras.datasets.fashion_mnist.load_data()

images_train = images_train/255
images_test = images_test/255

dataset_train = tf.data.Dataset.from_tensor_slices((images_train, labels_train))
dataset_test = tf.data.Dataset.from_tensor_slices((images_test, labels_test))

# From @Shubham Malaviya https://stackoverflow.com/a/60094207/8682939
metrics_names = ['train_loss','val_loss'] 
num_epochs = 2
num_training_samples = images_train.shape[0]
batch_size = 10

# Loop on each epoch
for epoch in range(num_epochs):

  print("\nepoch {}/{}".format(epoch+1,num_epochs))

  progBar = tf.keras.utils.Progbar(num_training_samples, stateful_metrics=metrics_names)

  # Loop on each batch of train dataset
  for idX, (batch_x, batch_y) in enumerate(dataset_train.batch(batch_size)): 

    # Train the model
    train_loss = np.random.random(1)

    values=[('train_loss',train_loss)]

    progBar.update(idX*batch_size, values=values) 


  # Loop on each batch of test dataset for validation
  for batch_x, batch_y in dataset_test.batch(batch_size):

    # Forward the image batch through the network
    # -----
    # Calc the loss
    val_loss = np.random.random(1)


  # Update progBar with val_loss
  values=[('train_loss',train_loss),('val_loss',val_loss)]

  progBar.update(num_training_samples, values=values, finalize=True)

Output:

epoch 1/2 60000/60000 [==============================] - 1s 22us/step
- train_loss: 0.7019 - val_loss: 0.0658

epoch 2/2 60000/60000 [==============================] - 1s 21us/step
- train_loss: 0.5561 - val_loss: 0.0324
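One detail about the Progbar API used in these two answers: update(current, ...) sets the absolute position of the bar, while add(n, ...) advances it by n, so the two snippets are equivalent ways of stepping the bar. A minimal illustration:

from tensorflow.keras.utils import Progbar

bar = Progbar(100)                      # target: 100 steps/samples in total
bar.update(30, values=[('loss', 0.5)])  # jump to absolute position 30/100
bar.add(10, values=[('loss', 0.4)])     # advance by 10, i.e. to 40/100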

Answer 2 (score: 0):


"How can I implement a similar progress bar in my custom training loop (with the training function running in graph mode)?"

Why don't you change the structure of your code so that the individual strategy.experimental_run_v2 calls are wrapped in a function decorated with tf.function and made to return the metrics you want to display, and then iterate over the dataset in a non-decorated for loop where you can use tf.keras.utils.Progbar? A sketch of this restructuring follows.
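A minimal sketch of that restructuring, reusing the names from the question's code (strategy, train_step, the metric objects, distr_train_dataset, epochs); steps_per_epoch is assumed to be the known number of batches per epoch:

@tf.function
def distributed_train_step(batch):
    # Only the per-batch work runs in graph mode.
    strategy.experimental_run_v2(train_step, args=(batch,))
    # Return the metric results so they can be read eagerly outside the graph.
    return (train_mean_loss.result(),
            train_metrics_1.result(),
            train_metrics_2.result())

for epoch in range(epochs):
    print('\nepoch {}/{}'.format(epoch + 1, epochs))
    # The metrics are already running aggregates, so display them as-is.
    progbar = tf.keras.utils.Progbar(
        steps_per_epoch, stateful_metrics=['loss', 'ROC_AUC', 'PR_AUC'])

    # Plain Python loop: the numpy-based Progbar works fine here.
    for batch in distr_train_dataset:
        loss, roc_auc, pr_auc = distributed_train_step(batch)
        progbar.add(1, values=[('loss', loss.numpy()),
                               ('ROC_AUC', roc_auc.numpy()),
                               ('PR_AUC', pr_auc.numpy())])

    train_mean_loss.reset_states()
    train_metrics_1.reset_states()
    train_metrics_2.reset_states()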


"How does model.fit display the progress bar in the first place?"

In version 2, model.fit displays the progress bar by means of a ProgbarLogger callback object which, together with the other specified callbacks, is handled by the log-processing methods of TrainingContext. Honestly, I'm not too sure how to implement a similar mechanism in a custom training loop, but it may be worth investigating the default implementation, whose source is here.
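For completeness, here is a rough sketch of driving that callback by hand outside of model.fit. This is only an illustration, not a supported interface: the parameter keys ('verbose', 'epochs', 'steps', 'metrics') follow the TF 2.0-era callback API and may change between versions, and the random loss is a stand-in for a real training step.

import numpy as np
import tensorflow as tf

epochs = 2
steps_per_epoch = 50

# ProgbarLogger is the callback that model.fit uses to draw the bar.
progbar_cb = tf.keras.callbacks.ProgbarLogger(count_mode='steps')
progbar_cb.set_params({
    'verbose': 1,
    'epochs': epochs,
    'steps': steps_per_epoch,
    'metrics': ['loss'],  # log keys the logger is allowed to display
})

progbar_cb.on_train_begin()
for epoch in range(epochs):
    progbar_cb.on_epoch_begin(epoch)
    for step in range(steps_per_epoch):
        progbar_cb.on_train_batch_begin(step)
        loss = float(np.random.random())  # stand-in for a real train step
        progbar_cb.on_train_batch_end(step, logs={'loss': loss})
    progbar_cb.on_epoch_end(epoch, logs={'loss': loss})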