Monitoring validation metrics during training with the tf.estimator API

Date: 2018-09-07 16:42:17

Tags: python tensorflow machine-learning tensorflow-estimator

Say I want to write a machine learning model to solve a supervised classification problem. I have data X (n_samples, n_features) (the features obtained for each sample in my dataset) and the corresponding labels y (n_samples, n_labels). To train the model, I split X (resp. y) into X_train, X_val and X_test (resp. y_train, y_val and y_test). I then fit the model on (X_train, y_train) while monitoring some metrics (loss, accuracy, ...) on (X_val, y_val), which helps to avoid overfitting. Once the model is trained, I evaluate it on (X_test, y_test). I believe this is the classical setup.
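For concreteness, that split could be done like this (a hypothetical sketch using scikit-learn; the 60/20/20 proportions are arbitrary):

from sklearn.model_selection import train_test_split

# Hold out 40% of the samples, then cut the held-out part in half:
# 60% train, 20% validation, 20% test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5)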

In the TensorFlow tutorial on custom estimators, I found some code that uses the famous Iris dataset and implements a dense neural network (DNN) with two hidden layers. The full code is below.

The model code:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import tensorflow as tf
import iris_data

parser = argparse.ArgumentParser()
parser.add_argument('--batch_size', default=100, type=int, help='batch size')
parser.add_argument('--train_steps', default=2000, type=int, help='number of training steps')

def my_model(features, labels, mode, params):
    """DNN classifier whose hidden layers are given by params['hidden_units']."""
    # Stack fully connected ReLU layers on top of the input feature columns.
    net = tf.feature_column.input_layer(features, params['feature_columns'])
    for units in params['hidden_units']:
        net = tf.layers.dense(net, units=units, activation=tf.nn.relu)

    # Compute logits (1 per class).
    logits = tf.layers.dense(net, params['n_classes'], activation=None)

    # Compute predictions.
    predicted_classes = tf.argmax(logits, 1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {
            'class_ids': predicted_classes[:, tf.newaxis],
            'probabilities': tf.nn.softmax(logits),
            'logits': logits,
        }
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    # Compute loss.
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    # Compute evaluation metrics.
    accuracy = tf.metrics.accuracy(labels=labels, predictions=predicted_classes, name='acc_op')
    metrics = {'accuracy': accuracy}
    tf.summary.scalar('accuracy', accuracy[1])

    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=metrics)

    # Create training op.
    assert mode == tf.estimator.ModeKeys.TRAIN

    optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

def main(argv):
    args = parser.parse_args(argv[1:])

    # Fetch the data
    (train_x, train_y), (val_x, val_y), (test_x, test_y) = iris_data.load_data()

    # Feature columns describe how to use the input.
    my_feature_columns = []
    for key in train_x.keys():
        my_feature_columns.append(tf.feature_column.numeric_column(key=key))

    # Build 2 hidden layer DNN with 10, 10 units respectively.
    classifier = tf.estimator.Estimator(
        model_fn=my_model,
        params={
            'feature_columns': my_feature_columns,
            # Two hidden layers of 10 nodes each.
            'hidden_units': [10, 10],
            # The model must choose between 3 classes.
            'n_classes': 3,
        })

    # Train the model

    early_stopping = tf.contrib.estimator.stop_if_no_decrease_hook(classifier, metric_name='validation_accuracy', max_steps_without_decrease=10, min_steps=2)

    results = tf.estimator.train_and_evaluate(
        classifier,
        train_spec=tf.estimator.TrainSpec(input_fn=lambda: iris_data.train_input_fn(train_x, train_y, args.batch_size), max_steps=2000, hooks=[early_stopping]),
        eval_spec=tf.estimator.EvalSpec(input_fn=lambda: iris_data.eval_input_fn(val_x, val_y, args.batch_size), throttle_secs=2, start_delay_secs=2), 
    )

    # Generate predictions from the model
    predictions = classifier.predict(
        input_fn=lambda: iris_data.eval_input_fn(test_x, test_y,
                                                batch_size=args.batch_size))

    # Remains to evaluate predictions on test_x...

if __name__ == '__main__':
    tf.logging.set_verbosity(tf.logging.INFO)
    tf.app.run(main)

The code for loading the iris data is:

import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split

TRAIN_URL = "http://download.tensorflow.org/data/iris_training.csv"
TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"

CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']

def maybe_download():
    train_path = tf.keras.utils.get_file(TRAIN_URL.split('/')[-1], TRAIN_URL)
    test_path = tf.keras.utils.get_file(TEST_URL.split('/')[-1], TEST_URL)
    return train_path, test_path

def load_data(y_name='Species'):
    """Returns the iris dataset as (train_x, train_y), (val_x, val_y), (test_x, test_y)."""
    train_path, test_path = maybe_download()

    train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
    aux_x, aux_y = train, train.pop(y_name)

    # Split aux_x, aux_y into training / validation data.
    train_x, val_x, train_y, val_y = train_test_split(aux_x, aux_y, test_size=0.5)

    test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
    test_x, test_y = test, test.pop(y_name)

    return (train_x, train_y), (val_x, val_y), (test_x, test_y)

def train_input_fn(features, labels, batch_size):
    """An input function for training."""
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))

    # Shuffle, repeat, and batch the examples.
    dataset = dataset.shuffle(1000).repeat().batch(batch_size)

    return dataset

def eval_input_fn(features, labels, batch_size):
    """An input function for evaluation or prediction."""
    features = dict(features)
    if labels is None:
        # No labels, use only features.
        inputs = features
    else:
        inputs = (features, labels)

    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices(inputs)

    # Batch the examples.
    assert batch_size is not None, "batch_size must not be None"
    dataset = dataset.batch(batch_size)

    return dataset

Given this example, my questions are:

  • How do I monitor the accuracy on the training and validation data (train_x and val_x)? As far as I understand, if I want both printed to the console while the model is training, I should use a LoggingTensorHook, with one hook for the training accuracy and another for the validation accuracy... If I write

logging_hook = tf.train.LoggingTensorHook({"loss": loss, "accuracy": accuracy[1]}, every_n_iter=10)

and add training_hooks=[logging_hook] to the tf.estimator.EstimatorSpec returned by my_model, that lets me monitor the training accuracy every 10 steps while training. However, I am not sure how to obtain the validation accuracy this way. Should I define another logging hook in the same manner for the case where the mode is EVAL?
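To make the question concrete, here is a minimal sketch of what I imagine for the EVAL branch of my_model (the evaluation_hooks field of tf.estimator.EstimatorSpec does exist in TF 1.x, but I am not sure this is the intended approach):

    if mode == tf.estimator.ModeKeys.EVAL:
        # Sketch: print the (running) validation metrics while evaluation runs.
        eval_logging_hook = tf.train.LoggingTensorHook(
            {"val_loss": loss, "val_accuracy": accuracy[1]}, every_n_iter=10)
        return tf.estimator.EstimatorSpec(
            mode, loss=loss, eval_metric_ops=metrics,
            evaluation_hooks=[eval_logging_hook])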

  • How do I define an early-stopping rule based on this validation metric (computed during training)?

Again, as far as I understand, I should use tf.estimator.train_and_evaluate together with tf.contrib.estimator.stop_if_no_decrease_hook, like this:

early_stopping = stop_if_no_decrease_hook(ldcrf_classifier, metric_name='loss', max_steps_without_decrease=10, min_steps=2)

But then, how am I supposed to define the metric this hook monitors, so that it is the validation metric computed during training?
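For completeness, here is a sketch of how I would expect the pieces to fit together (assuming TF 1.10+, where the hook lives in tf.contrib.estimator; stop_if_no_increase_hook is the counterpart for metrics that should grow, such as accuracy; the threshold values are arbitrary):

early_stopping = tf.contrib.estimator.stop_if_no_increase_hook(
    classifier,
    metric_name='accuracy',           # must match a key in eval_metric_ops
    max_steps_without_increase=1000,  # stop once accuracy stalls this long
    min_steps=100)                    # never stop before 100 training steps

tf.estimator.train_and_evaluate(
    classifier,
    train_spec=tf.estimator.TrainSpec(
        input_fn=lambda: iris_data.train_input_fn(train_x, train_y, 100),
        max_steps=2000,
        hooks=[early_stopping]),
    eval_spec=tf.estimator.EvalSpec(
        input_fn=lambda: iris_data.eval_input_fn(val_x, val_y, 100),
        throttle_secs=2, start_delay_secs=2))

As far as I can tell, the hook reads metric_name from the evaluation event files written under the estimator's model_dir, so it should see the metrics computed on the validation set passed to EvalSpec.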

0 answers