How to calculate AUC and generate ROC curves for RNN and LSTM models with TensorFlow?

Date: 2018-04-27 08:34:34

Tags: python tensorflow deep-learning lstm rnn

I am running RNN and LSTM models using the custom, pre-defined function trainDNN below:
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected
import h5py
import time
from sklearn.utils import shuffle
def trainDNN(path, n_days, n_features, n_neurons, 
            train_sequences, train_lengths, train_y,
            test_sequences, test_y, test_lengths,
            lstm=False, n_epochs=50, batch_size=256,
            learning_rate=0.0003, TRAIN_REC=8, TEST_REC=8):
    # we're doing binary classification
    n_outputs = 2

    # this is the initial learning rate
    # adam optimizer decays the learning rate automatically
#     learning_rate = 0.0001
    #learning rate decay is determined by epsilon
    epsilon = 0.001

    # setup the graph
    tf.reset_default_graph()

    # inputs to the network
    X = tf.placeholder(tf.float32, [None, n_days, n_features])
    y = tf.placeholder(tf.int32, [None])
    seq_length = tf.placeholder(tf.int32, [None])

    # the network itself
    cell = tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons) if lstm else tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
    outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32, sequence_length=seq_length)
    logits = fully_connected(states[-1] if lstm else states, n_outputs)

    # the training process (minimize loss) including the training operation itself
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, epsilon=epsilon)
    training_op = optimizer.minimize(loss)

    # hold onto the accuracy for the logwriter
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

    # this saves the network for later querying
    # currently only saves after all epochs are complete
    # but we could for example save checkpoints on a
    # regular basis
    saver = tf.train.Saver()

    # this is where we save the log files for tensorboard
    now = int(time.time())
    name = 'lstm' if lstm else 'rnn'
    root_logdir = path+"tensorflow_logs/{}/{}-{}/".format(name.upper(), name, now)
    train_logdir = "{}train".format(root_logdir)
    eval_logdir = "{}eval".format(root_logdir)
    print('train_logdir', train_logdir)
    print('eval_logdir', eval_logdir)

    # scalars that are written to the log files
    loss_summary = tf.summary.scalar('loss', loss)
    acc_summary = tf.summary.scalar('accuracy', accuracy)

    # summary operation and writer for the training data
    train_summary_op = tf.summary.merge([loss_summary, acc_summary])
    train_writer = tf.summary.FileWriter(train_logdir, tf.get_default_graph())
    # summary operation and writer for the validation data
    eval_summary_op = tf.summary.merge([loss_summary, acc_summary])
    eval_writer = tf.summary.FileWriter(eval_logdir, tf.get_default_graph())

    # initialize variables
    init = tf.global_variables_initializer()
    n_batches = len(train_sequences) // batch_size
    print(n_batches, 'batches of size', batch_size, n_epochs, 'epochs,', n_neurons, 'neurons')

    with tf.Session() as sess:
        # actually run the initialization
        init.run()
        start_time = time.time()
        for epoch in range(n_epochs):
            # at the beginning of each epoch, shuffle the training data
            train_sequences, train_y, train_lengths = shuffle(train_sequences, train_y, train_lengths)
            for iteration in range(n_batches):

                # extract the batch of training data for this iteration
                start = iteration*batch_size
                end = start+batch_size
                X_batch = train_sequences[start:end]
                y_batch = train_y[start:end]
                y_batch = y_batch.ravel()
                seq_length_batch = train_lengths[start:end]

                # every TRAIN_REC steps, save a summary of training accuracy & loss
                if iteration % TRAIN_REC == 0:
                    train_summary_str = train_summary_op.eval(
                        feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
                    )
                    step = epoch * n_batches + iteration
                    train_writer.add_summary(train_summary_str, step)
                    # without this flush, tensorboard isn't always current
                    train_writer.flush()

                # every TEST_REC steps, save a summary of validation accuracy & loss
                # TODO: this runs all validation data at once. if validation is
                # sufficiently large, this will fail. better would be to either
                # pick a random subset of validation data, or even better, run
                # validation in multiple batches and save the validation accuracy 
                # & loss based on the aggregation of all of the validation batches.
                if iteration % TEST_REC == 0:
                    summary_str = eval_summary_op.eval(
                        feed_dict = {X: test_sequences, y: test_y.ravel(), seq_length: test_lengths}
                    )
                    step = epoch * n_batches + iteration
                    eval_writer.add_summary(summary_str, step)
                    # without this flush, tensorboard isn't always current
                    eval_writer.flush()

                # run training.
                # this is where the network learns.
                sess.run(
                    training_op,
                    feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
                )

            # after every epoch, calculate the accuracy of the last seen training batch 
            acc_train = accuracy.eval(
                feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
            )
            # after each epoch, calculate the accuracy of the test data
            acc_test = accuracy.eval(
                feed_dict = {X: test_sequences, y: test_y.ravel(), seq_length: test_lengths}
            )

            # print the training & validation accuracy to the console
            print(epoch, time.strftime('%m/%d %H:%M:%S'), "Accuracy train:", acc_train, "test:", acc_test)


        # save the model (for more training or inference) after all
        # training is complete
        save_path = saver.save(sess, root_logdir+"model_final.ckpt")

        # close the writers
        train_writer.close()
        eval_writer.close()    
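        # note: log() and percent() are helper functions defined elsewhere in this project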
        log(["{}-{} model score".format(name.upper(), now), percent(acc_test)])

The function above trains RNN and LSTM models on time-series data and produces binary classification scores. The training and test accuracies are printed, but I would like to figure out how to calculate the AUC and generate ROC curves for the RNN and LSTM binary classifiers.

Update

I evaluated the logits and predictions with the following script:

n_epochs = 2
batch_size = 2000
n_batches = len(train_sequences) // batch_size
print(n_batches)
with tf.Session() as sess:
    init.run()
    #sess.run( tf.local_variables_initializer() )
    for epoch in range(n_epochs):
        train_sequences, train_y, train_lengths = shuffle(train_sequences, train_y, train_lengths)
        for iteration in range(n_batches):
            start = iteration*batch_size
            end = start+batch_size
            X_batch = train_sequences[start:end]
            y_batch = train_y[start:end]
            seq_length_batch = train_lengths[start:end]
            if iteration % 20 == 0:
                train_summary_str = train_summary_op.eval(
                    feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
                )
                step = epoch * n_batches + iteration
            if iteration % 200 == 0:
                summary_str = eval_summary_op.eval(
                    feed_dict = {X: test_sequences, y: test_y, seq_length: test_lengths}
                )
                step = epoch * n_batches + iteration
            sess.run(
                training_op,
                feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
            )

        acc_train = accuracy.eval(
            feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
        )
        acc_test = accuracy.eval(
            feed_dict = {X: test_sequences, y: test_y, seq_length: test_lengths}
        )
        probs = logits.eval(feed_dict = {X: test_sequences, y: test_y, seq_length: test_lengths})
        predictions = correct.eval(feed_dict = {logits:probs, y: test_y})
        print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)# "Manual score:", score)

This returns probs, which is basically a matrix with as many rows as there are test cases and 2 columns holding the score for each of the 2 binary classes. The predictions object holds whether each prediction was correct. I am skeptical, because scores coming out of the ReLU activation are not as intuitive as sigmoid scores: they are no longer based on the usual 0.5 cutoff between positive and negative predictions. Instead, the prediction simply goes to whichever class has the higher score. Is it really possible to generate an ROC curve from ReLU output?
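
My current plan (untested) is roughly the sketch below: normalize the two columns with a softmax so they behave like proper probabilities, then feed the positive-class column to scikit-learn's roc_curve. Treating column 1 as the positive class is my own assumption about how the labels are encoded.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def softmax(x):
    # normalize each row into probabilities that sum to 1
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# probs is the (n_test, 2) matrix evaluated from logits above;
# column 1 is taken as the positive-class score
pos_scores = softmax(probs)[:, 1]
fpr, tpr, thresholds = roc_curve(test_y.ravel(), pos_scores)
print('AUC:', roc_auc_score(test_y.ravel(), pos_scores))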

1 Answer:

Answer 0 (score: 2):

You can use tf.metrics.auc() for this purpose. Note that it needs one-hot encoded labels and predictions, and if you are trying to accumulate the AUC over multiple sess.run() calls you also need to run the update_op it returns; see the separate section below.

In your code, you would create y_one_hot with tf.one_hot(), and you can put all of this right after accuracy:

y_one_hot = tf.one_hot( y, n_outputs )
auc, auc_update_op = tf.metrics.auc( y_one_hot, logits )

Before you start your training loop, you need to initialize the local variables that auc creates, maybe right after init.run():

sess.run( tf.local_variables_initializer() )

Then, where you run accuracy, you also need to run auc, using sess.run() instead of .eval(), like this (untested):

# after every epoch, calculate the accuracy of the last seen training batch 
acc_train, auc_val = sess.run( [ accuracy, auc ],
    feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
)
# after each epoch, calculate the accuracy of the test data
acc_test, auc_val = sess.run( [ accuracy, auc ],
    feed_dict = {X: test_sequences, y: test_y.ravel(), seq_length: test_lengths}
)

Accumulating over multiple batches

If you do want to use the accumulating feature of tf.metrics.auc(), you also need to take care of resetting the accumulation once you want to start a new calculation. For that you need to collect the local variables it creates, so create the auc like this:

with tf.variable_scope( "AUC" ):
    auc, auc_update_op = tf.metrics.auc( predictions=y_pred, labels=y_true, curve = 'ROC' )
auc_variables = [ v for v in tf.local_variables() if v.name.startswith( "AUC" ) ]
auc_reset_op = tf.variables_initializer( auc_variables )

And when you are done with an accumulation, reset the internal variables of auc like this:

session.run( auc_reset_op )

You also need to make sure to run auc_update_op in every sess.run() where you want the AUC to accumulate:

# after every epoch, calculate the accuracy of the last seen training batch 
acc_train, auc_val, _ = sess.run( [ accuracy, auc, auc_update_op ],
    feed_dict = {X: X_batch, y: y_batch, seq_length: seq_length_batch}
)
session.run( auc_reset_op ) # maybe you want to do this here...
# after each epoch, calculate the accuracy of the test data
acc_test, auc_val, _ = sess.run( [ accuracy, auc, auc_update_op ],
    feed_dict = {X: test_sequences, y: test_y.ravel(), seq_length: test_lengths}
)
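
Note that tf.metrics.auc() only gives you the scalar area under the curve. If you also want to plot the ROC curve itself, one option (a rough sketch, assuming scikit-learn and matplotlib are installed) is to turn the test-set logits into probabilities with a softmax and hand the positive-class column to scikit-learn:

import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc as sk_auc  # aliased to avoid clashing with the auc tensor above

# inside the session, after training
test_probs = sess.run(
    tf.nn.softmax(logits),
    feed_dict = {X: test_sequences, y: test_y.ravel(), seq_length: test_lengths}
)
fpr, tpr, _ = roc_curve(test_y.ravel(), test_probs[:, 1])  # column 1 = positive class
plt.plot(fpr, tpr, label='ROC (AUC = {:.3f})'.format(sk_auc(fpr, tpr)))
plt.plot([0, 1], [0, 1], linestyle='--')  # chance line
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()
plt.show()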