Can I use a Python for loop in model_fn, Estimator, TensorFlow?

Time: 2018-07-04 15:25:41

Tags: python tensorflow deep-learning

I'm still new to the TensorFlow Estimator, and I'm trying to "process each frame of a video through a ConvNet, sum up the reconstruction losses, and then optimize the parameters".

So I'm wondering whether I can write a Python for loop in the Estimator's model_fn, so that I can process each frame of the video and then optimize them all together.

Thanks.

P.S. I've attached two snippets of my own implementation, and both work. It seems that the model_fn defined for the Estimator allows loops, even when I embed cnn_model inside model_fn.

Vanilla TensorFlow implementation:

import os, sys
import numpy as np
import tensorflow as tf

# Read in video dataset in [batch, frames, height, width]
raw_data = np.load(Data_root)
dataset = tf.data.Dataset.from_tensor_slices(raw_data)
dataset = dataset.batch(BATCH_SIZE)
iterator = dataset.make_initializable_iterator()
one_element = iterator.get_next()

# Placeholder for a whole clip; the per-frame loop runs at graph-construction
# time, so the summed loss stays a graph tensor that the optimizer can minimize
video = tf.placeholder('float32', [BATCH_SIZE, 5, IMAGE_HEIGHT, IMAGE_WIDTH])

loss_total = 0.0

for i in range(5):

    # Slice out frame i and add the channel dimension that conv2d expects
    frame = tf.expand_dims(video[:, i, :, :], axis=-1)

    # Define network; explicit layer names plus AUTO_REUSE share one set of
    # weights across the five frames
    with tf.variable_scope("network", reuse=tf.AUTO_REUSE):

        with tf.name_scope("Encoder"):
            conv1 = tf.layers.conv2d(frame, 32, [3,3], strides=2, padding='same', activation=tf.nn.relu, name='conv1')
            conv2 = tf.layers.conv2d(conv1, 64, [3,3], strides=2, padding='same', activation=tf.nn.relu, name='conv2')

        with tf.name_scope("Repeat_Layer"):
            # stride 1 here, so that the two stride-2 decoder layers restore the input size
            latent = tf.layers.conv2d(conv2, 64, [3,3], strides=1, padding='same', activation=tf.nn.relu, name='latent')

        with tf.name_scope("Decoder"):
            conv3 = tf.layers.conv2d_transpose(latent, 32, [3, 3], strides=2, padding='same', activation=tf.nn.relu, name='deconv1')
            conv4 = tf.layers.conv2d_transpose(conv3, 1, [3, 3], strides=2, padding='same', activation=tf.nn.relu, name='deconv2')

    prediction = tf.identity(conv4, name='prediction')

    # Accumulate the per-frame reconstruction loss
    loss_total += tf.losses.mean_squared_error(frame, prediction)

# Define optim
optimizer = tf.train.RMSPropOptimizer(0.001).minimize(loss_total)

# Init
init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())

with tf.Session() as sess:

    sess.run(init_op)
    sess.run(iterator.initializer)

    # Assume train_batch with shape [batch, 5, height, width]
    train_batch = sess.run(one_element)
    _, loss = sess.run([optimizer, loss_total], feed_dict={video: train_batch})

    print("Optimization is Finished!")

Estimator implementation:

import os, sys
import tensorflow as tf
import numpy as np

def cnn_model(input_feature):

    # Explicit layer names plus AUTO_REUSE share one set of weights across
    # the five calls in model_fn; the graph still shows five copies of the ops
    with tf.variable_scope("AutoEncoder", reuse=tf.AUTO_REUSE):

        with tf.name_scope("Encoder"):
            conv1 = tf.layers.conv2d(input_feature, 32, [3,3], strides=2, padding='same', activation=tf.nn.relu, name='conv1')
            conv2 = tf.layers.conv2d(conv1, 64, [3,3], strides=2, padding='same', activation=tf.nn.relu, name='conv2')

        with tf.name_scope("Repeat_Layer"):
            # stride 1, so that the two stride-2 decoder layers restore the input size
            latent = tf.layers.conv2d(conv2, 64, [3,3], strides=1, padding='same', activation=tf.nn.relu, name='latent')

        with tf.name_scope("Decoder"):
            conv3 = tf.layers.conv2d_transpose(latent, 32, [3, 3], strides=2, padding='same', activation=tf.nn.relu, name='deconv1')
            conv4 = tf.layers.conv2d_transpose(conv3, 1, [3, 3], strides=2, padding='same', activation=tf.nn.relu, name='deconv2')

    return conv4

def model_fn(features, labels, mode):

    # Assume each video contains five frames; numpy_input_fn delivers the
    # array as a dict under the key "x"
    input_feature = tf.reshape(features["x"], [-1, 5, height, width])
    loss_total = 0.0

    for i in range(5):

        # Slice out frame i and add the channel dimension conv2d expects
        input_layer = tf.expand_dims(input_feature[:, i, :, :], axis=-1)

        prediction = cnn_model(input_layer)

        # Per-frame reconstruction loss against the matching label frame
        frame_label = tf.expand_dims(labels[:, i, :, :], axis=-1)
        loss_mse = tf.losses.mean_squared_error(labels=frame_label, predictions=prediction)

        loss_total += loss_mse

    if mode == tf.estimator.ModeKeys.TRAIN:

        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(
            loss=loss_total,
            global_step=tf.train.get_global_step()
        )

        return tf.estimator.EstimatorSpec(mode=mode, loss=loss_total, train_op=train_op)

    # Other modes only need the summed loss
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss_total)

def main(unused_argv):

    # Load video data [batch, 5, height, width]
    data_path = '/xxx/train.npy'
    train_data = np.load(data_path)

    # Set up Estimator
    AutoEncoder = tf.estimator.Estimator(
        model_fn=model_fn, model_dir=None
    )

    # Set up input_fn pipeline
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x":train_data},
        y=train_data,
        batch_size=10,
        num_epochs=100,
        shuffle=True
    )

    # Start train
    AutoEncoder.train(
        input_fn=train_input_fn,
        steps=15000
    )

if __name__ == "__main__":

    with tf.device("/gpu:0"):
        tf.app.run()

1 answer:

Answer 0 (score: 0)

I'll answer my own question here.

The answer is yes: we can use a Python for loop inside model_fn under the Estimator framework, as the second code snippet above shows.

If you run the second snippet, then since the Estimator framework generates TensorBoard logging by itself, you can easily inspect the graph structure with "tensorboard --logdir=path_to_model". You will see that the AutoEncoder module appears five times and that the losses are added up five times, which confirms my guess.
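
For the graph to be easy to find in TensorBoard, the Estimator needs a concrete log directory; a minimal sketch ('path_to_model' here is a placeholder path, not something from the question):

# With model_dir=None, as in the snippet above, TensorFlow writes the graph
# and summaries to a temporary directory; a fixed path makes them easy to find
AutoEncoder = tf.estimator.Estimator(
    model_fn=model_fn, model_dir='path_to_model'
)

# Afterwards, in a shell:
#   tensorboard --logdir=path_to_model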

To some extent this is a non-trivial question: with a for loop we can feed in any kind of sequential data, process it, and then optimize the model as a whole. For example, I can process each frame of a video, compute the reconstruction losses, and then backpropagate the overall loss.
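
As a quick sanity check of what such a loop actually builds, listing the trainable variables shows whether the iterations share one set of weights or silently created several copies. Below is a minimal self-contained sketch, with a hypothetical tiny_model standing in for cnn_model:

import tensorflow as tf

def tiny_model(x):
    # An explicit layer name plus AUTO_REUSE shares the weights across calls
    with tf.variable_scope("tiny", reuse=tf.AUTO_REUSE):
        return tf.layers.dense(x, 1, name="dense")

x = tf.placeholder('float32', [None, 4])
loss_total = 0.0

for i in range(5):
    loss_total += tf.reduce_mean(tf.square(tiny_model(x)))

# Prints tiny/dense/kernel:0 and tiny/dense/bias:0 exactly once; without the
# reuse, the list would contain five copies of each
for v in tf.trainable_variables():
    print(v.name, v.shape)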