How to get the dense layer output of an LSTM in TensorFlow?

Time: 2019-03-11 18:19:44

Tags: python tensorflow deep-learning lstm

I am modeling an LSTM with a single dense layer in TensorFlow. What I want to accomplish is to get the dense layer output, i.e. the hidden representation of the input, from the LSTM. I have checked that a similar approach is available in Keras, but how is it done in TensorFlow? I am attaching my code for the problem below (adapted from LSTM on sequential data, predicting a discrete column):

# clear graph (if any) before running
tf.reset_default_graph()

X = tf.placeholder(tf.float32, [None, time_steps, inputs], name = "Inputs")
y = tf.placeholder(tf.float32, [None, outputs], name = "Outputs")

# LSTM Cell
cell = tf.contrib.rnn.BasicLSTMCell(num_units=neurons, activation=tf.nn.relu)
cell_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

# pass into Dense layer
stacked_outputs = tf.reshape(cell_outputs, [-1, neurons])
out = tf.layers.dense(inputs=stacked_outputs, units=outputs)

# softmax cross-entropy loss for classification
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=y, logits=out))

# optimizer to minimize cost
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)


with tf.Session() as sess:
    # initialize all variables
    tf.global_variables_initializer().run()
    tf.local_variables_initializer().run()

    # Train the model
    for steps in range(epochs):
        mini_batch = zip(range(0, length, batch_size),
                         range(batch_size, length + 1, batch_size))

        # train data in mini-batches
        for (start, end) in mini_batch:
            sess.run(training_op, feed_dict={X: X_train[start:end, :, :],
                                             y: y_train[start:end, :]})

        # print training performance
        if (steps + 1) % display == 0:
            # evaluate loss function on training set
            loss_fn = loss.eval(feed_dict={X: X_train, y: y_train})
            print('Step: {}  \tTraining loss: {}'.format((steps + 1), loss_fn))

The code I have attached is specific to the training set, but I assume the procedure would be very similar when feeding a dictionary for the test set. Is there a one-liner / short snippet that returns the dense layer output (the hidden representation of the input data)? Any help in this regard is highly appreciated.
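(For reference, the Keras approach mentioned above is presumably along these lines: build a helper Model whose output is the intermediate layer. The model, layer name, and test array below are purely illustrative, not part of the question's code.)

from tensorflow import keras

# illustrative Keras model: an LSTM followed by a single dense layer
inp = keras.layers.Input(shape=(time_steps, inputs))
lstm_out = keras.layers.LSTM(neurons)(inp)
dense_out = keras.layers.Dense(outputs, name='dense_out')(lstm_out)
model = keras.Model(inputs=inp, outputs=dense_out)

# helper model that exposes the dense layer's activations
feature_extractor = keras.Model(inputs=model.input,
                                outputs=model.get_layer('dense_out').output)
hidden_repr = feature_extractor.predict(X_test)  # X_test is a hypothetical test array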

1 Answer:

Answer 0 (score: 1)

When you are inside the Session context manager, this is the shortest way: out_vals = out.eval({X: X_train})

It is equivalent to this: out_vals = sess.run(out, feed_dict={X: X_train})

You don't need to feed the labels for the forward pass (if you are only evaluating the dense layer).
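For completeness, a minimal sketch of how this could look for held-out data, assuming a hypothetical X_test array with the same shape as X_train ([num_samples, time_steps, inputs]) and that the call happens inside the same Session in which the variables were trained:

# inside the `with tf.Session() as sess:` block, after training
# dense-layer output (hidden representation) for the test data
dense_repr = out.eval(feed_dict={X: X_test})

# or fetch the raw LSTM outputs and the dense output in one pass
lstm_repr, dense_repr = sess.run([cell_outputs, out],
                                 feed_dict={X: X_test})

Note that, because of the tf.reshape in the question's code, out has shape [num_samples * time_steps, outputs].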