Tensorflow dynamic RNN - shapes

Date: 2018-05-19 13:41:39

Tags: tensorflow rnn

Hello fellow programmers!

I have several frames of a video, and I want my RNN to have as many layers as I have frames, so that I can feed one frame to each layer.

Notes:
Frame shape = 224, 224, 3 (but I flatten it)
Frames per video = 20 = number of inner layers
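The flattening step above can be sketched in plain NumPy (a minimal illustration with dummy data, not code from the question): each 224x224x3 frame becomes one feature vector, and a clip of 20 frames becomes one row of the batch.

```python
import numpy as np

# Hypothetical example: one video clip of 20 RGB frames (224x224x3).
timesteps = 20
frames = np.zeros((timesteps, 224, 224, 3), dtype=np.float32)

# Flatten every frame: (20, 224, 224, 3) -> (20, 150528)
flat = frames.reshape(timesteps, -1)

# Add a batch dimension to match a placeholder of shape (None, 20, 150528)
batch = flat[np.newaxis, ...]
print(batch.shape)  # (1, 20, 150528)
```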

This is what I have so far:

timesteps = 20
inner_layer_size = 100
output_layer_size = 2

sdev = 0.1

inputs = 224 * 224 * 3

x = tf.placeholder(tf.float32, shape=(None, timesteps, inputs), name="x")
y = tf.placeholder(tf.int32, shape=(None), name="y")

# Compute the layers
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=inner_layer_size)
outputs, state = tf.nn.dynamic_rnn(cell=lstm_cell, dtype=tf.float32, inputs=x)

Wz = tf.get_variable(name="Wz", shape=(inner_layer_size, output_layer_size),
                         initializer=tf.truncated_normal_initializer(stddev=sdev))
bz = tf.get_variable(name="bz", shape=(1, output_layer_size),
                         initializer=tf.constant_initializer(0.0))

logits = tf.matmul(state, Wz) + bz
prediction = tf.nn.softmax(logits)

I know that this is not really the way I want it. If you look at the first picture here, it is clear that each layer's input is a single frame, not all of them.

My question now is how to change this, and how do I then need to adjust my 'Wz' and 'bz'? Thanks for taking the time :)

1 Answer:

Answer 0: (score: 0)

The problem is that you are passing the LSTM's state to the dense layer instead of the outputs (note that for an LSTMCell, state is an LSTMStateTuple of (c, h), not a single tensor).

The outputs in your case will be [None, 20, 100], i.e. (batch, time steps, units). You need to split it across time steps and then pass each step to the dense layer. This can be done with the following code:

# LSTM output
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=inner_layer_size)
outputs, state = tf.nn.dynamic_rnn(cell=lstm_cell, dtype=tf.float32, inputs=x)

# Split the outputs across time steps into a list of [None, 1, 100] tensors.
lstm_sequence = tf.split(outputs, timesteps, axis=1)

# Dense layer applied at each time step; weights are shared via variable scope reuse.
def dense(inputs, reuse=False):
    with tf.variable_scope('MLP', reuse=reuse):
        Wz = tf.get_variable(name="Wz", shape=(inner_layer_size, output_layer_size),
                             initializer=tf.truncated_normal_initializer(stddev=sdev))
        bz = tf.get_variable(name="bz", shape=(1, output_layer_size),
                             initializer=tf.constant_initializer(0.0))

        logits = tf.matmul(inputs, Wz) + bz
        prediction = tf.nn.softmax(logits)
        return prediction

# Pass each time step's output of the LSTM to the dense layer.
# The layer reuses the same weights for every step.
out = []
for i, frame in enumerate(lstm_sequence):
    if i == 0:
        out.append(dense(tf.reshape(frame, [-1, inner_layer_size])))
    else:
        out.append(dense(tf.reshape(frame, [-1, inner_layer_size]), reuse=True))
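The result is a Python list with one prediction tensor per time step. The same split, apply-shared-dense, stack pattern can be sketched in plain NumPy to check the shapes involved (all names and sizes here are illustrative, not part of the answer's graph):

```python
import numpy as np

batch, timesteps, inner, out_size = 4, 20, 100, 2
# Stand-in for the LSTM outputs tensor of shape (batch, timesteps, units)
outputs = np.random.randn(batch, timesteps, inner).astype(np.float32)

# Shared dense weights, applied identically at every time step
Wz = np.random.randn(inner, out_size).astype(np.float32)
bz = np.zeros((1, out_size), dtype=np.float32)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Split across time, apply the dense layer per step, then stack back
preds = [softmax(outputs[:, t, :] @ Wz + bz) for t in range(timesteps)]
stacked = np.stack(preds, axis=1)
print(stacked.shape)  # (4, 20, 2)
```

In the TensorFlow graph above, the equivalent of the final `np.stack` would be stacking the `out` list along axis 1 to recover a per-frame prediction tensor.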