LSTM cell after convolutions

Date: 2017-10-22 12:43:39

Tags: python tensorflow lstm convolution

I need to implement an LSTM layer after two convolutional layers. Here is the code after the first convolution:

convo_2 = convolutional_layer(convo_1_pooling, shape=[5, 5, 32, 64])
convo_2_pooling = max_pool_2by2(convo_2)
convo_2_flat = tf.reshape(convo_2_pooling, shape=[-1, 64 * 50 * 25])
cell = rnn.LSTMCell(num_units=100, activation=tf.nn.relu)
cell = rnn.OutputProjectionWrapper(cell, output_size=7)
conv_to_rnn = int(convo_2_flat.get_shape()[1])
outputs, states = tf.nn.dynamic_rnn(cell, convo_2_flat, dtype=tf.float32)

I get this error on the last line:

ValueError: Shape (?, 50, 64) must have rank 2

I have to specify the time steps for the convo_2_flat variable, right? How? I really don't know how to do that.
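For context: tf.nn.dynamic_rnn expects a rank-3 input of shape [batch_size, max_time, input_size] (with the default time_major=False), so the flat rank-2 tensor has to be split into time steps first. A minimal toy sketch of that requirement, with made-up sizes:

import tensorflow as tf

FEATURES, TIME_STEPS = 20, 5  # made-up toy sizes
flat = tf.placeholder(tf.float32, [None, FEATURES])               # rank 2: [batch, features]
seq = tf.reshape(flat, [-1, TIME_STEPS, FEATURES // TIME_STEPS])  # rank 3: [batch, time, input_size]

cell = tf.nn.rnn_cell.LSTMCell(num_units=8)
outputs, state = tf.nn.dynamic_rnn(cell, seq, dtype=tf.float32)   # the rank-3 input is accepted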

EDIT:
After the reshape:

convo_2_flat = tf.reshape(convo_2_flat, shape=[-1, N_TIME_STEPS, INPUT_SIZE])

where

N_TIME_STEPS = 25
INPUT_SIZE = int(64 * 50 * 25 / N_TIME_STEPS)

I get this error: InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[5000,7] labels_size=[50,7] on this line:

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=outputs))

It seems to me that the batch size has changed after the last reshape.
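A sketch of the shape arithmetic that would explain this mismatch (the numbers below are illustrative, not read from the actual graph): with OutputProjectionWrapper(cell, output_size=7), dynamic_rnn emits one 7-way projection per time step, so feeding the whole outputs tensor to the loss yields batch * time_steps logit rows against only batch label rows:

import tensorflow as tf

BATCH, TIME_STEPS, N_CLASSES = 50, 25, 7            # illustrative sizes
outputs = tf.zeros([BATCH, TIME_STEPS, N_CLASSES])  # stand-in for dynamic_rnn's output

all_steps = tf.reshape(outputs, [-1, N_CLASSES])    # [1250, 7]: batch * time_steps logit rows
last_step = outputs[:, -1, :]                       # [50, 7]: one logit row per example, matching the labels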

EDIT 2:
Is the code below wrong?

convo_2_shape = convo_2_pooling.get_shape().as_list()
shape_convo_flat = convo_2_shape[1] * convo_2_shape[2] * convo_2_shape[3]
N_TIME_STEPS = convo_2_shape[1]
INPUT_SIZE = tf.cast(shape_convo_flat / N_TIME_STEPS, tf.int32)
convo_2_out = tf.reshape(convo_2_pooling, shape=[-1, shape_convo_flat])
convo_2_out = tf.reshape(convo_2_out, shape=[-1, N_TIME_STEPS, INPUT_SIZE])

I set N_TIME_STEPS that way because otherwise I would end up with a float INPUT_SIZE and TF would throw an error.
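A small variant of the snippet above that avoids the float entirely: get_shape().as_list() returns plain Python ints, so integer division with // keeps INPUT_SIZE an int and no tf.cast is needed (assuming the flattened size divides evenly by N_TIME_STEPS):

convo_2_shape = convo_2_pooling.get_shape().as_list()
shape_convo_flat = convo_2_shape[1] * convo_2_shape[2] * convo_2_shape[3]
N_TIME_STEPS = convo_2_shape[1]
INPUT_SIZE = shape_convo_flat // N_TIME_STEPS   # // yields a Python int, not a float
convo_2_out = tf.reshape(convo_2_pooling, shape=[-1, N_TIME_STEPS, INPUT_SIZE])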

1 Answer:

Answer 0 (score: 3)

According to the TensorFlow documentation (https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn), the input should have the following shape (I am using the defaults here, i.e. time_major=False):

[BATCH_SIZE, N_TIME_STEPS, INPUT_SIZE]. Therefore, you can reshape convo_2_flat as follows:

# get the shape of the output of max pooling
shape = convo_2_pooling.get_shape().as_list()
# flatten accordingly
convo_2_flat = tf.reshape(convo_2_pooling, [-1, shape[1] * shape[2] * shape[3]])

# Here shape[1] * shape[2] * shape[3] = N_TIME_STEPS * INPUT_SIZE

#reshape according to dynamic_rnn input
convo_2_flat = tf.reshape(convo_2_flat, shape=[-1, N_TIME_STEPS, INPUT_SIZE])

outputs, states = tf.nn.dynamic_rnn(cell, convo_2_flat, dtype=tf.float32)

# get the output of the last time step
val = tf.transpose(outputs, [1, 0, 2])
lstm_last_output = val[-1]
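# Note: with the default time_major=False layout, an equivalent way to take
# the last time step is simply:
#   lstm_last_output = outputs[:, -1, :]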

OUTPUT_SIZE = 7  # as you defined in cell = rnn.OutputProjectionWrapper(cell, output_size=7)

W = {
        'output': tf.Variable(tf.random_normal([OUTPUT_SIZE, N_CLASSES]))
    }
biases = {
        'output': tf.Variable(tf.random_normal([N_CLASSES]))
    }

# Dense layer
pred_Y = tf.matmul(lstm_last_output, W['output']) + biases['output']
# Softmax layer (for predictions)
pred_softmax = tf.nn.softmax(pred_Y)

# Note: softmax_cross_entropy_with_logits applies softmax internally, so it must
# be given the raw logits pred_Y (passing pred_softmax would apply softmax twice)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=pred_Y))

A note on the outputs:

According to the documentation, the output of dynamic_rnn has the following shape:


[BATCH_SIZE, N_TIME_STEPS, OUTPUT_SIZE]. So there is one output per time step. In the code above, I take only the output of the last time step. Alternatively, you can consider a different architecture for the RNN outputs, described here (How do we use LSTM to classify sequences?).
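For instance, one option discussed there is to average the outputs over all time steps instead of keeping only the last one; a minimal sketch, assuming the same outputs tensor as above:

# average over the time axis: [BATCH_SIZE, N_TIME_STEPS, OUTPUT_SIZE] -> [BATCH_SIZE, OUTPUT_SIZE]
lstm_mean_output = tf.reduce_mean(outputs, axis=1)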

Hope this helps.