How can I combine a CNN and an LSTM in TensorFlow?

Time: 2018-09-16 11:21:05

Tags: python tensorflow lstm

How do I correctly combine a CNN and an LSTM?

Here is my structure. Each data sample is a list of variable length, and every item in it has a fixed shape - (1, 9, 9, 1) in the code below. If a single sample has length N, the CNN returns a list of length N (the N items are passed through the CNN with the same weight (a single 3x3 filter) and bias variables), and that list should be the input to the LSTM, which iterates N times.

I want to take the final output of the LSTM, which has shape 2, and use it to classify into two classes.
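
To make the input concrete, here is a small sketch of what a single sample might look like when the N items are concatenated along the first axis (this is only an illustration with made-up NumPy data, not part of my actual pipeline):

import numpy as np

# hypothetical sample of length N = 2: two 9x9 frames with 1 channel,
# concatenated along the first axis -> shape (2, 9, 9, 1)
sample = np.random.rand(2, 9, 9, 1).astype(np.float32)
label = [[0, 1]]  # one one-hot label for the whole sequence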

import tensorflow as tf
from tensorflow.contrib import rnn

# tf Graph input
# Can I set x as a (variable-length) list of tensors, each with shape (1, 9, 9, 1)?
x = tf.placeholder("float", [None, None, None, None]) # ?
y = tf.placeholder("float", [None, 2])

w_conv = tf.Variable(tf.truncated_normal([3, 3, 1, 1], stddev = 0.1, dtype = tf.float32), name = "w")
b_conv = tf.Variable(tf.constant(0.1, dtype = tf.float32, shape = [1]), name = "b")

# To be simple, let batch_size = 1
# data1 - x[0].shape = x[1].shape = (1, 9, 9, 1)
# data2 - x[0].shape = x[1].shape = x[2].shape = (1, 9, 9, 1)

# conv+relu
x_2 = tf.nn.relu(tf.nn.conv2d(x, w_conv, [1, 1, 1, 1], "SAME") + b_conv)

# data1 - x_2[0].shape = x_2[1].shape = (1, 9, 9, 1)
# data2 - x_2[0].shape = x_2[1].shape = x_2[2].shape = (1, 9, 9, 1)

# maxpool
x_3 = tf.nn.max_pool(x_2, [1, 3, 3, 1], [1, 3, 3, 1], "SAME")

# data1 - x_3[0].shape = x_3[1].shape = (1, 3, 3, 1)
# data2 - x_3[0].shape = x_3[1].shape = x_3[2].shape = (1, 3, 3, 1)

# lstm
x_4 = tf.reshape(x_3, [1, -1, 9]) # [batch = 1, time = N, features = 9]?
lstm_cell = rnn.BasicLSTMCell(2)
stacked = rnn.MultiRNNCell([lstm_cell]) # MultiRNNCell expects a list of cells
outputs, states = tf.nn.dynamic_rnn(stacked, x_4, dtype = tf.float32) # x_4?
y_ = tf.matmul(outputs[:, -1, :], tf.Variable(tf.random_normal([2, 2]))) + tf.Variable(tf.random_normal([2])) # last time step -> logits

# loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=y_, labels=y))
optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-4).minimize(cost)

# Model evaluation
correct_pred = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run([optimizer], feed_dict = {x: ???, y: [[0, 1]]})
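
If I eventually need batches with more than one variable-length sequence, my understanding (this is only a sketch of the usual padding approach, not code from the attempt above, and the names frames / seq_len / labels are made up) is that every sequence gets padded to the longest length in the batch and tf.nn.dynamic_rnn is told the real lengths via sequence_length:

import tensorflow as tf
from tensorflow.contrib import rnn

# hypothetical batched input: B sequences, each padded to T frames of 9x9x1
frames = tf.placeholder(tf.float32, [None, None, 9, 9, 1], name = "frames")  # [B, T, 9, 9, 1]
seq_len = tf.placeholder(tf.int32, [None], name = "seq_len")                 # true length of each sequence
labels = tf.placeholder(tf.float32, [None, 2], name = "labels")

# fold time into the batch axis so the same conv weights see every frame
dyn = tf.shape(frames)
flat = tf.reshape(frames, [-1, 9, 9, 1])                                     # [B*T, 9, 9, 1]
w_conv = tf.Variable(tf.truncated_normal([3, 3, 1, 1], stddev = 0.1))
b_conv = tf.Variable(tf.constant(0.1, shape = [1]))
conv = tf.nn.relu(tf.nn.conv2d(flat, w_conv, [1, 1, 1, 1], "SAME") + b_conv)
pool = tf.nn.max_pool(conv, [1, 3, 3, 1], [1, 3, 3, 1], "SAME")              # [B*T, 3, 3, 1]

# unfold back to [B, T, 9]; padded steps are skipped thanks to sequence_length
features = tf.reshape(pool, tf.stack([dyn[0], dyn[1], 9]))
cell = rnn.BasicLSTMCell(2)
outputs, state = tf.nn.dynamic_rnn(cell, features, sequence_length = seq_len, dtype = tf.float32)

# state.h is the LSTM output at each sequence's last real step -> [B, 2]
logits = tf.layers.dense(state.h, 2)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits = logits, labels = labels))

The zero-padded frames still pass through the conv layer, but dynamic_rnn stops updating each sequence's state after seq_len steps, so the padding should not affect the final state.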

0 Answers:

There are no answers yet.