Building batches with multi-label sequences for an LSTM in TensorFlow

Asked: 2016-12-12 10:46:43

Tags: tensorflow sequence dimensions lstm

I am very new to NNs and I am trying to solve the following problem with Python and TensorFlow:

I have sequences of 4 time steps, where each time step contains 3 inputs

Begin sequence

  • 1,1,1
  • 2,2,2-
  • -3,3,3-
  • 4,4,4

End sequence

  • 5,5,5
  • 6,6,6-
  • 7,7,7
  • 8,8,8

My labels then also contain 4 time steps, but each label is only 2 outputs long:

Begin sequence

  • 1,1
  • 2,2
  • 3,3
  • 4,4

End sequence

  • 5,5
  • 6,6
  • 7,7
  • 8,8

In general, I want to predict a sequence of multiple outputs from a sequence of multiple inputs, e.g. for the input sequence:

  • 9,9,9
  • 10,10,10
  • 11,11,11
  • 12,12,12

Output:

  • ?,?
  • ?,?
  • ?,?
  • ?,?

I have tried many tutorials online, but I always end up with shape errors. I know the problem lies in the dimensions the LSTM cell expects: most of the examples seem to allow only a single label dimension in the end. I also do not know how to batch this data structure correctly in my batching function.
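To make the layout concrete, here is how I picture the data as numpy arrays (a minimal sketch of my own, assuming the usual (batch_size, timesteps, features) layout):

import numpy as np

# Two sequences, each with 4 time steps of 3 inputs and 2 label outputs
# (assuming the conventional (batch_size, timesteps, features) layout)
inputs = np.array([
    [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]],
    [[5, 5, 5], [6, 6, 6], [7, 7, 7], [8, 8, 8]],
], dtype=np.float32)

labels = np.array([
    [[1, 1], [2, 2], [3, 3], [4, 4]],
    [[5, 5], [6, 6], [7, 7], [8, 8]],
], dtype=np.float32)

print(inputs.shape)   # (2, 4, 3)
print(labels.shape)   # (2, 4, 2)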

Some code snippets:

num_samples = 8  # how many data samples to load into the tensor
num_inputs = 3
num_labels = 2
timesteps = 4

x = tf.placeholder("float", [None, num_samples, num_inputs, 1])
y = tf.placeholder("float", [None, num_labels])

There is a "1" in the "x" placeholder; that is the only structure it accepts. But I would like to have something like this:

x = tf.placeholder("float", [None, num_samples, batch, num_inputs])
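If I understand the convention correctly, the batch size is usually the leading None dimension rather than a separate axis, so (my assumption, untested) the placeholders could instead look like:

x = tf.placeholder("float", [None, timesteps, num_inputs])   # input sequences
y = tf.placeholder("float", [None, timesteps, num_labels])   # one label vector per time step
seqlen = tf.placeholder(tf.int32, [None])                    # true length of each sequence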

My TF-specific code snippet:

def dynamicRNN(x, seqlen, weights, biases):
    # Permute batch_size and timesteps:
    # (batch_size, timesteps, num_inputs) -> (timesteps, batch_size, num_inputs)
    x = tf.transpose(x, [1, 0, 2])
    # Reshaping to (timesteps * batch_size, num_inputs)
    x = tf.reshape(x, [-1, num_inputs])
    # Split to get a list of 'timesteps' tensors of shape (batch_size, num_inputs)
    x = tf.split(0, timesteps, x)

    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, state_is_tuple=True)

    outputs, states = tf.nn.rnn(lstm_cell, x, dtype=tf.float32,
                                sequence_length=seqlen)

    # Stack the per-timestep outputs and swap back to
    # (batch_size, timesteps, n_hidden)
    outputs = tf.pack(outputs)
    outputs = tf.transpose(outputs, [1, 0, 2])

    # Hack to build the indexing and retrieve the right output.
    batch_size = tf.shape(outputs)[0]
    # Start indices for each sample
    index = tf.range(0, batch_size) * seq_max_len + (seqlen - 1)
    # Indexing
    outputs = tf.gather(tf.reshape(outputs, [-1, n_hidden]), index)

    # Linear activation, using outputs computed above
    return tf.matmul(outputs, weights['out']) + biases['out']

pred = dynamicRNN(x, seqlen, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
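Since every time step has 2 real-valued outputs rather than one class, I suspect the "last output" indexing and the softmax loss are the wrong fit, and that I instead need a linear layer applied at every time step plus a regression loss. A rough sketch of what I mean (untested; outputs is the per-timestep list returned by tf.nn.rnn before any indexing, and weights['out'] would then have shape (n_hidden, num_labels)):

# Apply the same linear layer to every time step's output
step_preds = [tf.matmul(o, weights['out']) + biases['out']   # (batch, num_labels)
              for o in outputs]
pred = tf.transpose(tf.pack(step_preds), [1, 0, 2])          # (batch, timesteps, num_labels)

# Mean squared error against y of shape (batch, timesteps, num_labels),
# since the targets are real values and not class indices
cost = tf.reduce_mean(tf.square(pred - y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

I assume the argmax accuracy above would then also have to be replaced by some regression metric, since there are no classes.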

The code for the training part:

while step * batch_size < training_iters:
    batch_x, batch_y, batch_seqlen = trainset.next(batch_size)
    # Run optimization op (backprop)
    sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                   seqlen: batch_seqlen})
    if step % display_step == 0:
        # Calculate batch accuracy
        acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y,
                                            seqlen: batch_seqlen})
        # Calculate batch loss
        loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y,
                                         seqlen: batch_seqlen})
        print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
              "{:.6f}".format(loss) + ", Training Accuracy= " + \
              "{:.5f}".format(acc))
    step += 1
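For the batching itself, this is roughly what I imagine trainset.next(batch_size) would have to return (my own sketch; next_batch, data, targets and lengths are hypothetical names matching the shapes above):

import numpy as np

# Hypothetical batching helper (names are made up):
# data    : (num_samples, timesteps, num_inputs)
# targets : (num_samples, timesteps, num_labels)
# lengths : (num_samples,) true length of every sequence
def next_batch(data, targets, lengths, batch_size):
    idx = np.random.randint(0, len(data), size=batch_size)
    return data[idx], targets[idx], lengths[idx]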

Does anyone have a simple code snippet for my problem?

Good luck

0 Answers:

There are no answers