Shape error from tf.nn.dynamic_rnn in a seq2seq model

Posted: 2017-09-26 05:48:29

Tags: python tensorflow

I am trying to write my own basic seq2seq classifier using tf.nn.dynamic_rnn; the code is shown below. However, there seems to be a problem with the shape of the tensors being fed into the graph built around tf.nn.dynamic_rnn. I am rolling my own because TensorFlow's documentation on seq2seq is scattered all over the place.

Running:

import numpy as np
source_batch = np.random.randint(x_letters, size=[batch_size, x_seq_length])
target_batch = np.random.randint(y_letters, size=[batch_size, y_seq_length+1])

sess.run(tf.global_variables_initializer())
loss = sess.run([loss],
            feed_dict = {inputs: source_batch, 
                         outputs: target_batch[:, :-1], 
                         targets: target_batch[:, 1:]})

gives me the error: ValueError: Cannot feed value of shape (128, 10) for Tensor 'decoding/rnn/transpose:0', which has shape '(128, 10, 32)'

The graph is defined as follows:

import tensorflow as tf

x_seq_length = 29
y_seq_length = 10

x_letters = 60
y_letters = 13

epochs = 2
batch_size = 128
nodes = 32
embed_size = 10

####################
# Tensorflow Graph
####################
tf.reset_default_graph()
sess = tf.InteractiveSession()

# Placeholders: encoder inputs, decoder inputs (teacher forcing), decoder targets
inputs = tf.placeholder(tf.int32, (batch_size, x_seq_length), 'inputs')
outputs = tf.placeholder(tf.int32, (batch_size, y_seq_length), 'output')
targets = tf.placeholder(tf.int32, (batch_size, y_seq_length), 'targets')

input_embedding = tf.Variable(tf.random_uniform((x_letters, embed_size), -1, 1), name='enc_embedding')
output_embedding = tf.Variable(tf.random_uniform((y_letters, embed_size), -1, 1), name='dec_embedding')

date_input_embed = tf.nn.embedding_lookup(input_embedding, inputs)
date_output_embed = tf.nn.embedding_lookup(output_embedding, outputs)

with tf.variable_scope("encoding") as encoding_scope:
    lstm_enc = tf.contrib.rnn.BasicLSTMCell(nodes)
    _, last_state = tf.nn.dynamic_rnn(lstm_enc, dtype=tf.float32, inputs=date_input_embed)

with tf.variable_scope("decoding") as decoding_scope:
    lstm_dec = tf.contrib.rnn.BasicLSTMCell(nodes)
    outputs, _ = tf.nn.dynamic_rnn(lstm_dec, inputs=date_output_embed, initial_state=last_state)

logits = tf.contrib.layers.fully_connected(outputs, num_outputs=y_letters, activation_fn=None) 

with tf.name_scope("optimization"):
    loss = tf.contrib.seq2seq.sequence_loss(logits, targets, tf.ones([batch_size, y_seq_length]))
    optimizer = tf.train.AdamOptimizer().minimize(loss)

1 Answer:

Answer 0 (score: 2)

You have two variables named outputs: the placeholder and the decoder output returned by tf.nn.dynamic_rnn. The second assignment shadows the first, so the outputs key in your feed_dict refers to the decoder's output tensor 'decoding/rnn/transpose:0' with shape (128, 10, 32), not the (128, 10) placeholder, which is exactly what the ValueError reports. Rename one of the two variables.
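For example, a minimal sketch of that rename (dec_outputs is an arbitrary new name, not from the original code):

with tf.variable_scope("decoding") as decoding_scope:
    lstm_dec = tf.contrib.rnn.BasicLSTMCell(nodes)
    # dec_outputs no longer shadows the `outputs` placeholder
    dec_outputs, _ = tf.nn.dynamic_rnn(lstm_dec, inputs=date_output_embed, initial_state=last_state)

logits = tf.contrib.layers.fully_connected(dec_outputs, num_outputs=y_letters, activation_fn=None)

With the shadowing removed, outputs in the feed_dict resolves to the (128, 10) int32 placeholder as intended.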