TensorFlow dynamic_rnn input for regression

Asked: 2017-07-02 13:40:53

Tags: python tensorflow regression

I've been trying to convert an existing TensorFlow sequence-to-sequence classifier into a regressor.

Currently I'm stuck on the input to tf.nn.dynamic_rnn(). According to the documentation and other answers, the input should be of shape (batch_size, sequence_length, input_size). However, my input data only has two dimensions: (sequence_length, batch_size).

The original solution used tf.nn.embedding_lookup() as an intermediate step before feeding the input to dynamic_rnn(). If I understand correctly, I believe I shouldn't need this step, since I'm working on a regression problem rather than a classification problem.

Do I need the embedding_lookup step? If so, why? If not, how can I feed encoder_inputs directly into dynamic_rnn()?

Here is a minimized example that generally works:

import numpy as np
import tensorflow as tf

tf.reset_default_graph()
sess = tf.InteractiveSession()

PAD = 0
EOS = 1
VOCAB_SIZE = 10 # Don't think I should need this for regression?
input_embedding_size = 20

encoder_hidden_units = 20
decoder_hidden_units = encoder_hidden_units

LENGTH_MIN = 3
LENGTH_MAX = 8
VOCAB_LOWER = 2
VOCAB_UPPER = VOCAB_SIZE
BATCH_SIZE = 10

def get_random_sequences():
    sequences = []
    for j in range(BATCH_SIZE):
        random_numbers = np.random.randint(3, 10, size=8)
        sequences.append(random_numbers)
    sequences = np.asarray(sequences).T
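    # sequences is now (LENGTH_MAX, BATCH_SIZE): time-major, matching the placeholders below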
    return sequences

def next_feed():
    batch = get_random_sequences()

    encoder_inputs_ = batch
    eos = np.ones(BATCH_SIZE)
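    # Targets get EOS appended, decoder inputs get EOS prepended; both end up (LENGTH_MAX + 1, BATCH_SIZE)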
    decoder_targets_ = np.hstack((batch.T, np.atleast_2d(eos).T)).T
    decoder_inputs_ = np.hstack((np.atleast_2d(eos).T, batch.T)).T

    #print(encoder_inputs_)
    #print(decoder_inputs_)

    return {
        encoder_inputs: encoder_inputs_,
        decoder_inputs: decoder_inputs_,
        decoder_targets: decoder_targets_,
    }

### "MAIN"

# Placeholders
encoder_inputs = tf.placeholder(shape=(LENGTH_MAX, BATCH_SIZE), dtype=tf.int32, name='encoder_inputs')
decoder_targets = tf.placeholder(shape=(LENGTH_MAX + 1, BATCH_SIZE), dtype=tf.int32, name='decoder_targets')
decoder_inputs = tf.placeholder(shape=(LENGTH_MAX + 1, BATCH_SIZE), dtype=tf.int32, name='decoder_inputs')

# Don't think I should need this for regression problems
embeddings = tf.Variable(tf.random_uniform([VOCAB_SIZE, input_embedding_size], -1.0, 1.0), dtype=tf.float32)
encoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, encoder_inputs)
decoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, decoder_inputs)

# Encoder RNN
encoder_cell = tf.contrib.rnn.LSTMCell(encoder_hidden_units)
encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn(
    encoder_cell, encoder_inputs_embedded, # Throws 'ValueError: Shape (8, 10) must have rank at least 3' if encoder_inputs is used
    dtype=tf.float32, time_major=True,
)

# Decoder RNN
decoder_cell = tf.contrib.rnn.LSTMCell(decoder_hidden_units)
decoder_outputs, decoder_final_state = tf.nn.dynamic_rnn(
    decoder_cell, decoder_inputs_embedded, 
    initial_state=encoder_final_state,
    dtype=tf.float32, time_major=True, scope="plain_decoder",
)
decoder_logits = tf.contrib.layers.linear(decoder_outputs, VOCAB_SIZE)
decoder_prediction = tf.argmax(decoder_logits, 2)

# Loss function
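# (note: squared error against one-hot targets, i.e. a regression-style loss applied to classification targets)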
loss = tf.reduce_mean(tf.squared_difference(decoder_logits, tf.one_hot(decoder_targets, depth=VOCAB_SIZE, dtype=tf.float32)))
train_op = tf.train.AdamOptimizer().minimize(loss)


sess.run(tf.global_variables_initializer())

max_batches = 5000
batches_in_epoch = 500

print('Starting train')
try:
    for batch in range(max_batches):
        feed = next_feed()
        _, l = sess.run([train_op, loss], feed)

        if batch == 0 or batch % batches_in_epoch == 0:
            print('batch {}'.format(batch))
            print('  minibatch loss: {}'.format(sess.run(loss, feed)))
            predict_ = sess.run(decoder_prediction, feed)
            for i, (inp, pred) in enumerate(zip(feed[encoder_inputs].T, predict_.T)):
                print('  sample {}:'.format(i + 1))
                print('    input     > {}'.format(inp))
                print('    predicted > {}'.format(pred))
                if i >= 2:
                    break
            print()
except KeyboardInterrupt:
    print('training interrupted')

I've already read similar questions on Stack Overflow, but I still find myself confused about how to solve this.

Edit: I should probably clarify that the code above runs fine; however, the truly desired output should mimic a noisy signal (e.g. text-to-speech), which is why I think I need continuous output values rather than words or letters.

1 Answer:

Answer 0 (score: 1)

If you're trying to predict continuous values, why can't you just reshape your input placeholder to the shape [BATCH, TIME_STEPS, 1] and add the extra dimension to your input via tf.expand_dims(input, 2)? That way your input will match the dimensions dynamic_rnn expects (actually, in your case, since you're using time_major=True, your input should be of shape [TIME_STEPS, BATCH, 1]).
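A rough sketch of what I mean, reusing the constants from your example (just my take, not tested against your full pipeline):

# Sketch: float inputs with an explicit feature axis instead of embeddings
import tensorflow as tf

LENGTH_MAX = 8
BATCH_SIZE = 10
encoder_hidden_units = 20

# Float placeholder in (time, batch) layout, as in the question
encoder_inputs = tf.placeholder(shape=(LENGTH_MAX, BATCH_SIZE), dtype=tf.float32, name='encoder_inputs')

# Add the feature dimension: (time, batch) -> (time, batch, 1)
encoder_inputs_expanded = tf.expand_dims(encoder_inputs, 2)

encoder_cell = tf.contrib.rnn.LSTMCell(encoder_hidden_units)
encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn(
    encoder_cell, encoder_inputs_expanded,
    dtype=tf.float32, time_major=True,
)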

I'd be curious to know how you handle the switch in output dimensionality from the cell size down to 1. Right now you have this line:

decoder_logits = tf.contrib.layers.linear(decoder_outputs, VOCAB_SIZE)

but since you're no longer doing classification, shouldn't that last argument just be 1? I asked a similar question here a few days ago but didn't get any answers. I do it that way (with 1), but I'm not sure it's appropriate (it seems to sort of work in practice, but not perfectly).
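For what it's worth, here is a sketch of how I do it myself (the float decoder_targets placeholder is my own assumption; I can't promise this is the right approach):

# Sketch: project each decoder output down to a single continuous value
# and train with a plain squared-error loss on float targets.
# decoder_outputs is the (time, batch, hidden) tensor from your decoder RNN.
decoder_targets = tf.placeholder(shape=(LENGTH_MAX + 1, BATCH_SIZE), dtype=tf.float32, name='decoder_targets')

decoder_predictions = tf.contrib.layers.linear(decoder_outputs, 1)  # (time, batch, 1)
decoder_predictions = tf.squeeze(decoder_predictions, axis=2)       # (time, batch)

loss = tf.reduce_mean(tf.squared_difference(decoder_predictions, decoder_targets))
train_op = tf.train.AdamOptimizer().minimize(loss)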