How to use tensorflow's Dataset API Iterator as the input of a (recurrent) neural network?

Asked: 2017-11-20 13:36:56

Tags: tensorflow rnn tensorflow-datasets

When using tensorflow's Dataset API Iterator, my goal is to define an RNN that operates on the iterator's get_next() tensor as its input (see (1) in the code).

However, simply defining the dynamic_rnn with get_next() as its input raises the error: ValueError: Initializer for variable rnn/basic_lstm_cell/kernel/ is from inside a control-flow construct, such as a loop or conditional. When creating a variable inside a loop or conditional, use a lambda as the initializer.

Now I know one workaround is to simply create a placeholder X for next_batch, then eval() the tensor (since you cannot pass the tensor itself) and feed the result in via feed_dict (see (2) in the code). However, if I understand correctly, this is not an efficient solution, because we first evaluate and then re-initialize the tensor.

Is there a way to either:

  1. define the dynamic_rnn directly on top of the Iterator's output; or
  2. somehow pass the existing get_next() tensor to the placeholder that serves as the dynamic_rnn's input?

Full working example; the (1) version is the one I would like to work, but it doesn't, while (2) is the workaround that does:
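import tensorflow as tf

from tensorflow.contrib.rnn import BasicLSTMCell
from tensorflow.python.data import Iterator

data = [[[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]]]
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.batch(2)
iterator = Iterator.from_structure(dataset.output_types,
                                   dataset.output_shapes)
next_batch = iterator.get_next()
iterator_init = iterator.make_initializer(dataset)

# (2): placeholder for the workaround
X = tf.placeholder(tf.float32, shape=(None, 3, 1))

cell = BasicLSTMCell(num_units=8)

# (1): what I would like; this line raises the ValueError above
# outputs, states = tf.nn.dynamic_rnn(cell, next_batch, dtype=tf.float32)

# (2): the workaround
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    sess.run(iterator_init)

    # (1):
    # o, s = sess.run([outputs, states])

    # (2): evaluate the batch, then feed it back through the placeholder
    o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})
    o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})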

(Using tensorflow 1.4.0, Python 3.6.)

Thanks a lot :)

1 Answer:

Answer 0 (score: 5):

It turns out this mysterious error is likely a bug in tensorflow; see https://github.com/tensorflow/tensorflow/issues/14729. More specifically, the error actually comes from feeding data of the wrong type: in the example above, the data array contains int32 values, whereas it should contain floats.

Instead of the ValueError: Initializer for variable rnn/basic_lstm_cell/kernel/ is from inside a control-flow construct error, tensorflow should return:
TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [int32, float32] that don't all match. (see the issue linked above).
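The mismatch originates inside the cell: BasicLSTMCell concatenates its input with the float32 hidden state before the internal matrix multiply, so an int32 batch trips the concat. A minimal standalone sketch that reproduces the same TypeError:

import tensorflow as tf

a = tf.constant([[1]], dtype=tf.int32)      # stands in for the int32 batch
b = tf.constant([[1.0]], dtype=tf.float32)  # stands in for the LSTM hidden state
tf.concat([a, b], axis=1)                   # raises the ConcatV2 TypeError above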

To fix this, simply change

data = [[[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]]]

to

data = np.array([[[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]]], dtype=np.float32)

Then the following code works fine:

import tensorflow as tf
import numpy as np

from tensorflow.contrib.rnn import BasicLSTMCell
from tensorflow.python.data import Iterator

data = np.array([[[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]]], dtype=np.float32)
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.batch(2)
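# A reinitializable iterator: defined by the dataset's types/shapes,
# then bound to `dataset` by make_initializer below.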
iterator = Iterator.from_structure(dataset.output_types,
                                   dataset.output_shapes)
next_batch = iterator.get_next()
iterator_init = iterator.make_initializer(dataset)

# (2): placeholder used by the feed_dict workaround
# X = tf.placeholder(tf.float32, shape=(None, 3, 1))

cell = BasicLSTMCell(num_units=8)

# (1): the RNN consumes the iterator's get_next() tensor directly
outputs, states = tf.nn.dynamic_rnn(cell, next_batch, dtype=tf.float32)

# (2): the placeholder-based workaround
# outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    sess.run(iterator_init)

    # (1): two runs pull the two batches produced by .batch(2)
    o, s = sess.run([outputs, states])
    o, s = sess.run([outputs, states])

    # (2): evaluate the batch first, then feed it back via the placeholder
    # o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})
    # o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})
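As a side note, the same dtype fix can also be pushed into the input pipeline itself, so the raw data can stay a plain int list; a minimal sketch of this variant using Dataset.map with tf.cast, assuming the rest of the graph above stays unchanged:

data = [[[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]]]  # plain ints
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.map(lambda x: tf.cast(x, tf.float32))  # cast each element to float32
dataset = dataset.batch(2)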