TensorFlow "Adding duplicate key" when using an optimizer other than GradientDescentOptimizer

Date: 2017-02-04 21:50:24

Tags: tensorflow

My model runs fine when I use the plain GradientDescentOptimizer, but if I change it to any other optimizer (Adam, RMSProp, etc.) I get an InvalidArgumentError that looks like this:

InvalidArgumentError (see above for traceback): Adding duplicate key: Model/Learn/softmax_w//softmax_w/part_0/RMSProp
 [[Node: save/SaveV2 = SaveV2[dtypes=[DT_INT64, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, Model/Learn/global_step, Model/Learn/lr, Model/Learn/softmax_b, Model/Learn/softmax_w/part_0/read, Valid/Model/Learn/Model/Learn/softmax_w/part_0/RMSProp/read, Train/Model/Learn/Model/Learn/softmax_w/part_0/RMSProp/read, Valid/Model/Learn/Model/Learn/softmax_w/part_0/RMSProp_1/read, Train/Model/Learn/Model/Learn/softmax_w/part_0/RMSProp_1/read, Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/B, Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/W_0, Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/B, Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/W_0, Model/RNNPath/context_embedding_W, Model/RNNPath/context_embedding_b, Model/RNNPath/pron_lookup_shard_0, Model/RNNPath/pron_lookup_shard_1, Model/RNNPath/pron_lookup_shard_2, Model/RNNPath/pron_lookup_shard_3, Model/RNNPath/pron_lookup_shard_4, Model/RNNPath/pron_lookup_shard_5, Model/RNNPath/pron_lookup_shard_6, Model/RNNPath/pron_lookup_shard_7, Model/RNNPath/pronunciation_embedding_W, Model/RNNPath/pronunciation_embedding_b, Model/RNNPath/rnn_resize_b, Model/RNNPath/rnn_resize_w, Model/RNNPath/word_vector_shard_0, Model/RNNPath/word_vector_shard_1, Model/RNNPath/word_vector_shard_2, Model/RNNPath/word_vector_shard_3, Model/RNNPath/word_vector_shard_4, Model/RNNPath/word_vector_shard_5, Model/RNNPath/word_vector_shard_6, Model/RNNPath/word_vector_shard_7, Train/Model/Learn/Model/Learn/softmax_b/RMSProp, Train/Model/Learn/Model/Learn/softmax_b/RMSProp_1, Train/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/B/RMSProp, Train/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/B/RMSProp_1, Train/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/W_0/RMSProp, Train/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/W_0/RMSProp_1, Train/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/B/RMSProp, Train/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/B/RMSProp_1, Train/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/W_0/RMSProp, Train/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/W_0/RMSProp_1, Train/Model/Learn/Model/RNNPath/rnn_resize_b/RMSProp, Train/Model/Learn/Model/RNNPath/rnn_resize_b/RMSProp_1, Train/Model/Learn/Model/RNNPath/rnn_resize_w/RMSProp, Train/Model/Learn/Model/RNNPath/rnn_resize_w/RMSProp_1, Valid/Model/Learn/Model/Learn/softmax_b/RMSProp, Valid/Model/Learn/Model/Learn/softmax_b/RMSProp_1, Valid/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/B/RMSProp, Valid/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/B/RMSProp_1, Valid/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/W_0/RMSProp, Valid/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell0/LSTMCell/W_0/RMSProp_1, Valid/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/B/RMSProp, 
Valid/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/B/RMSProp_1, Valid/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/W_0/RMSProp, Valid/Model/Learn/Model/RNNPath/RNN/MultiRNNCell/Cell1/LSTMCell/W_0/RMSProp_1, Valid/Model/Learn/Model/RNNPath/rnn_resize_b/RMSProp, Valid/Model/Learn/Model/RNNPath/rnn_resize_b/RMSProp_1, Valid/Model/Learn/Model/RNNPath/rnn_resize_w/RMSProp, Valid/Model/Learn/Model/RNNPath/rnn_resize_w/RMSProp_1)]]
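For context, the only change I make is swapping out the optimizer line; everything else (shown further down) stays the same. A minimal sketch of the swap, using the same learning-rate tensor `self._lr`:

    # Replacing GradientDescentOptimizer is the only change that triggers the error:
    optimizer = tf.train.RMSPropOptimizer(self._lr)
    # or:
    # optimizer = tf.train.AdamOptimizer(self._lr)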

The problem seems to be related to partitioning the softmax layer: when I remove the partitioner the problem goes away (this is the case for both the variable-size and the fixed-size partitioners). The optimizer code looks like this:

        # Compute gradients over all trainable variables and apply them with
        # the optimizer; this works with GradientDescentOptimizer but fails
        # with optimizers that create slot variables (Adam, RMSProp, ...).
        tvars = tf.trainable_variables()
        optimizer = tf.train.GradientDescentOptimizer(self._lr)
        grads_and_vars = optimizer.compute_gradients(cost, tvars)
        _train_op = optimizer.apply_gradients(
            grads_and_vars,
            global_step=tf.contrib.framework.get_or_create_global_step())
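The softmax weights are partitioned roughly like the sketch below (a minimal sketch with illustrative sizes and shard counts, not the exact code). Optimizers such as RMSProp and Adam create slot variables for each partition, and those are the names that show up twice in the SaveV2 node above; removing the `partitioner` argument makes the error disappear:

    import tensorflow as tf

    vocab_size, hidden_size = 10000, 650  # illustrative sizes

    # Partitioned softmax weights; each shard gets its own RMSProp/Adam slots.
    softmax_w = tf.get_variable(
        "softmax_w", [hidden_size, vocab_size],
        partitioner=tf.fixed_size_partitioner(num_shards=2))
    # The variable-size partitioner behaves the same way:
    # partitioner=tf.variable_axis_size_partitioner(max_shard_bytes=2 << 20)
    softmax_b = tf.get_variable("softmax_b", [vocab_size])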

0 Answers:

No answers