ValueError when creating a Siamese network with TensorFlow

Time: 2017-05-02 06:46:48

Tags: tensorflow deep-learning

I am trying to use a Siamese network to determine whether two inputs are the same. Here is a short summary of what a Siamese network is:

A Siamese network is a network made up of two identical neural networks with tied weights (the weights of the two networks are the same). Given two inputs X_1 and X_2, X_1 is fed to the first network and X_2 to the second. The outputs of the two networks are then combined to produce the answer to the question: are the two inputs similar or different?
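To make that concrete, here is a rough sketch of the idea (shared_net here stands for the tied-weight sub-network, and the absolute-difference comparison is just one common way to combine the outputs, not what my code below does):

out1 = shared_net(X_1)                     # both calls run the same network
out2 = shared_net(X_2)                     # with the same (tied) weights
distance = tf.reduce_sum(tf.abs(out1 - out2), axis=1)
similarity = tf.exp(-distance)             # 1.0 for identical outputs, decays with distance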

I created the following network using TensorFlow, but I am running into an error.

import tensorflow as tf

# n_words, embed_size, lstm_size, lstm_layers and batch_size are
# hyperparameters defined earlier in the script (not shown).
graph = tf.Graph()

# Add nodes to the graph
with graph.as_default():
    with tf.variable_scope('siamese_network') as scope:
        labels = tf.placeholder(tf.int32, [None, None], name='labels')
        keep_prob = tf.placeholder(tf.float32, name='question1_keep_prob')

        question1_inputs = tf.placeholder(tf.int32, [None, None], name='question1_inputs')

        question1_embedding = tf.get_variable(name='embedding', initializer=tf.random_uniform((n_words, embed_size), -1, 1))
        question1_embed = tf.nn.embedding_lookup(question1_embedding, question1_inputs)

        question1_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        question1_drop = tf.contrib.rnn.DropoutWrapper(question1_lstm, output_keep_prob=keep_prob)
        question1_multi_lstm = tf.contrib.rnn.MultiRNNCell([question1_drop] * lstm_layers)

        initial_state = question1_multi_lstm.zero_state(batch_size, tf.float32)

        question1_outputs, question1_final_state = tf.nn.dynamic_rnn(question1_multi_lstm, question1_embed, initial_state=initial_state, scope='question1_siamese')
        question1_predictions = tf.contrib.layers.fully_connected(question1_outputs[:, -1], 1, activation_fn=tf.sigmoid)

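        # After this call, tf.get_variable() inside this scope returns the
        # existing variables instead of creating new ones.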
        scope.reuse_variables()

        question2_inputs = tf.placeholder(tf.int32, [None, None], name='question2_inputs')

        question2_embedding = tf.get_variable(name='embedding', initializer=tf.random_uniform((n_words, embed_size), -1, 1))
        question2_embed = tf.nn.embedding_lookup(question2_embedding, question2_inputs)

        question2_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        question2_drop = tf.contrib.rnn.DropoutWrapper(question2_lstm, output_keep_prob=keep_prob)
        question2_multi_lstm = tf.contrib.rnn.MultiRNNCell([question2_drop] * lstm_layers)

        question2_outputs, question2_final_state = tf.nn.dynamic_rnn(question2_multi_lstm, question2_embed, initial_state=initial_state)
        question2_predictions = tf.contrib.layers.fully_connected(question2_outputs[:, -1], 1, activation_fn=tf.sigmoid)

I am getting the following error at this line:

question2_outputs, question2_final_state = tf.nn.dynamic_rnn(question2_multi_lstm, question2_embed, initial_state=initial_state)

Here is the error:

ValueError: Variable siamese_network/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/weights does not exist, 
or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

The problem was in the following line:

question1_outputs, question1_final_state = tf.nn.dynamic_rnn(question1_multi_lstm, question1_embed, initial_state=initial_state, scope='question1_siamese')

I just had to remove the scope argument and it runs fine.
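For completeness, a minimal sketch of that change (only the first dynamic_rnn call is touched; everything else stays as in the listing above):

# Before: the LSTM weights are created under the extra 'question1_siamese' scope.
# question1_outputs, question1_final_state = tf.nn.dynamic_rnn(
#     question1_multi_lstm, question1_embed, initial_state=initial_state,
#     scope='question1_siamese')

# After: the weights are created under the default 'rnn' scope, which is where
# the second, reused call looks them up.
question1_outputs, question1_final_state = tf.nn.dynamic_rnn(
    question1_multi_lstm, question1_embed, initial_state=initial_state)

With both calls sharing the default scope, the second dynamic_rnn finds siamese_network/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/weights and reuses it, which is exactly the weight sharing a Siamese network needs.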

1 Answer:

Answer 0 (score: 2)

When you call

scope.reuse_variables()

you are telling TensorFlow that the variables used from that point on have already been declared and should be reused. However, your Siamese network shares some but not all of its variables; more precisely, question2_outputs, question2_final_state and question2_predictions are specific to your second network and do not reuse weights.

In your current code, since everything is written out inline, you do not actually need to call reuse_variables at all; you could simply write

question2_embedding = question1_embedding

and you would be fine. reuse_variables becomes useful once you start encapsulating the common network in a function. You could then write something like

with tf.variable_scope('siamese_common') as scope:
  net1 = siamese_common(question1_input)
  scope.reuse_variables()
  net2 = siamese_common(question2_input)

to obtain the common part that feeds into the respective outputs of the first and second networks.
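One possible sketch of such a siamese_common helper, assuming it simply wraps the embedding + LSTM stack from the question (and reuses the question's n_words, embed_size, lstm_size, lstm_layers, batch_size and keep_prob), is:

def siamese_common(inputs):
    # Every tf.get_variable() call below resolves inside the enclosing
    # 'siamese_common' variable scope, so the second invocation (made after
    # scope.reuse_variables()) picks up the same weights instead of new ones.
    embedding = tf.get_variable(
        name='embedding',
        initializer=tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs)

    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    multi_lstm = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)

    outputs, final_state = tf.nn.dynamic_rnn(
        multi_lstm, embed,
        initial_state=multi_lstm.zero_state(batch_size, tf.float32))

    # Hand back the last time step of each sequence as this branch's output.
    return outputs[:, -1]

net1 and net2 can then be compared, for example with a distance measure or a small fully connected layer, to produce the final similarity prediction.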