Running a biLSTM model gets "floating point error 8" when there are some zeros in the input vectors

Date: 2018-08-12 21:10:54

Tags: python tensorflow deep-learning lstm

I do the padding step myself, so 'debate', 'reason', 'claim', and 'warrant' contain some zero values. Feeding them into the BiLSTM architecture produces "floating point error 8" with no further message.

This error means that some number was divided by zero or an index went out of range.

But nothing in the model should be dividing by zero.
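
(For reference, a minimal sketch of the kind of zero padding described above, assuming each field is a variable-length list of 300-dimensional word vectors padded to 48 timesteps; pad_to_length and debate_sequences are hypothetical names, not part of the original code.)

import numpy as np

def pad_to_length(vectors, max_len=48, dim=300):
    # pad a variable-length list of 300-d word vectors with all-zero rows up to max_len timesteps
    padded = np.zeros((max_len, dim), dtype=np.float32)
    n = min(len(vectors), max_len)
    if n > 0:
        padded[:n] = np.asarray(vectors[:n], dtype=np.float32)
    return padded

# e.g. build a batch for one placeholder (debate_sequences is a hypothetical list of such lists)
# debate_batch = np.stack([pad_to_length(seq) for seq in debate_sequences])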

The code is as follows:

import tensorflow as tf

debate = tf.placeholder(tf.float32, [None, 48, 300])
reason = tf.placeholder(tf.float32, [None, 48, 300])
claim = tf.placeholder(tf.float32, [None, 48, 300])
warrant = tf.placeholder(tf.float32, [None, 48, 300])
y = tf.placeholder(tf.float32, [None, 2])

n_hidden = 300

# weight_variable and bias_variable are helper functions whose definitions are not shown in the question
w = weight_variable([n_hidden, 2])
b = bias_variable([2])
def bilstm(x, weights, biases):
    lstm_f = tf.contrib.rnn.LSTMCell(n_hidden, forget_bias=1.0)
    lstm_b = tf.contrib.rnn.LSTMCell(n_hidden, forget_bias=1.0)

    # output_states is a pair (state_fw, state_bw); each element is an LSTMStateTuple (c, h)
    (all_outputs, output_states) = tf.nn.bidirectional_dynamic_rnn(lstm_f, lstm_b, x, dtype=tf.float32)
    (output_state_fw, output_state_bw) = output_states

    # sum the final hidden states of the forward and backward cells and project to the 2 classes
    return tf.matmul(tf.add(output_state_fw.h, output_state_bw.h), weights) + biases

final_representation = tf.concat([debate, reason, claim, warrant], 1)

prediction = bilstm(final_representation, w, b)

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))

train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
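
(For context, a minimal sketch of how this graph might be run; sess and the *_batch / y_batch arrays are hypothetical, with shapes matching the placeholders above.)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {
        debate: debate_batch,    # [batch, 48, 300], zero-padded
        reason: reason_batch,
        claim: claim_batch,
        warrant: warrant_batch,
        y: y_batch,              # [batch, 2], one-hot labels
    }
    _, loss = sess.run([train_step, cross_entropy], feed_dict=feed)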

When I feed the same input into a plain RNN architecture, it works. Here is the RNN code that runs:

def RNN(X, weights, biases):
    lstm_cell = tf.nn.rnn_cell.LSTMCell(n_hidden, use_peepholes=True)
    # final_state[0] is the cell state, final_state[1] is the hidden state
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, X, dtype=tf.float32)
    results = tf.nn.softmax(tf.matmul(final_state[1], weights) + biases)
    return results
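
(Presumably RNN is plugged into the same graph in place of bilstm, roughly as below; this call is not shown in the question and is only an assumption.)

# hypothetical usage, mirroring the biLSTM version above
prediction = RNN(final_representation, w, b)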

Can anyone tell me what is going on? Have I misunderstood the biLSTM model? There should not be any division by zero in the biLSTM!

Thanks.

0 Answers:

There are no answers yet.