Initial states stay zero in a bidirectional LSTM

Time: 2016-09-11 06:41:02

Tags: variables tensorflow bidirectional

I defined the initial states, but when I print them they are still zero!

Here is my code:


    def BiRNN(x, weights, biases):
        # some x shaping
        lstm_fw_cell = rnn_cell.GRUCell(n_hidden)
        lstm_bw_cell = rnn_cell.GRUCell(n_hidden)
        init_state_fw = lstm_fw_cell.zero_state(batch_size, tf.float32)
        init_state_bw = lstm_bw_cell.zero_state(batch_size, tf.float32)
        outputs, fstate, bstate = rnn.bidirectional_rnn(
            lstm_fw_cell, lstm_bw_cell, x,
            dtype=tf.float32,
            initial_state_fw=init_state_fw,
            initial_state_bw=init_state_bw)
        return [tf.matmul(outputs[-1], weights['out']) + biases['out'],
                init_state_fw]

    pred = BiRNN(x, weights, biases)

    cost = tf.reduce_mean(tf.pow(pred[0] - y, 2))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)


    init = tf.initialize_all_variables()

    saver = tf.train.Saver()

    # Launch the graph
    with tf.Session() as sess:
        sess.run(init)
        step = 1
        # Keep training until reach max iterations
        while step * batch_size < training_iters:
            batch_x = example3[step % n_samples]
            batch_y = label3[step % n_samples]
            batch_x = batch_x.reshape((batch_size, n_steps, n_input))
            batch_y = batch_y.reshape((batch_size, n_steps, n_classes))

            # Run optimization op (backprop)
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
            w = sess.run(pred[1])
            print(w)
            step += 1

I want to retrieve the initial states later.

I have a dataset that needs two different initial states, and I want to train those initial states separately.
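For context on the printed zeros: `zero_state(batch_size, dtype)` builds a constant all-zeros tensor, not a variable, so evaluating it will always print zeros and nothing about it can be trained. A trainable initial state has to be a variable instead. The core idea can be sketched outside TensorFlow; below is a minimal NumPy toy (the scalar RNN, the weights `w` and `u`, the inputs `xs`, and `target` are all made-up illustration values) that treats the initial state `h0` as a learned parameter and fits it by gradient descent:

```python
import numpy as np

# Tiny scalar RNN: h_t = tanh(w * x_t + u * h_{t-1}).
# The initial state h0 is a trainable parameter here,
# unlike zero_state, which is a fixed all-zeros tensor.
def run(h0, xs, w=0.5, u=0.8):
    h = h0
    for x in xs:
        h = np.tanh(w * x + u * h)
    return h

xs = [1.0, -0.5, 0.25]   # made-up input sequence
target = 0.4             # made-up regression target

def loss(h0):
    return (run(h0, xs) - target) ** 2

h0 = 0.0                 # starts at zero, like zero_state
lr, eps = 0.5, 1e-6
for _ in range(1000):
    # numerical gradient of the loss w.r.t. h0
    g = (loss(h0 + eps) - loss(h0 - eps)) / (2 * eps)
    h0 -= lr * g

print(h0)  # the learned initial state is no longer zero
```

In TensorFlow terms, the analogous change would be to create the initial state as a variable (e.g. with `tf.Variable` of shape `[batch_size, n_hidden]`) and pass that as `initial_state_fw`/`initial_state_bw`, so the optimizer can update it along with the other weights; two such variables would give the two separately trainable initial states described above.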

0 Answers:

No answers