Internal variables in BasicRNNCell

Date: 2017-12-25 02:23:37

Tags: python tensorflow rnn

I have the following sample code to test BasicRNNCell. I want to get its internal matrices so that I can compute the values of output_res and newstate_res with my own code and make sure I can reproduce them.

In the TensorFlow source code, it says output = new_state = act(W * input + U * state + B). Does anyone know how to get W and U? (I tried to access cell._kernel, but it is not available.)

$ cat ./main.py
#!/usr/bin/env python
# vim: set noexpandtab tabstop=2 shiftwidth=2 softtabstop=-1 fileencoding=utf-8:

import tensorflow as tf
import numpy as np

batch_size = 4
vector_size = 3

inputs = tf.placeholder(
        tf.float32
        , [batch_size, vector_size]
        )

num_units = 2
state = tf.zeros([batch_size, num_units], tf.float32)

cell = tf.contrib.rnn.BasicRNNCell(num_units=num_units)
output, newstate = cell(inputs = inputs, state = state)

X = np.zeros([batch_size, vector_size])
#X = np.ones([batch_size, vector_size])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    output_res, newstate_res = sess.run([output, newstate], feed_dict = {inputs: X})
    print(output_res)
    print(newstate_res)

$ ./main.py
[[ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]]
[[ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]]

1 Answer:

Answer (score: 4):

Short answer: You're after cell._kernel. Here is some code that gets the kernel (and the bias) using the variables property, which exists on most TensorFlow RNN cells:

import tensorflow as tf
import numpy as np

batch_size = 4
vector_size = 3
inputs = tf.placeholder(tf.float32, [batch_size, vector_size])

num_units = 2
state = tf.zeros([batch_size, num_units], tf.float32)

cell = tf.contrib.rnn.BasicRNNCell(num_units=num_units)
output, newstate = cell(inputs=inputs, state=state)

print("Output of cell.variables is a list of Tensors:")
print(cell.variables)
kernel, bias = cell.variables

X = np.zeros([batch_size, vector_size])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    output_, newstate_, k_, b_ = sess.run(
        [output, newstate, kernel, bias], feed_dict = {inputs: X})
    print("Output:")
    print(output_)
    print("New State == Output:")
    print(newstate_)
    print("\nKernel:")
    print(k_)
    print("\nBias:")
    print(b_)

Output:

Output of cell.variables is a list of Tensors:
[<tf.Variable 'basic_rnn_cell/kernel:0' shape=(5, 2) dtype=float32_ref>, 
<tf.Variable 'basic_rnn_cell/bias:0' shape=(2,) dtype=float32_ref>]
Output:
[[ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]]
New State == Output:
[[ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]]

Kernel:
[[ 0.41417515 -0.64997244]
 [-0.40868729 -0.90995187]
 [ 0.62134564 -0.88962835]
 [-0.35878009 -0.25680023]
 [ 0.35606658 -0.83596271]]

Bias:
[ 0.  0.]

Long answer: You also asked how to get W and U. Let me copy the implementation of call and discuss where W and U are.

def call(self, inputs, state):
    """Most basic RNN: output = new_state = act(W * input + U * state + B)."""
    gate_inputs = math_ops.matmul(
        array_ops.concat([inputs, state], 1), self._kernel)
    gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)
    output = self._activation(gate_inputs)
    return output, output

It doesn't look like there is a W and a U in there, but they are there. Essentially, the first vector_size rows of the kernel are W, and the next num_units rows of the kernel are U. Maybe it helps to see the elementwise math in LaTeX:

$$\text{output}_{m,j} = \text{act}\!\Big(\sum_{i=1}^{v} X_{m,i}\,W_{i,j} + \sum_{i=1}^{n} S_{m,i}\,U_{i,j} + B_j\Big), \qquad K = \begin{bmatrix} W \\ U \end{bmatrix}$$

Here m is a generic batch index, v is vector_size, n is num_units, b is batch_size, and [ ; ] denotes concatenation. Since TensorFlow is batch-major, implementations typically right-multiply by the weight matrices.
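
To make the right-multiplication concrete, here is a small numpy sketch (the names X, S, and K and the random values are purely illustrative; the shapes match v = 3, n = 2, b = 4 from the example) showing that concatenating [X ; S] and right-multiplying by the stacked kernel K is the same as X·W + S·U:

import numpy as np

b, v, n = 4, 3, 2              # batch_size, vector_size, num_units
X = np.random.randn(b, v)      # batch of inputs
S = np.random.randn(b, n)      # batch of states
K = np.random.randn(v + n, n)  # kernel as TensorFlow stores it, shape (v + n, n)

W = K[:v, :]  # first vector_size rows of the kernel
U = K[v:, :]  # remaining num_units rows of the kernel

lhs = np.concatenate([X, S], axis=1).dot(K)  # what call() computes
rhs = X.dot(W) + S.dot(U)                    # the W/U form from the docstring
print(np.allclose(lhs, rhs))                 # True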

Since this is a very basic RNN, output == new_state: the "history" passed to the next iteration is simply the current iteration's output.
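
Putting it together, here is a sketch of how one could reproduce output_res outside TensorFlow; it assumes the default tanh activation of BasicRNNCell and reuses X, k_, b_, and output_ from the session above:

W = k_[:vector_size, :]  # rows 0 .. vector_size-1 of the kernel are W
U = k_[vector_size:, :]  # the remaining num_units rows are U
S = np.zeros([batch_size, num_units])  # the zero initial state fed to the cell

# output = new_state = act(W * input + U * state + B), with act = tanh by default
manual_output = np.tanh(X.dot(W) + S.dot(U) + b_)
print(np.allclose(manual_output, output_))  # True: the cell's output is reproduced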