Passing a symbolic theano.tensor to a compiled theano.function

Date: 2014-12-11 01:39:48

Tags: python theano

I am trying to refactor my code so that the architecture is easier to change. Currently, I construct a recurrent neural network as follows.

import numpy
import theano
import theano.tensor as T

# n, nin, nout: sizes of the hidden, input, and output layers (assumed defined)

# input (where first dimension is time)
x = T.matrix()
# target (where first dimension is time)
t = T.matrix()

# recurrent weights as a shared variable
W_hh = theano.shared(numpy.random.uniform(size=(n, n), low=-.01, high=.01))
# input to hidden layer weights
W_hx = theano.shared(numpy.random.uniform(size=(n, nin), low=-.01, high=.01))
# hidden to output layer weights
W_yh = theano.shared(numpy.random.uniform(size=(nout, n), low=-.01, high=.01))
# hidden layer bias weights
b_h = theano.shared(numpy.zeros((n)))
# output layer bias weights
b_y = theano.shared(numpy.zeros((nout)))
# initial hidden state of the RNN
h0 = theano.shared(numpy.zeros((n)))

# recurrent function
def step(x_t, h_tm1):
    h_t = T.nnet.sigmoid(T.dot(W_hx, x_t) + T.dot(W_hh, h_tm1) + b_h)
    y_t = T.nnet.sigmoid(T.dot(W_yh, h_t) + b_y)
    return h_t, y_t

# loop over the recurrent function for the entire sequence
[h, y], _ = theano.scan(step,
                        sequences=x,
                        outputs_info=[h0, None])

# predict function outputs y for a given x
predict = theano.function(inputs=[x,], outputs=y)

This works fine. But the problem with this implementation is that I have to hard-code the weights and make sure all the math is correct every time I change the architecture. Inspired by the Multilayer Perceptron tutorial, I tried to refactor my code by introducing a Layer class.

class Layer:
    def __init__(self, inputs=None, nins=None, nout=None, Ws=None, b=None, activation=T.tanh):
        """
        inputs:             an array of theano symbolic vectors
        activation:         the activation function for the hidden layer
        nins, nout, Ws, b:  either pass the dimensions of the inputs and outputs, or pass
                            the shared theano tensors for the weights and bias.
        """
        # None defaults instead of mutable ones: a shared default list
        # would leak weights between Layer instances
        inputs = inputs if inputs is not None else []
        nins = nins if nins is not None else []
        Ws = Ws if Ws is not None else []

        n = len(inputs)
        assert(n != 0)

        self.inputs = inputs
        self.activation = activation

        # create the shared weights if necessary
        if len(Ws) == 0:
            assert(len(nins) == n)
            assert(nout is not None)
            for i in range(n):
                input = inputs[i]
                nin = nins[i]
                W = theano.shared(
                    numpy.random.uniform(
                        size=(nout, nin),
                        low=-numpy.sqrt(6. / (nin + nout)),
                        high=numpy.sqrt(6. / (nin + nout))
                    ),
                )
                Ws.append(W)

        # create the shared biases if necessary
        if b is None:
            assert(nout is not None)
            b = theano.shared(numpy.zeros((nout,)))

        self.Ws = Ws
        self.b = b
        self.params = self.Ws + [b]
        self.weights = Ws

        linear = self.b
        for i in range(n):
            linear += T.dot(self.Ws[i], self.inputs[i])

        if self.activation:
            self.output = self.activation(linear)
        else:
            self.output = linear

This lets me write much cleaner, less error-prone RNN code, and makes it much easier to change the architecture.

# one step of the input
x = T.vector()
# the previous hidden layer
h_tm1 = T.vector()

# the input and the previous hidden state feed into the hidden layer
hiddenLayer = Layer(inputs=[x, h_tm1],
                    nins=[nin, n],
                    nout=n,
                    activation=T.nnet.sigmoid)

# the hidden layer vector
h = hiddenLayer.output

# the hidden layer output goes to the output
outputLayer = Layer(inputs=[h],
                    nins=[n],
                    nout=nout,
                    activation=T.nnet.sigmoid)

# the output layer vector
y = outputLayer.output

# recurrent function
step = theano.function(inputs=[x, h_tm1],
                       outputs=[h, y])

# next we need to scan over all steps for a given array of observations
# input (where first dimension is time)
Xs = T.matrix()
# initial hidden state of the RNN
h0 = theano.shared(numpy.zeros((n)))

# loop over the recurrent function for the entire sequence
[Hs, Ys], _ = theano.scan(step,
                          sequences=Xs,
                          outputs_info=[h0, None])

# predict function outputs y for a given x
predict = theano.function(inputs=[Xs,], outputs=Ys)

However, when I run the program I get an error:

TypeError: ('Bad input argument to theano function at index 0(0-based)', 'Expected an array-like object, but found a Variable: maybe you are trying to call a function on a (possibly shared) variable instead of a numeric array?')

The problem here is that the scan operation passes a symbolic variable (a subtensor of Xs) to the compiled step function.

The whole point of refactoring my code was so that I would not have to define all of the computation inside the step function. Now I am left with four symbolic variables (x, h_tm1, h, and y) that define the part of the computation graph I need to scan over with theano.scan. However, I am not sure how to do this, because the compiled step function cannot accept symbolic variables.

Below is a simplified version of what I am trying to do, based on the exponentiation example.

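(The snippet itself was lost; below is a minimal reconstruction of such a simplified attempt, based on the scan tutorial's exponentiation example and the prior_result, A, and next_result names the second answer refers to.)

k = T.iscalar("k")
A = T.vector("A")
prior_result = T.vector("prior_result")

# one step of the exponentiation: multiply the running product by A
next_result = prior_result * A

# compiling the step is the mistake: scan will call this compiled
# function with symbolic subtensors, raising the TypeError above
step = theano.function(inputs=[prior_result, A], outputs=next_result)

result, updates = theano.scan(step,
                              outputs_info=T.ones_like(A),
                              non_sequences=A,
                              n_steps=k)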

How do I resolve this error?

2 Answers:

Answer 0 (score: 0):

You basically cannot use a compiled Theano function as the step of a scan operation.

The way around this is to give your Layer class a method that returns a function which builds your computation tree; you can then use that function to compile the scan operation, as in the sketch below.
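A minimal sketch of that idea, assuming Layer is reworked so that __init__ only allocates the shared parameters and a separate call builds the symbolic graph (the names and layout here are illustrative, not the answerer's code):

class Layer:
    def __init__(self, nins, nout, activation=T.nnet.sigmoid):
        # shared weights and bias are allocated once, up front
        self.Ws = [theano.shared(numpy.random.uniform(
                       size=(nout, nin),
                       low=-numpy.sqrt(6. / (nin + nout)),
                       high=numpy.sqrt(6. / (nin + nout))))
                   for nin in nins]
        self.b = theano.shared(numpy.zeros((nout,)))
        self.activation = activation
        self.params = self.Ws + [self.b]

    def __call__(self, inputs):
        # build and return the symbolic expression for these inputs
        linear = self.b
        for W, inp in zip(self.Ws, inputs):
            linear = linear + T.dot(W, inp)
        return self.activation(linear) if self.activation else linear

hiddenLayer = Layer(nins=[nin, n], nout=n)
outputLayer = Layer(nins=[n], nout=nout)

# step builds graph nodes instead of calling a compiled function,
# so scan is free to hand it symbolic variables
def step(x_t, h_tm1):
    h_t = hiddenLayer([x_t, h_tm1])
    y_t = outputLayer([h_t])
    return h_t, y_t

[Hs, Ys], _ = theano.scan(step, sequences=Xs, outputs_info=[h0, None])
predict = theano.function(inputs=[Xs], outputs=Ys)

Because step only constructs graph nodes, nothing is compiled until the final theano.function call, which covers the whole unrolled sequence.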

Answer 1 (score: 0):

So the solution is to use theano.clone with the replace keyword argument. For example, in the exponentiation example, you can define the step function as follows:

def step(p, a):
    # swap scan's variables in for the original symbolic inputs by
    # cloning the already-built expression next_result
    replaces = {prior_result: p, A: a}
    n = theano.clone(next_result, replace=replaces)
    return n
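
Reusing k, A, prior_result, and next_result from the simplified example above, the cloned step can then be scanned over and compiled as usual (a sketch under those assumptions):

result, updates = theano.scan(step,
                              outputs_info=T.ones_like(A),
                              non_sequences=A,
                              n_steps=k)

# result[-1] is the elementwise A ** k
power = theano.function(inputs=[A, k], outputs=result[-1], updates=updates)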