TypeError: ('update target must be a SharedVariable', Elemwise{mul,no_inplace}.0)

Time: 2015-10-12 10:50:48

Tags: deep-learning

I am trying to implement a sparse RBM, and I am using dropout for it, but I always end up with this error: TypeError: ('update target must be a SharedVariable', Elemwise{mul,no_inplace}.0). My dropout implementation looks like this:

    for n_ins, n_out in weight_matrix_sizes[:-1]:
        next_layer = HiddenLayer(numpy_rng=numpy_rng,
                                 input=next_layer_input,
                                 activation=activations[layer_counter],
                                 n_ins=n_ins, n_out=n_out,
                                 use_bias=use_bias)
        self.layers.append(next_layer)
        next_layer_input = next_layer.output

        # Reuse the parameters of the layer above in a different path
        # through the graph, scaling the weight matrix W and the bias b
        # with (1 - p).
        next_dropout_layer = DropoutHiddenLayer(
            numpy_rng=numpy_rng,
            input=next_dropout_layer_input,
            activation=activations[layer_counter],
            n_ins=n_ins, n_out=n_out, use_bias=use_bias,
            W=next_layer.W * (1 - dropout_rates[layer_counter]),
            b=next_layer.b * (1 - dropout_rates[layer_counter]),
            dropout_rate=dropout_rates[layer_counter + 1])
        self.dropout_layers.append(next_dropout_layer)
        next_dropout_layer_input = next_dropout_layer.output

        layer_counter += 1

        # Construct an RBM that shares weights with this layer.
        rbm_layer = RBM(numpy_rng=numpy_rng,
                        theano_rng=theano_rng,
                        input=next_dropout_layer_input,
                        n_visible=n_ins,
                        n_hidden=n_out,
                        W=next_dropout_layer.W)
        self.rbm_layers.append(rbm_layer)
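
Here is a minimal, self-contained sketch (toy names, not my actual layer classes) of what I think is going on: the keys of the updates dictionary given to theano.function must be shared variables, while an expression such as next_layer.W * (1 - p) is only an Elemwise{mul,no_inplace} node:

    import numpy
    import theano
    import theano.tensor as T

    # W is a true shared variable -- a valid update target.
    W = theano.shared(numpy.zeros((3, 3), dtype=theano.config.floatX), name='W')

    # W_scaled is a symbolic product (Elemwise{mul,no_inplace}),
    # not a shared variable.
    W_scaled = W * 0.5

    x = T.matrix('x')
    cost = T.sum(T.dot(x, W_scaled))
    grad_W = T.grad(cost, W)

    # Fine: the update target is the shared variable W.
    train_ok = theano.function([x], cost, updates={W: W - 0.1 * grad_W})

    # Raises TypeError: ('update target must be a SharedVariable',
    # Elemwise{mul,no_inplace}.0), because W_scaled is only an expression.
    train_bad = theano.function([x], cost,
                                updates={W_scaled: W_scaled - 0.1 * grad_W})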

And the cost update in the RBM looks like this:

    cost = T.mean(self.free_energy(self.input)) - T.mean(
        self.free_energy(chain_end))
    # We must not compute the gradient through the Gibbs sampling.
    gparams = T.grad(cost, self.params, consider_constant=[chain_end])
    # Construct the update dictionary: one entry per parameter.
    for gparam, param in zip(gparams, self.params):
        # Make sure that the learning rate is of the right dtype.
        updates[param] = param - gparam * T.cast(
            learning_rate,
            dtype=theano.config.floatX
        )
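
For context, I believe this updates dictionary is eventually handed to theano.function, and that is the point where every key has to be a shared variable. This is only a sketch of my understanding, with hypothetical names (rbm, train_set_x, index, batch_size) modelled on the deeplearning.net RBM tutorial, not my exact training code:

    index = T.lscalar('index')  # minibatch index

    # get_cost_updates returns the cost above plus the updates dict
    # built in the loop (as in the deeplearning.net RBM tutorial).
    cost, updates = rbm.get_cost_updates(lr=learning_rate,
                                         persistent=None, k=1)

    # theano.function requires every key of `updates` to be a
    # SharedVariable; a scaled weight expression used as a key would
    # trigger the TypeError above.
    train_rbm = theano.function(
        [index],
        cost,
        updates=updates,
        givens={
            self.x: train_set_x[index * batch_size:(index + 1) * batch_size]
        }
    )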

0 Answers:

No answers yet.