I am trying to visualize the output of a neural network built with Lasagne. In particular, I modified the code of the MNIST example: https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py
At line 299 I inserted the following lines of code:
input_var = inputs
prediction = lasagne.layers.get_output(network, input_var)
print(prediction.eval())
sys.exit('debug')
This works if we choose the model 'mlp' at line 234:
def main(model='mlp', num_epochs=500):
However, if we select the model 'cnn' by changing line 234 as follows:
def main(model='cnn', num_epochs=500):
then the line
print(prediction.eval())
gives the following error:
Traceback (most recent call last):
File "/dos/mnist_lasagne_original.py", line 364, in <module>
main(**kwargs)
File "/dos/mnist_lasagne_original.py", line 299, in main
print(prediction.eval())
File "/usr/local/lib/python2.7/dist-packages/theano/gof/graph.py", line 523, in eval
rval = self._fn_cache[inputs](*args)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__
outputs = self.fn()
ValueError: CorrMM received weight with wrong type.
Apply node that caused the error: CorrMM{valid, (1, 1)}(TensorConstant{[[[[ 0. 0..0. 0.]]]]}, Subtensor{::, ::, ::int64, ::int64}.0)
Toposort index: 8
Inputs types: [TensorType(float32, (False, True, False, False)), TensorType(float64, 4D)]
Inputs shapes: [(500, 1, 28, 28), (32, 1, 5, 5)]
Inputs strides: [(3136, 3136, 112, 4), (200, 200, -40, -8)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Elemwise{Composite{(i0 * (Abs((i1 + i2)) + i1 + i2))}}(TensorConstant{(1, 1, 1, 1) of 0.5}, CorrMM{valid, (1, 1)}.0, InplaceDimShuffle{x,0,x,x}.0)]]
Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "/dos/mnist_lasagne_original.py", line 364, in <module>
main(**kwargs)
File "/dos/mnist_lasagne_original.py", line 298, in main
prediction=lasagne.layers.get_output(network,input_var)
File "/home/paul/src/lasagne/lasagne/layers/helper.py", line 185, in get_output
all_outputs[layer] = layer.get_output_for(layer_inputs, **kwargs)
File "/home/paul/src/lasagne/lasagne/layers/conv.py", line 257, in get_output_for
conved = self.convolve(input, **kwargs)
File "/home/paul/src/lasagne/lasagne/layers/conv.py", line 535, in convolve
filter_flip=self.flip_filters)
I have googled a lot, but I cannot figure out the source of this problem. I am interested in visualizing the output of the neural network in order to understand how it works. Any help would be greatly appreciated.
Answer 0 (score: 3)
After reading the documentation at http://lasagne.readthedocs.io/en/latest/user/layers.html#propagating-data-through-layers
I found the following solution (these debug lines are inserted at line 299 of the original code):
# Symbolic 4D input variable (batch, channel, row, column) instead of the raw NumPy array
x = theano.tensor.tensor4('x')
# Symbolic expression for the network output on that variable
y = lasagne.layers.get_output(network, x)
# Compile a Theano function and evaluate it on the actual MNIST batch
f = theano.function([x], y)
output = f(inputs)
print(output)
sys.exit('debug')
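The crucial difference is that get_output is given a symbolic input variable and the expression is compiled with theano.function, which is then called on the data. Passing the NumPy array directly, as in the question, embeds it in the graph as a float32 constant while the cnn weights are float64, which appears to be exactly the type mismatch CorrMM complains about in the traceback.
If the goal is to understand how the network works internally, the same pattern can be extended to every layer. Below is a minimal, untested sketch; it assumes that network and inputs are the variables already defined at that point in mnist.py:
import theano
import theano.tensor as T
import lasagne

# Symbolic input with the same dimensionality as an MNIST batch: (batch, channel, row, column)
x = T.tensor4('x')
# get_all_layers returns all layers of the network in topological order
layers = lasagne.layers.get_all_layers(network)
# get_output also accepts a list of layers and returns one expression per layer
activations = lasagne.layers.get_output(layers, x)
# Compile a single function that evaluates every layer output for the same input batch
f_all = theano.function([x], activations)
for layer, activation in zip(layers, f_all(inputs)):
    print("{}: {}".format(layer.__class__.__name__, activation.shape))
Note that get_output applies dropout by default; passing deterministic=True, as the example itself does for the validation pass, disables the dropout layers if you want to inspect the deterministic activations.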