Lasagne, MLP zero output

Date: 2015-10-24 12:42:56

Tags: python neural-network lasagne

While trying to get a simple MLP to learn, I was getting strange results. After stripping the code down to the bare essentials and shrinking the problem, I still get them.

Code

import numpy as np
import theano
import theano.tensor as T
import lasagne


dtype = np.float32
# Three one-hot states, shaped (batch, channels, rows, columns) for a tensor4 input
states = np.eye(3, dtype=dtype).reshape(3, 1, 1, 3)
# Target values the network should learn to produce for each state
values = np.array([[147, 148, 135, 147],
                   [147, 147, 149, 148],
                   [148, 147, 147, 147]], dtype=dtype)
output_dim = values.shape[1]
hidden_units = 50

# Network setup
inputs = T.tensor4('inputs')
targets = T.matrix('targets')

network = lasagne.layers.InputLayer(shape=(None, 1, 1, 3), input_var=inputs)
network = lasagne.layers.DenseLayer(network, hidden_units,
                                    nonlinearity=lasagne.nonlinearities.rectify)
# Output layer; no nonlinearity is specified here, so Lasagne's default applies
network = lasagne.layers.DenseLayer(network, output_dim)

prediction = lasagne.layers.get_output(network)
loss = lasagne.objectives.squared_error(prediction, targets).mean()
params = lasagne.layers.get_all_params(network, trainable=True)
updates = lasagne.updates.sgd(loss, params, learning_rate=0.01)

f_learn = theano.function([inputs, targets], loss, updates=updates)
f_test = theano.function([inputs], prediction)


# Training
it = 5000
for i in range(it):
    l = f_learn(states, values)
    print("Loss: " + str(l))

print("Expected:")
print(values)
print("Learned:")
print(f_test(states))
print("Last layer weights:")
print(lasagne.layers.get_all_param_values(network)[-1])

I would expect the network to learn the values given in the `values` variable, and often it does, but just as often it leaves some output nodes stuck at zero, with a huge loss.

Sample output

Loss: 5426.83349609
Expected:
[[ 147.  148.  135.  147.]
 [ 147.  147.  149.  148.]
 [ 148.  147.  147.  147.]]
Learned:
[[ 146.99993896    0.          134.99993896  146.99993896]
 [ 146.99993896    0.          148.99993896  147.99993896]
 [ 147.99995422    0.          146.99996948  146.99993896]]
Last layer weights:
[ 11.40957355   0.          11.36747837  10.98625183]
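
As an aside, a quick way to confirm which units are stuck is to check which output columns are identically zero across all inputs. This is a minimal diagnostic sketch reusing `f_test` and `states` from the code above:

out = f_test(states)
# Columns that are zero for every input correspond to the stuck output nodes
dead = np.all(out == 0, axis=0)
print("Dead output units: " + str(np.where(dead)[0]))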

Why does this happen?

1 Answer:

Answer 0 (score: 0)

I asked the same question in the Lasagne Google group and had more luck there: https://groups.google.com/forum/#!topic/lasagne-users/ock-2RqTaFk

Changing the rectifier units to a nonlinearity that tolerates negative output helped. The reason is that the output `DenseLayer` never specifies a nonlinearity, and Lasagne's `DenseLayer` defaults to `rectify`: once an output unit's pre-activation is driven negative during training, it emits zero and receives zero gradient, so under plain SGD it can never recover. That is the stuck-at-zero column in the learned output and in the last-layer weights above.
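
For reference, a minimal sketch of that fix, keeping the question's variables: make the output layer's nonlinearity explicit and linear (`lasagne.nonlinearities.linear`) instead of relying on the rectify default, so an output unit whose pre-activation goes negative still passes gradient and can recover.

# Same architecture as the question, but with an explicitly linear output
# layer so no output unit can get stuck at zero with a vanished gradient
network = lasagne.layers.InputLayer(shape=(None, 1, 1, 3), input_var=inputs)
network = lasagne.layers.DenseLayer(network, hidden_units,
                                    nonlinearity=lasagne.nonlinearities.rectify)
network = lasagne.layers.DenseLayer(network, output_dim,
                                    nonlinearity=lasagne.nonlinearities.linear)

The hidden rectifier units can still die individually, but with 50 of them there is enough redundancy to fit three data points; the unrecoverable failure is in a rectified output layer, where each target column depends on a single unit. A leaky nonlinearity such as `lasagne.nonlinearities.leaky_rectify` would be another way to apply the same advice.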