Linear regression with Lasagne / Theano

Date: 2016-01-16 18:22:44

Tags: python machine-learning regression theano lasagne

I am trying to do a simple multiple linear regression with Lasagne. Here is my input:

x_train = np.array([[37.93, 139.5, 329., 16.64,
                    16.81, 16.57, 1., 707.,
                    39.72, 149.25, 352.25, 16.61,
                    16.91, 16.60, 40.11, 151.5,
                    361.75, 16.95, 16.98, 16.79]]).astype(np.float32)
y_train = np.array([37.92, 138.25, 324.66, 16.28, 16.27, 16.28]).astype(np.float32)

For these two data points, the network should be able to learn y perfectly.

Here is the model:

i1 = T.matrix()
y = T.vector()
lay1 = lasagne.layers.InputLayer(shape=(None,20),input_var=i1)
out1 = lasagne.layers.get_output(lay1)
lay2 = lasagne.layers.DenseLayer(lay1, 6, nonlinearity=lasagne.nonlinearities.linear)
out2 = lasagne.layers.get_output(lay2)
params = lasagne.layers.get_all_params(lay2, trainable=True)
cost = T.sum(lasagne.objectives.squared_error(out2, y))
grad = T.grad(cost, params)
updates = lasagne.updates.sgd(grad, params, learning_rate=0.1) 
f_train = theano.function([i1, y], [out1, out2, cost], updates=updates)

After executing

f_train(x_train,y_train)

several times, the cost explodes to infinity. Any idea what is going wrong here?

Thanks!

1 answer:

Answer 0 (score: 0):

The network's capacity is far too large for a single training instance. You would need to apply some strong regularization to keep training from diverging. Alternatively, and hopefully more realistically, give it richer training data (many instances).
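To see why capacity is the issue: with one instance, the DenseLayer has 20 × 6 weights (plus 6 biases) but only 6 target values to satisfy, so infinitely many weight settings fit the data exactly. A minimal NumPy sketch (using the question's data, no Theano required) shows that even the minimum-norm least-squares solution already achieves a perfect fit:

```python
import numpy as np

# One training instance: 20 inputs, 6 targets. The dense layer has
# 20 * 6 weights but only 6 equations to satisfy, so exact-fit
# solutions form a large affine subspace.
x = np.array([[37.93, 139.5, 329., 16.64, 16.81, 16.57, 1., 707.,
               39.72, 149.25, 352.25, 16.61, 16.91, 16.60, 40.11,
               151.5, 361.75, 16.95, 16.98, 16.79]])
y = np.array([[37.92, 138.25, 324.66, 16.28, 16.27, 16.28]])

# The minimum-norm least-squares weights solve x @ W = y exactly here,
# without any bias term at all.
W, *_ = np.linalg.lstsq(x, y, rcond=None)
residual = np.abs(x @ W - y).max()
print(residual)  # essentially zero: a perfect fit
```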

With a single instance, the task can be solved using just one input instead of 20, and with the DenseLayer's bias disabled:

import numpy as np
import theano
import theano.tensor as T
import lasagne


def compile():
    x, z = T.matrices('x', 'z')
    # Single input feature; bias disabled (b=None), so one weight per
    # output is enough to fit the single instance exactly.
    lh = lasagne.layers.InputLayer(shape=(None, 1), input_var=x)
    ly = lasagne.layers.DenseLayer(lh, 6, nonlinearity=lasagne.nonlinearities.linear,
                                   b=None)
    y = lasagne.layers.get_output(ly)
    params = lasagne.layers.get_all_params(ly, trainable=True)
    cost = T.sum(lasagne.objectives.squared_error(y, z))
    updates = lasagne.updates.sgd(cost, params, learning_rate=0.0001)
    return theano.function([x, z], [y, cost], updates=updates)


def main():
    f_train = compile()

    x_train = np.array([[37.93]]).astype(theano.config.floatX)
    y_train = np.array([[37.92, 138.25, 324.66, 16.28, 16.27, 16.28]])\
        .astype(theano.config.floatX)

    for _ in range(100):
        print(f_train(x_train, y_train))


main()

Note that the learning rate also needs to be reduced considerably to prevent divergence.
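The threshold can be worked out by hand for the one-weight case above. With a single sample x and cost (w·x − y)², the gradient step multiplies the error by (1 − 2·lr·x²), which only contracts when lr < 1/x² ≈ 0.0007 for x = 37.93. A pure NumPy sketch (a simplified stand-in for the Theano graph) shows 0.0001 converging while the question's 0.1 blows up:

```python
import numpy as np

def sgd_scalar(lr, steps=50):
    # One scalar weight fitting y = w * x for a single sample;
    # cost = (w*x - y)**2, gradient = 2*x*(w*x - y).
    x, y = 37.93, 37.92
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * x * (w * x - y)
    return (w * x - y) ** 2

# Each step scales the error by (1 - 2*lr*x**2): contraction requires
# lr < 1 / x**2, roughly 0.0007 here.
print(sgd_scalar(0.0001))  # converges: cost shrinks toward zero
print(sgd_scalar(0.1))     # diverges: cost grows astronomically
```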