Why do the MLP's y_predict values all converge to the same value?

Date: 2019-08-17 06:24:12

Tags: python machine-learning neural-network

The model is an MLP. train_epoch_ch3 is the training function, and test_ch3 is the function that computes the test error.

I changed the initialization to Xavier and varied the number of layers and the number of units. I also applied regularization, dropout, and so on, but the result stays about the same as the initial test error. So I inspected the predictions on the test set and found that every sample gets one and the same value. Why does this happen?

# train
def train_epoch_ch3(net, train_iter, loss, updater):
    # ..omission...
    metric = [0.0, 0]  # cumulative loss, number of examples
    for X, y in train_iter:
        with autograd.record():
            y_hat = net(X)
            l = loss(y_hat, y)
        l.backward()
        updater(X.shape[0])
        a1 = [l.sum().asscalar(), y.size]
        metric = [a + b for a, b in zip(metric, a1)]
    return metric[0] / metric[1]

def train_ch3(net, train_iter, test_iter, loss, num_epochs, updater):
    for epoch in range(num_epochs):
        train_metrics = train_epoch_ch3(net, train_iter, loss, updater)
        test_acc, pred, label = test_ch3(net, test_iter)
        print("train_loss: %f,  test_loss: %f" % (train_metrics, test_acc))
    return pred, label


# net

batch_size=28
net = nn.Sequential()
net.add(nn.Dense(88, activation='relu'))
net.add(nn.Dense(70, activation='relu'), nn.Dropout(0.5))
net.add(nn.Dense(40, activation='relu'), nn.Dropout(0.5))
net.add(nn.Dense(20, activation='relu'), nn.Dropout(0.5))
net.add(nn.Dense(10, activation='relu'), nn.Dropout(0.5))
net.add(nn.Dense(1))
net.initialize(init=init.Xavier(),force_reinit=True) 

loss = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 0.01})
pred, label=train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)        
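For reference, Gluon's L2Loss computes half the squared error per example, so the quantity train_epoch_ch3 tracks is the summed loss divided by the number of examples. A minimal NumPy sketch of that formula, using hypothetical values shaped like the printouts below:

```python
import numpy as np

def l2_loss(y_hat, y):
    # Same formula as Gluon's L2Loss: half the squared error per example
    return 0.5 * (y_hat - y) ** 2

# Hypothetical values shaped like the printouts below
y_hat = np.array([1.0527818, 1.0527818, 1.0527818, 1.0527818])
y = np.array([1.2405351, 1.6082993, 2.0049977, 1.3558859])

# The quantity train_epoch_ch3 accumulates: summed loss / number of examples
avg_loss = l2_loss(y_hat, y).sum() / y.size
```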
  • Result of y_hat: [[1.0527818], [1.0527818], ....., [1.0527818]]

  • Result of the labels: [1.2405351 1.6082993 ..... 2.0049977 1.3558859 1.2837434 1.5723133 1.6031723 2.4376822 1.1756148 1.9688412]
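A minimal NumPy sketch of one way such a collapse can arise (an assumption for illustration, not a confirmed diagnosis of this network): if every hidden ReLU unit goes dead, i.e. its pre-activation is negative for all inputs, the hidden output is all zeros and the prediction reduces to the final layer's bias, one constant value for every sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 hypothetical samples with 8 features each
X = rng.normal(size=(5, 8))

# One hidden ReLU layer whose units are all "dead": the large negative
# bias makes every pre-activation negative, so ReLU outputs 0 everywhere.
W1 = rng.normal(size=(8, 4))
b1 = np.full(4, -100.0)
W2 = rng.normal(size=(4, 1))
b2 = np.array([1.0527818])  # value taken from the y_hat printout above

h = np.maximum(X @ W1 + b1, 0.0)  # all zeros: every unit is dead
y_hat = h @ W2 + b2               # collapses to the bias b2
print(y_hat.ravel())              # every entry is 1.0527818
```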

0 Answers:

There are no answers yet.