Multi-input linear regression cannot predict outputs with large variance

Date: 2020-08-13 07:50:48

Tags: python deep-learning neural-network pytorch linear-regression

I have successfully built a linear-regression neural network with 1 input and 1 output.

I am now building a linear-regression neural network with 5 inputs and 1 output.

Here is the formula: y = 3e + d^2 + 9c + 11b^6 + a + 19
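For reference, here is a minimal Python sketch of that target function. The column order (a, b, c, d, e) and the input range used below are only illustrative; the actual values come from testB_200.csv:

import numpy as np

def target(a, b, c, d, e):
    # y = 3e + d^2 + 9c + 11b^6 + a + 19
    return 3 * e + d ** 2 + 9 * c + 11 * b ** 6 + a + 19

# Illustrative only: 200 random rows with inputs in [0, 2] (assumed range).
# With inputs up to 2, the 11*b**6 term (up to ~704) dominates the other terms,
# so y varies over a much wider range than any single input.
rng = np.random.default_rng(0)
a, b, c, d, e = rng.uniform(0, 2, size=(5, 200))
y = target(a, b, c, d, e)
print(y.min(), y.max())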

However, no matter how many neurons, epochs, or hidden layers I use, I cannot get good predictions. The predicted output always stays within a small range, while the expected outputs vary widely.

[Image: Predicted output vs Expected output]

I suspect this may be due to my choice of activation function, loss function, and optimizer. If not, a multi-input neural network may need to be built in an alternative way.
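To illustrate what I mean by alternative choices, something along these lines (Adam instead of SGD, explicit ReLU modules instead of clamp); this is only a sketch of the options I am considering, not a configuration I have verified:

import torch
import torch.nn as nn

# Sketch of an alternative setup: same layer sizes, explicit ReLU activations,
# and the Adam optimizer instead of SGD (learning rate chosen arbitrarily).
alt_model = nn.Sequential(
    nn.Linear(5, 12), nn.ReLU(),
    nn.Linear(12, 8), nn.ReLU(),
    nn.Linear(8, 4), nn.ReLU(),
    nn.Linear(4, 1),
)
alt_criterion = nn.MSELoss()
alt_optimizer = torch.optim.Adam(alt_model.parameters(), lr=1e-3)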

Here is my code:

import torch
import torch.nn as nn    #neural network model
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch.nn.functional as F
from torch.autograd import Variable
from sklearn.preprocessing import MinMaxScaler
from pickle import dump

#Load datasets
dataset = pd.read_csv('testB_200.csv')

X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, -1:].values
X_scaler = MinMaxScaler()
Y_scaler = MinMaxScaler()
print(X_scaler.fit(X))
print(Y_scaler.fit(Y))
X = X_scaler.transform(X)
Y = Y_scaler.transform(Y)

#save the scaler
dump(X_scaler, open('X_scaler.pkl', 'wb'))
dump(Y_scaler, open('Y_scaler.pkl', 'wb'))

train = int(len(dataset) * 0.8)   # 80/20 train-test split
test = train                      # test rows start right after the training rows
print(train)
print(test)
x_temp_train = X[:train]
y_temp_train = Y[:train]
x_temp_test = X[test:]
y_temp_test = Y[test:]

X_train = torch.FloatTensor(x_temp_train)
Y_train = torch.FloatTensor(y_temp_train)
X_test = torch.FloatTensor(x_temp_test)
Y_test = torch.FloatTensor(y_temp_test)

D_in = 5 # D_in is input features
H = 12 # H is hidden dimension
H2 = 8 # H2 is second hidden dimension
H3 = 4 # H3 is third hidden dimension

D_out = 1 # D_out is output features.

#Define an Artificial Neural Network model
class Net(nn.Module):
#------------------3 hidden Layers------------------------------
    def __init__(self, D_in, H, H2, H3, D_out):
        super(Net, self).__init__()

        self.linear1 = nn.Linear(D_in, H)  
        self.linear2 = nn.Linear(H, H2)
        self.linear3 = nn.Linear(H2, H3)
        self.linear4 = nn.Linear(H3, D_out)
        
    def forward(self, x):
        #activation function should be used here  e.g: hidden = F.relu(...)
        h_relu = self.linear1(x).clamp(min=0) #min=0 is like ReLU
        middle = self.linear2(h_relu).clamp(min=0)
        middle2 = self.linear3(middle).clamp(min=0)
        prediction = self.linear4(middle2)

        return prediction

model = Net(D_in, H, H2, H3, D_out)
print(model)

#Define a Loss function and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.2) #lr = learning rate = 0.2

#Training model
inputs = Variable(X_train)
outputs = Variable(Y_train)
inputs_val = Variable(X_test)
outputs_val = Variable(Y_test)
loss_values = []
val_values = []
epoch = []
epoch_value = 25

for i in range(epoch_value):
    for phase in ['train', 'val']:
        if phase == 'train':
            #print('train loss')
            model.train()  # Set model to training mode
            prediction = model(inputs)
            loss = criterion(prediction, outputs) 
            #print(loss)
            loss_values.append(loss.item())
            optimizer.zero_grad() #zero the parameter gradients
            epoch.append(i)
            loss.backward()       #compute gradients(dloss/dx)
            optimizer.step()      #updates the parameters
        elif phase == 'val':
            #print('validation loss')        
            model.eval()   # Set model to evaluate mode
            prediction_val = model(inputs_val)
            loss_val = criterion(prediction_val, outputs_val) 
            #print(loss_val)
            val_values.append(loss_val.item())
            optimizer.zero_grad() #zero the parameter gradients

torch.save(model.state_dict(), 'formula2.pth')  #save model

#Plot train_loss vs validation loss
plt.plot(epoch,loss_values)
plt.plot(epoch, val_values)
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','validation'], loc='upper left')
plt.show()

#plot prediction vs expected value
prediction_val = prediction_val.detach().numpy()
prediction_val = Y_scaler.inverse_transform(prediction_val)
#print('predict')
#print(prediction_val)
Y_test = Y_scaler.inverse_transform(Y_test)
#print('test')
#print(Y_test)
plt.plot(Y_test)
plt.plot(prediction_val)
plt.legend(['expected','predict'], loc='upper left')
plt.show()

[Image: Model loss vs validation loss]

[Image: Validation vs expected outputs]
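For completeness, a minimal sketch of how the saved scalers and weights could be loaded back for inference; it assumes the Net class and dimensions defined above, and the example input row is an arbitrary placeholder:

from pickle import load
import torch

# Reload the fitted scalers and the trained weights saved above.
X_scaler = load(open('X_scaler.pkl', 'rb'))
Y_scaler = load(open('Y_scaler.pkl', 'rb'))
model = Net(D_in, H, H2, H3, D_out)
model.load_state_dict(torch.load('formula2.pth'))
model.eval()

# Predict y for one new row [a, b, c, d, e] (values are placeholders).
x_new = X_scaler.transform([[1.0, 1.0, 1.0, 1.0, 1.0]])
with torch.no_grad():
    y_scaled = model(torch.FloatTensor(x_new))
print(Y_scaler.inverse_transform(y_scaled.numpy()))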

Thank you for your time.

0 Answers:

There are no answers yet.