Multi-task regression problem using PyTorch (problem: same output for all test data)

Time: 2019-08-05 20:10:47

Tags: python neural-network pytorch multitasking non-linear-regression

I am working on a multi-task regression problem.

Input shape: 200 × 60000, output shape: 200 × 3 (here 200 = total number of samples, 60000 = number of features).

So for each data point, I have to predict 3 continuous values.
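For concreteness, a minimal sketch of the tensor shapes described above, using random stand-in data (`X` and `y` are hypothetical placeholders, not my actual dataset):

```python
import torch

# Hypothetical stand-in data matching the shapes in the question:
# 200 samples, 60000 features, 3 continuous targets per sample
X = torch.randn(200, 60000)   # input: 200 x 60000
y = torch.randn(200, 3)       # target: 200 x 3

print(X.shape)  # torch.Size([200, 60000])
print(y.shape)  # torch.Size([200, 3])
```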

Sample code:


import torch
import torch.nn as nn
import torch.optim as optim

class Classifier(nn.Module):
    def __init__(self,input_nodes):
        super(Classifier, self).__init__()
        self.input_nodes = input_nodes

        self.sharedlayer = nn.Sequential(
            nn.Linear(input_nodes, 300),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(300, 100),
            nn.ReLU(),
            nn.Dropout(),
        )


        self.att1 = nn.Sequential(
            nn.Linear(100, 40),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(40, 20),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(20, 1)
        )
        self.att2 = nn.Sequential(
            nn.Linear(100, 40),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(40, 20),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(20, 1)
        )
        self.att3 = nn.Sequential(
            nn.Linear(100, 40),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(40, 20),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(20, 1)
        )

    def forward(self, x):

        h_shared = self.sharedlayer(x)
        out1 = self.att1(h_shared)
        out2 = self.att2(h_shared)
        out3 = self.att3(h_shared)

        return out1, out2, out3

# the model must be instantiated before the optimizer is built
model = Classifier(input_nodes=60000)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

for epoch in range(n_epochs):
    running_loss = 0
    model.train()
    for i, (data, label) in enumerate(trainloader):
        out1, out2, out3 = model(data)

        # one MSE loss per task, against the matching target column
        l1 = criterion(out1, label[:, 0].view(-1, 1))
        l2 = criterion(out2, label[:, 1].view(-1, 1))
        l3 = criterion(out3, label[:, 2].view(-1, 1))

        loss = l1 + l2 + l3
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
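One thing I make sure of at test time: the network uses `nn.Dropout`, which stays active unless `model.eval()` is called, so predictions would otherwise be stochastic. A minimal sketch with a small stand-in network (the layer sizes here are illustrative, not the ones from my model):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in network containing Dropout, mirroring the structure above
net = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Dropout(), nn.Linear(20, 1))
x = torch.randn(4, 10)

net.train()
a, b = net(x), net(x)      # Dropout active: repeated passes generally differ

net.eval()
with torch.no_grad():
    c, d = net(x), net(x)  # Dropout disabled: passes are deterministic

print(torch.equal(c, d))   # True
```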

Problem: the model always produces the same value for all test data.

Example: suppose there are 3 test samples:

For output 1: 3.5 3.5 3.5

For output 2: 9.5 9.5 9.5

For output 3: 0.2 0.2 0.2

Could you please help me figure out where the problem is?

Why does it generate the same value for all test data?
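One check I have seen suggested for this symptom is standardizing the inputs, since unscaled high-dimensional features (combined with a fairly large Adam learning rate like 0.01) can push the network toward predicting the per-task target mean for every sample. A sketch of per-feature standardization (`X` here is a hypothetical stand-in for my feature matrix):

```python
import torch

# Hypothetical stand-in for the 200 x 60000 feature matrix,
# deliberately shifted and scaled away from zero mean / unit variance
X = torch.randn(200, 60000) * 50 + 10

# Standardize each feature column to zero mean and unit variance
X_std = (X - X.mean(dim=0)) / (X.std(dim=0) + 1e-8)

print(X_std.mean().abs().item() < 1e-3)  # columns are roughly zero-centered
```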

0 Answers