Regression loss function is not correct

Date: 2017-08-03 16:48:25

Tags: python machine-learning deep-learning pytorch

I'm trying a basic averaging example, but the validation accuracy and the loss don't match, and the network fails to converge if I increase the training time. I'm training a network with 2 hidden layers, each 500 units wide, on three integers drawn from the range [0, 9], with a learning rate of 1e-1, Adam, a batch size of 1, and dropout, for 3000 iterations, and I validate every 100 iterations. A prediction counts as correct if the absolute difference between the hypothesis and the label is less than a threshold, which I set to 1 here. Could someone tell me whether this is an issue with the choice of loss function, a bug in Pytorch, or something I'm doing wrong? Below are the relevant snippets and some plots:

val_diff = 1
acc_diff = torch.FloatTensor([val_diff]).expand(self.batch_size)

Looped 100 times during validation:

num_correct += torch.sum(torch.abs(val_h - val_y) < acc_diff)

Appended after each validation phase:

validate.append(num_correct / total_val)
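Putting those fragments together, one whole validation phase looks roughly like this (a sketch only; the batch fetching mirrors the training code below, and val_h is simply the network's output on a validation batch):

# sketch of one validation phase, assembled from the fragments above
num_correct = 0
for _ in range(val_size):  # val_size = 100 iterations per phase
    val_x, val_y = dh.get_batch(self.batch_size)
    val_x = self.tensor_to_Variable(val_x)
    val_y = self.tensor_to_Variable(val_y)
    val_h = self.network(val_x)
    # correct if |hypothesis - label| < val_diff (threshold of 1)
    num_correct += torch.sum(torch.abs(val_h - val_y) < acc_diff)

# fraction of correct predictions in this phase
validate.append(num_correct / total_val)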

Here are some examples of (hypothesis, label) pairs:

[...(-0.7043088674545288, 6.0), (-0.15691305696964264, 2.6666667461395264),
 (0.2827358841896057, 3.3333332538604736)]

I tried six of the loss functions in the API that are typically used for regression:

torch.nn.L1Loss(size_average=False) [plot]

torch.nn.L1Loss() [plot]

torch.nn.MSELoss(size_average=False) [plot]

torch.nn.MSELoss() [plot]

torch.nn.SmoothL1Loss(size_average=False) [plot]

torch.nn.SmoothL1Loss() [plot]
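For reference, here is how those criteria differ on a single batch of (hypothesis, label) values, and what size_average=False changes (a standalone sketch using the example values above, not part of my training code):

import torch
import torch.nn as nn
from torch.autograd import Variable

h = Variable(torch.FloatTensor([-0.7043, -0.1569, 0.2827]))  # hypotheses
y = Variable(torch.FloatTensor([6.0, 2.6667, 3.3333]))       # labels

for loss_fn in (nn.L1Loss(), nn.MSELoss(), nn.SmoothL1Loss()):
    # by default the loss is averaged over all elements
    print(type(loss_fn).__name__, loss_fn(h, y).data[0])

# size_average=False sums instead of averaging, so the loss (and the
# gradients) scale with the number of elements
print('summed MSE:', nn.MSELoss(size_average=False)(h, y).data[0])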

Thanks.

Network code:

class Feedforward(nn.Module):
    def __init__(self, topology):
        super(Feedforward, self).__init__()
        self.input_dim     = topology['features']
        self.num_hidden    = topology['hidden_layers']
        self.hidden_dim    = topology['hidden_dim']
        self.output_dim    = topology['output_dim']
        self.input_layer   = nn.Linear(self.input_dim, self.hidden_dim)
        self.hidden_layer  = nn.Linear(self.hidden_dim, self.hidden_dim)
        self.output_layer  = nn.Linear(self.hidden_dim, self.output_dim)
        self.dropout_layer = nn.Dropout(p=0.2)


    def forward(self, x):
        batch_size = x.size()[0]
        feat_size  = x.size()[1]
        input_size = batch_size * feat_size

        self.input_layer = nn.Linear(input_size, self.hidden_dim).cuda()
        hidden = self.input_layer(x.view(1, input_size)).clamp(min=0)

        for _ in range(self.num_hidden):
            hidden = self.dropout_layer(F.relu(self.hidden_layer(hidden)))

        output_size = batch_size * self.output_dim
        self.output_layer = nn.Linear(self.hidden_dim, output_size).cuda()
        return self.output_layer(hidden).view(output_size)

Training code:

def train(self):
    if self.cuda:
        self.network.cuda()

    dh        = DataHandler(self.data)
    # loss_fn = nn.L1Loss(size_average=False)
    # loss_fn = nn.L1Loss()
    # loss_fn = nn.SmoothL1Loss(size_average=False)
    # loss_fn = nn.SmoothL1Loss()
    # loss_fn = nn.MSELoss(size_average=False)
    loss_fn   = torch.nn.MSELoss()
    losses    = []
    validate  = []
    hypos     = []
    labels    = []
    val_size  = 100
    val_diff  = 1
    total_val = float(val_size * self.batch_size)

    for i in range(self.iterations):
        x, y = dh.get_batch(self.batch_size)
        x = self.tensor_to_Variable(x)
        y = self.tensor_to_Variable(y)

        self.optimizer.zero_grad()
        loss = loss_fn(self.network(x), y)
        loss.backward()
        self.optimizer.step()
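The optimizer itself isn't shown here; it is created before train() runs, presumably along these lines given the setup described at the top (an assumption, since that code isn't in the post):

import torch.optim as optim

# assumed setup: Adam with the 1e-1 learning rate mentioned above
self.optimizer = optim.Adam(self.network.parameters(), lr=1e-1)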

1 Answer:

Answer 0 (score: 1)

It looks like you have misunderstood how layers in pytorch work; here are a few tips:

  • When you do nn.Linear(...) inside forward, you are defining new layers instead of using the ones you pre-defined in your network's __init__. Therefore, the network cannot learn anything, because the weights are constantly reinitialized.

  • You shouldn't need to call .cuda() inside net.forward(...), since you have already copied the network onto the gpu in train by calling self.network.cuda().

  • Ideally, the input to net.forward(...) should directly have the shape of the first layer, so you won't have to modify it. Here you should have x.size() <=> Linear --> (Batch_size, Features).

Your forward should look something like this:

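(sketch; a reconstruction that keeps only the layers defined in __init__, rather than creating new nn.Linear modules on each call)

def forward(self, x):
    # x arrives with shape (batch_size, features), matching input_layer
    x = F.relu(self.input_layer(x))

    for _ in range(self.num_hidden):
        x = self.dropout_layer(F.relu(self.hidden_layer(x)))

    # reuse output_layer from __init__ instead of building a new one here
    return self.output_layer(x)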