Model loss not changing in PyTorch

Date: 2021-02-13 15:17:04

Tags: pytorch

I am working with a huge dataset in PyTorch.
Here are my model and training code:

import torch
import torch.nn.functional as F
from torch.autograd import Variable

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Branch 1: 13 -> 512 -> 64 -> 10
        self.fc1_1 = torch.nn.Linear(13, 512)
        self.fc1_2 = torch.nn.Linear(512, 64)
        self.fc1_3 = torch.nn.Linear(64, 10)

        # Branch 2: 13 -> 64 -> 512 -> 10
        self.fc2_1 = torch.nn.Linear(13, 64)
        self.fc2_2 = torch.nn.Linear(64, 512)
        self.fc2_3 = torch.nn.Linear(512, 10)

        # Branch 3: 13 -> 128 -> 128 -> 10
        self.fc3_1 = torch.nn.Linear(13, 128)
        self.fc3_2 = torch.nn.Linear(128, 128)
        self.fc3_3 = torch.nn.Linear(128, 10)

        # Head: concatenated branch outputs (3 * 10 = 30) -> 64 -> 128 -> 2
        self.fc_full_1 = torch.nn.Linear(30, 64)
        self.fc_full_2 = torch.nn.Linear(64, 128)
        self.fc_full_3 = torch.nn.Linear(128, 2)

    def forward(self, x):
        # Three parallel branches over the same 13 input features
        x1 = F.relu(self.fc1_1(x))
        x1 = F.relu(self.fc1_2(x1))
        x1 = F.relu(self.fc1_3(x1))

        x2 = F.relu(self.fc2_1(x))
        x2 = F.relu(self.fc2_2(x2))
        x2 = F.relu(self.fc2_3(x2))

        x3 = F.relu(self.fc3_1(x))
        x3 = F.relu(self.fc3_2(x3))
        x3 = F.relu(self.fc3_3(x3))

        # Concatenate the branches and run the classification head;
        # the last layer returns raw logits for CrossEntropyLoss
        x = torch.cat((x1, x2, x3), dim=1)
        x = F.relu(self.fc_full_1(x))
        x = F.relu(self.fc_full_2(x))
        x = self.fc_full_3(x)

        return x

model = Net()
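
As a quick sanity check with dummy data (shapes inferred from the layer definitions above, not from the original post), a forward pass should map each 13-feature sample to 2 raw logits:

dummy = torch.randn(4, 13)  # hypothetical batch: 4 samples, 13 features
out = model(dummy)
print(out.shape)            # torch.Size([4, 2]) -- raw logits for CrossEntropyLoss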

As shown above, these are just fully connected layers. The loss function and optimizer are cross-entropy loss and Adam:

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model1.parameters(), lr=0.05)

This is the training code:

for epoch in range(100):
    model.train()
    x_var = Variable(torch.FloatTensor(x_train))
    y_var = Variable(torch.LongTensor(y_train))

    optimizer.zero_grad()
    train_pred = model(x_var)
    loss = criterion(train_pred, y_var)
    loss.backward()
    optimizer.step()

    train_acc = calc_accuracy(train_pred, y_var)
    loss = loss.data.numpy()

Finally, here are the printed loss and accuracy values:

Epoch  0
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121
Epoch  10
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121
Epoch  20
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121
Epoch  30
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121
Epoch  40
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121
Epoch  50
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121
Epoch  60
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121
Epoch  70
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121
Epoch  80
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121
Epoch  90
0.6900209 0.531578947368421
valid:   0.692668 0.4621212121212121

As shown above, the model's training loss and validation loss do not change at all. What could be the problem?

1 Answer:

Answer 0 (score: 1)

Your optimizer does not use your model's parameters, but some other model1's parameters:

optimizer = torch.optim.Adam(model1.parameters(), lr=0.05)
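
The fix is to pass model.parameters() instead:

optimizer = torch.optim.Adam(model.parameters(), lr=0.05)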

By the way, you do not need to call model.train() for every epoch.
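
For example, a minimal sketch of the corrected loop (assuming plain tensors can replace the deprecated Variable wrapper, and keeping the asker's calc_accuracy helper):

model.train()  # training mode only needs to be set once, before the loop
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)  # note: model, not model1
x_var = torch.FloatTensor(x_train)
y_var = torch.LongTensor(y_train)

for epoch in range(100):
    optimizer.zero_grad()
    train_pred = model(x_var)
    loss = criterion(train_pred, y_var)
    loss.backward()
    optimizer.step()

    train_acc = calc_accuracy(train_pred, y_var)  # asker's helper function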