PyTorch: transferring the weights of a pretrained model to another model

Time: 2020-01-21 10:30:04

Tags: python pytorch transfer-learning

I am new to Python and PyTorch, and I am trying to transfer the layers and weights of a pretrained model to another model for regression.

The two models have the same structure, and I want to transfer all of the pretrained model's layers and weights except for the last layer.

Please let me know how to do this.

Here is the code for my pretrained model.

import torch

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden1, n_hidden2, n_hidden3, n_hidden4, n_hidden5, n_output):
        super(Net, self).__init__()
        # five fully connected hidden layers followed by a single regression output
        self.hidden1 = torch.nn.Linear(n_feature, n_hidden1)
        self.hidden2 = torch.nn.Linear(n_hidden1, n_hidden2)
        self.hidden3 = torch.nn.Linear(n_hidden2, n_hidden3)
        self.hidden4 = torch.nn.Linear(n_hidden3, n_hidden4)
        self.hidden5 = torch.nn.Linear(n_hidden4, n_hidden5)
        self.predict = torch.nn.Linear(n_hidden5, n_output)

    def forward(self, x):
        # ReLU on each hidden layer; linear output for regression
        x = torch.relu(self.hidden1(x))
        x = torch.relu(self.hidden2(x))
        x = torch.relu(self.hidden3(x))
        x = torch.relu(self.hidden4(x))
        x = torch.relu(self.hidden5(x))
        x = self.predict(x)
        return x


net = Net(n_feature=4, n_hidden1=100, n_hidden2=80, n_hidden3=50, n_hidden4=35, n_hidden5=20, n_output=1)
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
loss_func = torch.nn.MSELoss()

# xs_train and ys_train are my training tensors (defined elsewhere)
for t in range(20000):
    prediction = net(xs_train)
    loss = loss_func(prediction, ys_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
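
From what I have read, one way might be to copy the pretrained state_dict while skipping the keys of the last layer, roughly as in the sketch below (net2 is just a second instance of my Net class, and 'predict' is the name of my last layer), but I am not sure whether this is the right approach.

# rough sketch of what I am thinking of trying: copy everything except the last layer
net2 = Net(n_feature=4, n_hidden1=100, n_hidden2=80, n_hidden3=50, n_hidden4=35, n_hidden5=20, n_output=1)

# keep only the parameters that do not belong to the final 'predict' layer
pretrained_dict = {k: v for k, v in net.state_dict().items() if not k.startswith('predict')}

# strict=False allows the missing 'predict.*' keys, so net2's last layer keeps its fresh initialization
net2.load_state_dict(pretrained_dict, strict=False)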

0 Answers:

No answers yet.