PyTorch: saving and loading weights in transfer learning?

Date: 2020-03-29 11:23:10

Tags: pytorch transfer-learning

import torch
import torch.nn as nn
import torchvision.models as models

class MyResNeXt(models.resnet.ResNet):
    def __init__(self, training=True):
        # ResNeXt-50 32x4d configuration
        super(MyResNeXt, self).__init__(block=models.resnet.Bottleneck,
                                        layers=[3, 4, 6, 3],
                                        groups=32,
                                        width_per_group=4)
        # `checkpoint` is a pre-loaded state_dict (defined elsewhere)
        self.load_state_dict(checkpoint)
        # Replace the classifier head with a single-output layer
        self.fc = nn.Linear(2048, 1)

model = MyResNeXt().to(gpu)  # `gpu` is a torch.device defined elsewhere
# Inspect all parameter names
[k for k, v in model.named_parameters()]
# Freeze everything before "layer4.0.conv1.weight" (user-defined helper)
freeze_until(model, "layer4.0.conv1.weight")
# Confirm which parameters remain trainable
[k for k, v in model.named_parameters() if v.requires_grad]
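`freeze_until` is not part of torchvision; it is a user-defined helper. A minimal sketch of what it might do (an assumption here: freeze every parameter that appears before the named one in `named_parameters()` order) could look like:

```python
import torch.nn as nn

def freeze_until(net: nn.Module, param_name: str) -> None:
    """Freeze all parameters up to (but not including) `param_name`."""
    reached = False
    for name, param in net.named_parameters():
        if name == param_name:
            reached = True
        param.requires_grad = reached

# Example on a small model: freeze everything before "2.weight"
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
freeze_until(model, "2.weight")
trainable = [k for k, v in model.named_parameters() if v.requires_grad]
print(trainable)  # ['2.weight', '2.bias']
```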

I am trying to train a ResNeXt via transfer learning.
I froze some layers and trained the model,
then saved it:

 torch.save(model.state_dict(), path + f'resnext_fullv2.pth')
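Note that `state_dict()` contains every parameter and buffer of the module regardless of `requires_grad`, so this saves the whole model's weights, frozen layers included. A quick check on a toy model:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 1))
# Freeze the first layer only
for p in model[0].parameters():
    p.requires_grad = False

# state_dict still lists ALL parameters, frozen ones included
print(sorted(model.state_dict().keys()))
# ['0.bias', '0.weight', '1.bias', '1.weight']
```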

In this case, does this save the weights of the entire model, or only the trained (unfrozen) weights?
When loading it back, I got a "missing keys" error:


RuntimeError: Error(s) in loading state_dict for MyResNeXt:
    Missing key(s) in state_dict: "conv1.weight", "bn1.weight", "bn1.bias", "bn1.running_mean", "bn1.running_var", "layer1.0.conv1.weight", "layer1.0.bn1.weight", "layer1.0.bn1.bias", "layer1.0.bn1.running_mean", "layer1.0.bn1.running_var", "layer1.0.conv2.weight", "layer1.0.bn2.weight", "layer1.0.bn2.bias", "layer1.0.bn2.running_mean", "layer1.0.bn2.running_var", "layer1.0.conv3.weight", "layer1.0.bn3.weight", "layer1.0.bn3.bias", "layer1.0.bn3.running_mean", "layer1.0.bn3.running_var", "layer1.0.downsample.0.weight", "layer1.0.downsample.1.weight", "layer1.0.downsample.1.bias", "layer1.0.downsample.1.running_mean", "layer1.0.downsample.1.running_var", "layer1.1.conv1.weight", "layer1.1.bn1.weight", "layer1.1.bn1.bias", "layer1.1.bn1.running_mean", "layer1.1.bn1.running_v
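"Missing key(s)" means the checkpoint being loaded lacks entries the model expects. One common cause (an assumption here, since the full saving/loading code is not shown) is saving from a model wrapped in `nn.DataParallel`, which prefixes every key with `module.`; stripping that prefix restores a dict a bare model can load:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
wrapped = nn.DataParallel(model)   # keys become "module.weight", "module.bias"
sd = wrapped.state_dict()

# Strip the "module." prefix so the dict matches a bare model again
cleaned = {k.removeprefix("module."): v for k, v in sd.items()}
fresh = nn.Linear(4, 2)
fresh.load_state_dict(cleaned)     # loads without missing keys
```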

0 Answers:

No answers