Sharing parameters between some layers of different instances of the same PyTorch model

Asked: 2018-09-01 16:35:59

Tags: python deep-learning pytorch

I have a multi-layer PyTorch model that looks like this:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()  # must pass self, or nn.Module.__init__ never runs
        self.layer1 = nn.Conv2d(#parameters)
        self.layer2 = nn.Conv2d(#different_parameters)
        self.layer3 = nn.Conv2d(#other_parameters)
        self.layer4 = nn.Conv2d(#final_parameters)

    def forward(self, x):
        out1 = self.layer2(F.relu(self.layer1(x)))
        out2 = self.layer4(F.relu(self.layer3(x)))
        return torch.cat((out1, out2), 0)

I would then like to instantiate multiple instances of this class (cnn1, cnn2) and share the parameters of the first path (layer1, layer2) across the instances, while keeping the other parameters separate.

Is there a best/supported way to do this?

1 Answer:

Answer 0 (score: 0)

Simply pull layer1 and layer2 out into a standalone module.
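Applied to the two-path CNN from the question, a minimal sketch could look like this (the channel sizes are invented placeholders, since the question elides the real ones):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Build the shared path once; every instance that receives it holds a
# reference to the same module objects, and therefore the same weights.
shared_path = nn.Sequential(nn.Conv2d(3, 8, 3),   # "layer1" (placeholder sizes)
                            nn.ReLU(),
                            nn.Conv2d(8, 16, 3))  # "layer2"

class CNN(nn.Module):
    def __init__(self, shared):
        super(CNN, self).__init__()
        self.shared = shared                  # shared across instances
        self.layer3 = nn.Conv2d(3, 8, 3)      # private to each instance
        self.layer4 = nn.Conv2d(8, 16, 3)

    def forward(self, x):
        out1 = self.shared(x)
        out2 = self.layer4(F.relu(self.layer3(x)))
        return torch.cat((out1, out2), 0)

cnn1 = CNN(shared_path)
cnn2 = CNN(shared_path)
print(cnn1.shared[0].weight is cnn2.shared[0].weight)  # True: one tensor

A fuller example follows.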

An example: model1 and model2 have their own private fully connected layers but share the conv2d layers, i.e. the feature extractor:

from collections import OrderedDict

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

# Module-level feature extractor: it is built once, so both models below
# call the exact same conv layers.
feature_ex = nn.Sequential(OrderedDict([('conv1', nn.Conv2d(1, 6, 5)),
                                        ('relu1', nn.ReLU()),
                                        ('maxpool1', nn.MaxPool2d((2, 2))),
                                        ('conv2', nn.Conv2d(6, 16, 5)),
                                        ('relu2', nn.ReLU()),
                                        ('maxpool2', nn.MaxPool2d(2))
                                        ]))


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = feature_ex(x)   # [1]
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)     # [2]

        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:       # Get the products
            num_features *= s
        return num_features


model1 = Net()
model2 = Net()

img = torch.randn(10, 1, 32, 32)
out1 = model1(img)  # call the module directly instead of .forward()
out2 = model2(img)

# [1] If forward() returned right after feature_ex(x), the two outputs
# would be identical, because the extractor weights are shared:
# print(np.allclose(out1.detach().numpy(), out2.detach().numpy()))
# output: True

# [2] After the private fully connected layers, the outputs differ:
print(np.allclose(out1.detach().numpy(), out2.detach().numpy()))
# output: False
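One caveat the answer does not mention: because feature_ex is never assigned as an attribute of Net, its weights do not appear in model1.parameters(), so an optimizer built from a model's parameters alone would leave the shared extractor untrained. A minimal sketch of one way around that, assuming the setup above:

import itertools
import torch.optim as optim

# Chain the shared extractor's parameters with each model's own so that a
# single optimizer step updates all of them; since feature_ex is not a
# submodule of either model, nothing is duplicated in the chain.
optimizer = optim.SGD(itertools.chain(feature_ex.parameters(),
                                      model1.parameters(),
                                      model2.parameters()),
                      lr=0.01)

Alternatively, passing feature_ex into Net.__init__ and assigning it to an attribute registers it as a submodule, so each model's parameters() then includes the shared weights (in that case, avoid handing the optimizer the same parameters twice).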