After pruning some channels, how do I feed the Conv layer's output into the FC (fully connected) layer?

Asked: 2019-06-19 09:22:27

Tags: deep-learning conv-neural-network pruning

I have the output of a Conv layer and want to feed it into an FC layer. Say the Conv layer has 50 output channels and 2 of them have been zeroed out (pruned), so the FC layer's input should be 48 (50 - 2) channels. But my code still feeds all 50 channels instead of 48.
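
To make the mismatch concrete, here is a toy sketch (made-up shapes, not my actual network) of what goes wrong when conv2 shrinks to 48 channels while fc1 is still built for 50:

import torch
import torch.nn as nn

conv2 = nn.Conv2d(20, 48, kernel_size=5)        # pruned: 48 instead of 50 output channels
pool  = nn.MaxPool2d(2)
fc1   = nn.Linear(50 * 4 * 4, 500, bias=False)  # still expects 800 input features

x    = torch.randn(1, 20, 12, 12)               # LeNet-style input to conv2
feat = pool(conv2(x))                           # -> (1, 48, 4, 4)
flat = feat.view(feat.size(0), -1)              # 48 * 4 * 4 = 768 features
# fc1(flat)                                     # size-mismatch RuntimeError: 768 vs 800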

I am sharing a code snippet below.

Please help! Thanks a lot!

import numpy as np
import torch
import torch.nn as nn

class LeNet5(nn.Module):

    def __init__(self, W_init=None, B_init=None, Conv_chs=[20, 50]):
        # Conv_chs[1] = 50 is the channel count I mentioned above

        super(LeNet5, self).__init__()

        self.conv1 = nn.Conv2d(1, Conv_chs[0], kernel_size=5)
        self.mp1   = nn.MaxPool2d(2)

        self.conv2 = nn.Conv2d(Conv_chs[0], Conv_chs[1], kernel_size=5)
        self.mp2   = nn.MaxPool2d(2)

        # fc1's input size is tied to the number of conv2 output channels (4x4 maps after pooling)
        self.fc1   = nn.Linear(Conv_chs[1]*4*4, 500, bias=False)
        self.fc2   = nn.Linear(500, 10, bias=False)

        self.Conv_chs = Conv_chs

        self.conv_layers = [self.conv1, self.conv2]
        self.fc_layers = [self.fc1, self.fc2]

        if W_init is not None:
            for i, layer in enumerate(self.conv_layers + self.fc_layers):
                layer.weight.data = torch.tensor(W_init[i], device=dev)
            for i, layer in enumerate(self.conv_layers):
                layer.bias.data = torch.tensor(B_init[i], device=dev)

    def forward(self, inp):
        .........
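
To spell out the coupling with made-up numbers: fc1's in_features is Conv_chs[1]*4*4, so after pruning two conv2 channels a rebuilt fc1 would expect 48*4*4 = 768 inputs, while the fc1 entry in W_init still has 50*4*4 = 800 columns.

Conv_chs_before = [20, 50]
Conv_chs_after  = [20, 48]                    # two conv2 output channels pruned away

fc1_in_before = Conv_chs_before[1] * 4 * 4    # 800
fc1_in_after  = Conv_chs_after[1] * 4 * 4     # 768
# a W_init entry of shape (500, 800) no longer fits nn.Linear(768, 500, bias=False)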

I believe I have to change something in the following prune function: W_init has to go from 50 channels down to 48. A rough idea of what I mean is sketched after the function.

def prune(net, epsilon):
    global dev
    remove_out_fltr = {k: [] for k in range(len(net.conv_layers) + 1)}
    Conv_chs = []
    W_init = []
    B_init = []

    for i, layer in enumerate(net.conv_layers):
        wt = layer.weight.cpu().data.numpy()   # shape: (out_ch, in_ch, h, w)
        bias = layer.bias.cpu().data.numpy()
        wt_shp = wt.shape

        # an output channel is pruned when every weight in it is <= epsilon
        for ch in range(wt.shape[0]):
            prune_mask = wt[ch, :] <= epsilon * np.ones(wt.shape[1:])
            if False not in prune_mask:
                # remove the out channel ch
                remove_out_fltr[i + 1].append(ch)

        # an input channel is pruned when every weight reading from it is <= epsilon
        for ch in range(wt.shape[1]):
            prune_mask = wt[:, ch, :, :] <= epsilon * np.ones([wt.shape[0], wt.shape[2], wt.shape[3]])
            if False not in prune_mask:
                # remove the in channel ch
                remove_out_fltr[i].append(ch)

        remove_out_fltr = {k: sorted(list(set(remove_out_fltr[k]))) for k in remove_out_fltr}

        print("Layer", i + 1, ':', remove_out_fltr[i], remove_out_fltr[i + 1])

        # physically drop the pruned output channels (and the matching bias entries)
        for k in reversed(remove_out_fltr[i + 1]):
            sh = wt.shape
            wt = np.concatenate((wt[0:k, :, :, :], wt[k + 1:sh[0], :, :, :]), axis=0)
            sh = bias.shape
            bias = np.concatenate((bias[0:k], bias[k + 1:sh[-1]]))

        # physically drop the pruned input channels
        for k in reversed(remove_out_fltr[i]):
            sh = wt.shape
            wt = np.concatenate((wt[:, 0:k, :, :], wt[:, k + 1:sh[1], :, :]), axis=1)

        W_init.append(wt)
        B_init.append(bias)
        Conv_chs.append(wt.shape[0])

I believe changes must be made here as well:

    # gate (zero out) individual FC weights below epsilon instead of removing whole units
    gates = []
    for i, layer in enumerate(net.fc_layers):
        wt = layer.weight.cpu().data.numpy()
        gate = np.ones(wt.shape)

        for j in range(wt.shape[0]):
            for k in range(wt.shape[1]):
                if wt[j][k] <= epsilon:
                    wt[j][k] = 0.0
                    gate[j][k] = 0.0
        W_init.append(wt)
        gates.append(torch.tensor(gate, device=dev))

    return Conv_chs, W_init, B_init, gates
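
What I think might work (a rough sketch I have not verified): when an output channel of the last conv layer is removed, the matching 4*4 block of fc1 input columns should be removed as well before fc1's weight is appended to W_init. Here removed_chs would be remove_out_fltr[len(net.conv_layers)] from the function above, and shrink_fc1_weight is just a helper name I made up:

import numpy as np

def shrink_fc1_weight(fc1_wt, removed_chs, spatial=4 * 4):
    """Drop the fc1 input columns that belonged to pruned conv2 output channels.

    fc1_wt: (500, old_chs * 16) array; removed_chs: out-channel indices removed from conv2.
    Assumes the flatten order is channel-major, i.e. column // 16 gives the channel index.
    """
    keep = [c for c in range(fc1_wt.shape[1]) if c // spatial not in removed_chs]
    return fc1_wt[:, keep]

# e.g. removing 2 of 50 channels leaves 48 * 16 = 768 columns,
# which matches nn.Linear(Conv_chs[1] * 4 * 4, 500) once Conv_chs[1] becomes 48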

0 Answers