Convolutional neural network with fixed Gabor filters

Date: 2019-04-09 11:57:40

Tags: image-processing keras conv-neural-network pytorch gabor-filter

I am trying to build a CNN with some conv layers in which half of the filters in the layer are fixed and the other half are learnable while training the model. I haven't found anything about this.

What I am trying to do is similar to what they did in this paper: https://arxiv.org/pdf/1705.04748.pdf

Is there a way to do this in Keras or PyTorch?

2 Answers:

Answer 0 (score: 1)

Here is a question I asked on Stack Exchange that is related to yours; you can refer to it for additional information.

To avoid having to build a custom layer that allows partial freezing, it is probably easier to create two layers, one frozen and one not. The next layer can then be connected to both of them, and the rest of the network stays the same. You can then use some transfer learning to copy the first layer of a pre-trained network into the frozen layer. For this you can use the Keras functional API.

Here is a simple example of how this could work.

from tensorflow.python.keras import layers, Model
from tensorflow.python.keras.applications import InceptionV3

# Sample CNN: one frozen conv branch and one trainable conv branch,
# both reading from the same input and concatenated afterwards
input_layer = layers.Input(shape=(224, 224, 3))
frozen_layer = layers.Conv2D(32, kernel_size=(3, 3), use_bias=False,
                             trainable=False, name="frozen_layer")(input_layer)
thawed_layer = layers.Conv2D(32, kernel_size=(3, 3), trainable=True)(input_layer)
concat = layers.concatenate([frozen_layer, thawed_layer])
another_layer = layers.Conv2D(64, kernel_size=(3, 3), trainable=True)(concat)
output_layer = layers.Dense(10)(another_layer)
model = Model(inputs=[input_layer], outputs=[output_layer])

# Build a pre-trained model to extract weights from
transfer_model = InceptionV3(weights='imagenet', include_top=False)

assert transfer_model.layers[1].get_weights()[0].shape == model.get_layer(name="frozen_layer").get_weights()[0].shape

# Transfer the weights 
model.get_layer(name="frozen_layer").set_weights(transfer_model.layers[1].get_weights())
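
Since the question is specifically about Gabor filters, you could also fill the frozen layer with a bank of Gabor kernels instead of (or in addition to) the Inception weights. Below is a minimal sketch, assuming OpenCV is available for cv2.getGaborKernel; the gabor_bank helper and the filter parameters (sigma, lambda, gamma, orientations) are only illustrative choices.

import numpy as np
import cv2

# Hypothetical helper: builds a kernel bank with the Keras weight layout
# (kernel_h, kernel_w, in_channels, filters) expected by the frozen layer above.
def gabor_bank(ksize=3, in_channels=3, n_filters=32):
    thetas = np.linspace(0, np.pi, n_filters, endpoint=False)
    bank = np.zeros((ksize, ksize, in_channels, n_filters), dtype=np.float32)
    for i, theta in enumerate(thetas):
        # arguments: ksize, sigma, theta, lambda, gamma, psi
        kern = cv2.getGaborKernel((ksize, ksize), 1.0, theta, 2.0, 0.5, 0)
        bank[:, :, :, i] = kern[:, :, None]  # same kernel for every input channel
    return bank

# The layer was built with use_bias=False, so it expects a single weight array
model.get_layer(name="frozen_layer").set_weights([gabor_bank()])

Because the layer was created with trainable=False, these kernels stay fixed while the rest of the network trains.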

Answer 1 (score: 1)

Sure. In PyTorch you can use nn.Conv2d and:

  1. manually set its weight parameter to your desired filters
  2. exclude those weights from learning

A simple example would be:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()

        self.conv_learning = nn.Conv2d(1, 5, 3, bias=False)
        self.conv_gabor = nn.Conv2d(5, 5, 3, bias=False)
        # weights HAVE TO be wrapped in `nn.Parameter` even if they are not learning;
        # the layout is (out_channels, in_channels, kernel_h, kernel_w) -- replace the
        # random values with your precomputed Gabor kernels
        self.conv_gabor.weight = nn.Parameter(torch.randn(5, 5, 3, 3))

    def forward(self, x):
        y = self.conv_learning(x)
        y = torch.sigmoid(y)
        y = self.conv_gabor(y)

        # one scalar per sample, so the output shape matches the targets `ys`
        return y.mean(dim=(1, 2, 3))

model = Model()
xs = torch.randn(10, 1, 30, 30)
ys = torch.randn(10)
loss_fn = nn.MSELoss()

# we can exclude parameters from being learned here, by filtering them
# out based on some criterion. For instance if all your fixed filters have
# "gabor" in name, the following will do
learning_parameters = (param for name, param in model.named_parameters()
                             if 'gabor' not in name)
optim = torch.optim.SGD(learning_parameters, lr=0.1)

epochs = 10
for e in range(epochs):
    y = model(xs)
    loss = loss_fn(y, ys)

    model.zero_grad()
    loss.backward()
    optim.step()
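
As a side note, instead of filtering parameters by name you can switch off gradients on the fixed weights and hand only the remaining parameters to the optimizer. Here is a rough sketch reusing Model, xs, ys and loss_fn from above; the random values copied into the weight simply stand in for real Gabor kernels.

model = Model()

with torch.no_grad():
    # copy your precomputed Gabor kernels here; the shape must be
    # (out_channels, in_channels, kernel_h, kernel_w)
    model.conv_gabor.weight.copy_(torch.randn_like(model.conv_gabor.weight))
model.conv_gabor.weight.requires_grad_(False)

# only parameters that still require gradients are handed to the optimizer
optim = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)

fixed_before = model.conv_gabor.weight.clone()
for e in range(10):
    loss = loss_fn(model(xs), ys)
    optim.zero_grad()
    loss.backward()
    optim.step()

# the fixed filters are untouched by training
assert torch.equal(fixed_before, model.conv_gabor.weight)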