I am new to deep learning and PyTorch. I want to see the filters in my CNN model, so I tried to iterate over the layers of the CNN model I defined, but I ran into the error below.
Error:
'CNN' object is not iterable
The CNN object is my model.
My iteration code is as follows:
for index, layer in enumerate(self.model):
    # forward pass layer by layer
    x = layer(x)
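For reference, nn.Module itself is not iterable; its submodules are exposed through .children() instead. A minimal sketch of listing the layers that way, assuming self.model is an instance of the CNN class defined below:

# nn.Module is not iterable, but .children() yields the registered submodules
for index, layer in enumerate(self.model.children()):
    print(index, layer)
# note: running x = layer(x) in such a loop would also need the
# x.view(x.size(0), -1) flatten from forward() before the first Linear layer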
My model code is as follows:
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.Conv1 = nn.Sequential(      # input image size (1, 28, 20)
            nn.Conv2d(1, 16, 5, 1, 2),   # output size (16, 28, 20)
            nn.ReLU(),
            nn.MaxPool2d(2),             # output size (16, 14, 10)
        )
        self.Conv2 = nn.Sequential(      # input size (16, 14, 10)
            nn.Conv2d(16, 32, 5, 1, 2),  # output size (32, 14, 10)
            nn.ReLU(),
            nn.MaxPool2d(2),             # output size (32, 7, 5)
        )
        self.fc1 = nn.Linear(32 * 7 * 5, 800)
        self.fc2 = nn.Linear(800, 500)
        self.fc3 = nn.Linear(500, 10)
        #self.fc4 = nn.Linear(200, 10)

    def forward(self, x):
        x = self.Conv1(x)
        x = self.Conv2(x)
        x = x.view(x.size(0), -1)
        x = self.fc1(x)
        x = F.dropout(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.dropout(x)
        x = F.relu(x)
        x = self.fc3(x)
        #x = F.relu(x)
        #x = self.fc4(x)
        return x
Can anyone tell me how to solve this problem?
Sorry for my bad English.
Answer 0 (score: 5)
First, let me state some facts so that there is no confusion. A convolutional layer (also called a filter) is composed of kernels. When we say that we use a kernel size of 3 or (3, 3), the actual shape of the kernel is 3-D, not 2-D: the depth of a kernel matches the number of channels in the input to the convolutional layer. For example,
input image shape (C x H x W): (3, 128, 128); now we apply a conv layer with 128 output channels and a kernel size of 8:
self.conv1 = nn.Conv2d(in_channels=3, out_channels=128, kernel_size=8, stride=4, padding=2)
The output shape will be (128, 32, 32),
the shape of each kernel will be (3, 8, 8), and
the shape of the filters will be (num_kernels, kernel_depth, kernel_height, kernel_width): (128, 3, 8, 8).
The number of kernels in the filters is the same as the number of output channels.
The filters of the first layer have a depth dimension of 1 or 3, depending on whether the input is a grayscale or a colour image, so they are easy to visualise.
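As a quick sanity check on these shapes, here is a minimal self-contained sketch (the layer parameters are the ones from the example above):

import torch
import torch.nn as nn

conv1 = nn.Conv2d(in_channels=3, out_channels=128, kernel_size=8, stride=4, padding=2)
x = torch.randn(1, 3, 128, 128)  # one dummy (3, 128, 128) image
print(conv1(x).shape)            # torch.Size([1, 128, 32, 32])
print(conv1.weight.shape)        # torch.Size([128, 3, 8, 8])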
import torch
import torchvision
import matplotlib.pyplot as plt
from torchvision.utils import save_image

# instantiate the model
conv = ConvModel()

# load weights if they haven't been loaded;
# skip if you're directly importing a pretrained network
checkpoint = torch.load('model_weights.pt')
conv.load_state_dict(checkpoint)

# get the kernels from the first layer,
# as per the name of the layer
kernels = conv.first_conv_layer.weight.detach().clone()

# check the size for a sanity check
print(kernels.size())

# normalize to the (0, 1) range so that matplotlib can plot them
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
filter_img = torchvision.utils.make_grid(kernels, nrow=12)

# change the ordering since matplotlib requires images to be (H, W, C)
plt.imshow(filter_img.permute(1, 2, 0))

# you can also save the grid of kernels directly as an image
save_image(kernels, 'encoder_conv1_filters.png', nrow=12)
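For the CNN in the question, the first convolutional layer sits at index 0 of the Conv1 Sequential block, so (assuming conv is an instance of that CNN class) the corresponding line would be:

kernels = conv.Conv1[0].weight.detach().clone()  # shape (16, 1, 5, 5)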
Answer 1 (score: 2)
import numpy as np
import matplotlib.pyplot as plt

def imshow_filter(filters, row, col):
    print('-------------------------------------------------------------')
    plt.figure()
    for i in range(len(filters)):
        w = np.array([0.299, 0.587, 0.114])  # weights for RGB
        img = filters[i]
        # swap the colour axis because
        # numpy images are H x W x C
        # while torch images are C x H x W
        img = np.transpose(img, (1, 2, 0))
        img = (img - img.min()) / (img.max() - img.min())
        img = np.dot(img, w)  # collapse the 3 input channels to grayscale
        plt.subplot(row, col, i + 1)
        plt.imshow(img, cmap='gray')
        plt.xticks([])
        plt.yticks([])
    plt.show()

filters = net.conv1.weight.data.cpu().numpy()
imshow_filter(filters, 4, 4)  # choose row * col >= number of filters
This should work with your code.
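Note that the np.dot(img, w) step assumes the layer has three input channels (RGB). For a single-channel first layer like the one in the question (weights of shape (16, 1, 5, 5)), a minimal sketch would plot each 2-D kernel directly, assuming net is an instance of the question's CNN:

filters = net.Conv1[0].weight.data.cpu().numpy()  # shape (16, 1, 5, 5)
plt.figure()
for i, f in enumerate(filters):
    plt.subplot(4, 4, i + 1)       # 16 filters fit a 4 x 4 grid
    plt.imshow(f[0], cmap='gray')  # drop the singleton channel axis
    plt.xticks([])
    plt.yticks([])
plt.show()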
Answer 2 (score: 0)
Essentially, you need to access the features in your model and transpose those matrices into the right shape first; only then can you visualise the filters.
import numpy as np
import matplotlib.pyplot as plt
from torchvision import utils

def visTensor(tensor, ch=0, allkernels=False, nrow=8, padding=1):
    n, c, h, w = tensor.shape
    if allkernels:
        # plot every channel of every kernel as its own grayscale tile
        tensor = tensor.view(n * c, -1, h, w)
    elif c != 3:
        # otherwise keep a single channel per kernel
        tensor = tensor[:, ch, :, :].unsqueeze(dim=1)

    rows = np.min((tensor.shape[0] // nrow + 1, 64))
    grid = utils.make_grid(tensor, nrow=nrow, normalize=True, padding=padding)
    plt.figure(figsize=(nrow, rows))
    plt.imshow(grid.numpy().transpose((1, 2, 0)))

if __name__ == "__main__":
    layer = 1  # index of a conv layer inside model.features
    filter = model.features[layer].weight.data.clone()
    visTensor(filter, ch=0, allkernels=False)

    plt.axis('off')
    plt.ioff()
    plt.show()
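The question's model has no features attribute, so, assuming model is an instance of the CNN class from the question, the equivalent call would be:

filter = model.Conv1[0].weight.data.clone()  # first conv layer, shape (16, 1, 5, 5)
visTensor(filter, ch=0, allkernels=False)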
There are more visualisation techniques, and you can look into them here.