conv1d performs a 2D convolution by default

Time: 2020-05-28 10:48:05

Tags: python pytorch cnn

Most CNN guides explain this operation as a 1D convolution: a set of one-dimensional kernels convolved with the input sequence (just like a traditional FIR filter). However, as far as I can tell, the default behavior of conv1d convolves across all input channels for each output channel (essentially a 2D convolution). To get the traditional per-channel FIR filter behavior, you have to specify groups = in_channels.

Inspecting the weights seems to confirm this:

from torch import nn

C1 = nn.Conv1d(in_channels=3, out_channels=6, kernel_size=7)
C2 = nn.Conv1d(in_channels=3, out_channels=6, kernel_size=7, groups=3)
C3 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=7)
C4 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=7, groups=3)

print(C1.weight.shape, '<-- 6 filters which convolve across two dimensions')
print(C2.weight.shape, '<-- 6 filters which convolve across one dimension')
print(C3.weight.shape, '<-- 6 filters which convolve across three dimensions')
print(C4.weight.shape, '<-- 6 filters which convolve across two dimensions')

which gives the following output:

torch.Size([6, 3, 7]) <-- 6 filters which convolve across two dimensions
torch.Size([6, 1, 7]) <-- 6 filters which convolve across one dimension
torch.Size([6, 3, 7, 7]) <-- 6 filters which convolve across three dimensions
torch.Size([6, 1, 7, 7]) <-- 6 filters which convolve across two dimensions

Am I wrong in this observation?

If I'm right, I think the name conv1d is rather confusing, since it implies a 1D convolution.
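As a sketch of the claim above: with groups=in_channels, each filter sees only a single input channel, so the result should match a plain per-channel FIR filter. The check below compares one output channel of such a depthwise Conv1d against NumPy's cross-correlation (which is what deep-learning "convolution" actually computes); the tensor shapes and tolerance here are illustrative choices, not from the original post.

```python
import numpy as np
import torch
from torch import nn

# Depthwise conv: groups == in_channels, so each kernel touches one channel.
torch.manual_seed(0)
conv = nn.Conv1d(in_channels=3, out_channels=3, kernel_size=7,
                 groups=3, bias=False)
x = torch.randn(1, 3, 20)
y = conv(x)

# Output channel 0 should equal a 1D cross-correlation of input channel 0
# with the first (single-channel) kernel -- i.e. a classic FIR filter.
k = conv.weight[0, 0].detach().numpy()               # shape (7,)
ref = np.correlate(x[0, 0].numpy(), k, mode='valid')  # length 20 - 7 + 1 = 14
print(np.allclose(y[0, 0].detach().numpy(), ref, atol=1e-5))  # True
```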

1 Answer:

Answer 0 (score: 0)

A few things to consider:

1) Conv1d runs its convolution with 1-dimensional (vector) kernels. C1 and C2 have kernel size (7,).

2) Conv2d runs its convolution with 2-dimensional (matrix) kernels. C3 and C4 have kernel size (7, 7).

3) groups is a way to control the connections between input and output channels, letting you run several independent convolutions side by side.
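To illustrate point 3: with in_channels=3, out_channels=6 and groups=3 (the C2-style configuration above), the 6 output channels split into 3 groups of 2, and each group is connected to exactly one input channel. Zeroing input channel 0 should therefore change only output channels 0 and 1. This is a hypothetical check, not from the answer; the seed and tensor sizes are arbitrary.

```python
import torch
from torch import nn

torch.manual_seed(0)
conv = nn.Conv1d(in_channels=3, out_channels=6, kernel_size=7,
                 groups=3, bias=False)
x = torch.randn(1, 3, 20)
x_zeroed = x.clone()
x_zeroed[0, 0] = 0.0  # wipe out input channel 0 only

y = conv(x)
y_zeroed = conv(x_zeroed)

# Which output channels were affected by zeroing input channel 0?
changed = (y != y_zeroed).any(dim=-1)[0]
print(changed.tolist())  # [True, True, False, False, False, False]
```

Only the first group of output channels depends on the first input channel, confirming that groups partitions the channel connections rather than mixing everything together.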

More information here