The dimensionality of my PyTorch input is not what the model expects, and I am not sure why.
As I understand it...
in_channels
is, for the first layer, the number of 1D input channels we pass to the model; for every subsequent layer it is the previous layer's out_channels.
out_channels
is the number of kernels (filters) we want.
kernel_size
is the number of parameters per filter.
So, as the data is passed forward, we would expect a dataset with 7 1D channels (i.e. a 2D input) to work.
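To make that concrete, here is a minimal sketch (my own illustration, not part of the original question) of how these three parameters determine a Conv1d layer's weight shape:
import torch

# Conv1d stores its filters as (out_channels, in_channels, kernel_size);
# this is the 3-dimensional weight [20, 7, 5] that appears in the error below.
conv = torch.nn.Conv1d(in_channels=7, out_channels=20, kernel_size=5, stride=2)
print(conv.weight.shape)  # torch.Size([20, 7, 5])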
However, the following code throws an error that does not match my expectation:
import numpy
import torch

X = numpy.random.uniform(-10, 10, 70).reshape(-1, 7)
# Y = np.random.randint(0, 9, 10).reshape(-1, 1)

class Simple1DCNN(torch.nn.Module):
    def __init__(self):
        super(Simple1DCNN, self).__init__()
        self.layer1 = torch.nn.Conv1d(in_channels=7, out_channels=20, kernel_size=5, stride=2)
        self.act1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1)
    def forward(self, x):
        x = self.layer1(x)
        x = self.act1(x)
        x = self.layer2(x)

        log_probs = torch.nn.functional.log_softmax(x, dim=1)

        return log_probs

model = Simple1DCNN()
print(model(torch.tensor(X)).size)
It raises the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-eca5856a2314> in <module>()
21
22 model = Simple1DCNN()
---> 23 print(model(torch.tensor(X)).size)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
<ipython-input-5-eca5856a2314> in forward(self, x)
12 self.layer2 = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1)
13 def forward(self, x):
---> 14 x = self.layer1(x)
15 x = self.act1(x)
16 x = self.layer2(x)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
185 def forward(self, input):
186 return F.conv1d(input, self.weight, self.bias, self.stride,
--> 187 self.padding, self.dilation, self.groups)
188
189
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [20, 7, 5], but got 2-dimensional input of size [10, 7] instead
EDIT: Prompted by Shai, see the updated code below.
import numpy
import torch

X = numpy.random.uniform(-10, 10, 70).reshape(1, 7, -1)
# Y = np.random.randint(0, 9, 10).reshape(1, 1, -1)

class Simple1DCNN(torch.nn.Module):
    def __init__(self):
        super(Simple1DCNN, self).__init__()
        self.layer1 = torch.nn.Conv1d(in_channels=7, out_channels=20, kernel_size=5, stride=2)
        self.act1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1)
    def forward(self, x):
        x = self.layer1(x)
        x = self.act1(x)
        x = self.layer2(x)

        log_probs = torch.nn.functional.log_softmax(x, dim=1)

        return log_probs

model = Simple1DCNN().double()
print(model(torch.tensor(X)).shape)
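(For reference, with X shaped (1, 7, 10) this version runs without error and prints torch.Size([1, 10, 3]): kernel_size=5 with stride=2 shrinks the length from 10 to 3, and the second convolution with kernel_size=1 keeps it at 3.)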
Answer 0 (score: 1)
You are missing the "minibatch dimension". Each "1D" sample really has two dimensions: the number of channels (7 in your example) and the length (10 in your case). However, pytorch expects as input not a single sample, but rather a minibatch of B samples stacked together along the "minibatch dimension".
So a "1D" CNN in pytorch expects a 3D tensor as input: B x C x T. If you only have one signal, you can add a singleton dimension:
out = model(torch.tensor(X)[None, ...])
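As a quick sanity check, here is a minimal sketch of that fix (my own example: it assumes X holds a single sample shaped channels x length, i.e. (7, 10), and uses .double() as in the edited code above so the weights match numpy's float64):
import numpy
import torch

X = numpy.random.uniform(-10, 10, 70).reshape(7, 10)  # one sample: (C=7, T=10)
model = Simple1DCNN().double()                         # match numpy's float64 dtype
out = model(torch.tensor(X)[None, ...])                # prepend batch dim -> (1, 7, 10)
print(out.shape)                                       # torch.Size([1, 10, 3])
Indexing with [None, ...] is equivalent to .unsqueeze(0): it adds the singleton minibatch dimension B = 1 in front of C x T.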