PyTorch | I don't know why this raises an error (beginner)

Posted: 2020-05-04 07:57:52

Tags: pytorch mnist cnn

import torch.nn as nn
import torch.nn.functional as F

## TODO: Define the NN architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # linear layer (784 -> 512 hidden nodes)
        self.fc1 = nn.Linear(28 * 28, 512)
        self.fc2 = nn.Linear(512 * 512)
        self.fc3 = nn.Linear(512 * 10)

    def forward(self, x):
        # flatten image input
        x = x.view(-1, 28 * 28)
        # add hidden layer, with relu activation function
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        return x

# initialize the NN
model = Net()
print(model)

When I run this, it raises the error below. Why?

TypeError: __init__() missing 1 required positional argument: 'out_features'

1 Answer:

Answer 0 (score: 3)

This error occurs because you have not provided the output size of the fully connected layers fc2 and fc3. nn.Linear takes two required positional arguments, in_features and out_features, but an expression like 512 * 512 evaluates to a single number, so only one argument is passed. Below is the modified code with output sizes added. I am not sure these are the sizes your architecture actually needs; I set them only for demonstration, so edit them as required.

Keep in mind that the output size of one fully connected layer must equal the input size of the next FC layer; otherwise a size-mismatch error will be raised at forward time.
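To make the signature concrete, here is a minimal illustration (assuming torch is installed) of why the original call fails and how a correct call looks:

```python
import torch
import torch.nn as nn

# nn.Linear(in_features, out_features): both positional args are required.
layer = nn.Linear(28 * 28, 512)   # OK: 784 inputs -> 512 outputs

# nn.Linear(512 * 512)  # TypeError: 512 * 512 is ONE number, so
#                       # 'out_features' is missing.

x = torch.randn(4, 28 * 28)       # batch of 4 flattened 28x28 images
print(layer(x).shape)             # torch.Size([4, 512])
```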

import torch.nn as nn
import torch.nn.functional as F

## TODO: Define the NN architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # linear layer (784 -> 512 hidden nodes)
        self.fc1 = nn.Linear(28 * 28, 512)
        self.fc2 = nn.Linear(512, 512 * 10)
        self.fc3 = nn.Linear(512 * 10, 10)

    def forward(self, x):
        # flatten image input
        x = x.view(-1, 28 * 28)
        # add hidden layer, with relu activation function
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        return x

# initialize the NN
model = Net()
print(model)
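As a quick sanity check (a minimal sketch, assuming the demonstration layer sizes from the answer above), passing a dummy MNIST-shaped batch through the model confirms that the layer sizes chain correctly end to end:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 512)   # 784 -> 512
        self.fc2 = nn.Linear(512, 512 * 10)  # 512 -> 5120
        self.fc3 = nn.Linear(512 * 10, 10)   # 5120 -> 10 classes

    def forward(self, x):
        x = x.view(-1, 28 * 28)              # flatten (N, 1, 28, 28) -> (N, 784)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        return x

model = Net()
batch = torch.randn(4, 1, 28, 28)            # dummy batch of 4 MNIST-sized images
out = model(batch)
print(out.shape)                             # torch.Size([4, 10])
```

If any pair of adjacent layers disagreed (say, fc2 output 5120 but fc3 input 512), this forward pass would raise a RuntimeError about mismatched matrix shapes instead.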