I am trying to create a neural network whose input has shape (249561, 80, 1), with y labels of shape (249561, 2).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net1(nn.Module):
    def __init__(self):
        super(Net1, self).__init__()
        self.conv1 = nn.Conv1d(80, 16, kernel_size=1)
        self.conv2_drop = nn.Dropout()
        self.fc1 = nn.Linear(1, 256)
        self.fc2 = nn.Linear(256, 64)
        self.fc3 = nn.Linear(64, 32)
        self.fc4 = nn.Linear(32, 2)

    def forward(self, x):
        print(type(x))
        x = F.relu(F.max_pool1d(self.conv1(x), 1))
        print(x.shape)
        x.reshape(-1)
        e1 = F.relu(self.fc1(x))
        x = F.dropout(e1, training=self.training)
        x = F.relu(self.fc2(x))
        x = F.dropout(x, training=self.training)
        x = F.relu(self.fc3(x))
        x = F.dropout(x, training=self.training)
        x = self.fc4(x)
        return x
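For reference, here is a minimal shape trace of this forward pass, assuming a batch of 16 and the imports and class definition shown above. nn.Linear only transforms the last dimension of its input, and the bare x.reshape(-1) call is never assigned back to x, so it does not change any shape:

    x = torch.randn(16, 80, 1)                  # [batch, in_channels, length]
    net = Net1()
    h = F.relu(F.max_pool1d(net.conv1(x), 1))   # -> [16, 16, 1]
    h = F.relu(net.fc1(h))                      # -> [16, 16, 256]  (Linear acts on the last dim)
    out = net(x)                                # -> [16, 16, 2], i.e. torch.Size([16, 16, 2])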
My training loop looks like this:
losses = []
batch_size = 16
for epoch in range(10):
    permutation = torch.randperm(x2.size()[0])
    for i in range(0, len(x2), batch_size):
        indices = permutation[i:i+batch_size]
        batch_x, batch_y = x2[indices], onehot_encoded[indices]
        # images = Variable(images.float())
        # labels = Variable(labels)
        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
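The loop refers to model, criterion, optimizer, x2, and onehot_encoded, which are not shown in the post. Purely as an assumption to make the snippet self-contained (the loss is guessed from the NLL error below, and the dummy tensors only mirror the shapes stated at the top), a setup that would fit this loop might look like:

    # Hypothetical setup matching the stated shapes; not taken from the original post.
    x2 = torch.randn(249561, 80, 1)                            # inputs of shape (249561, 80, 1)
    labels = torch.randint(0, 2, (249561,))                    # placeholder class ids
    onehot_encoded = F.one_hot(labels, num_classes=2).float()  # float one-hot labels, shape (249561, 2)

    model = Net1()
    criterion = nn.CrossEntropyLoss()   # assumption: an NLL-based loss, which expects a Long target, consistent with the error below
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)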
With a batch size of 16, the input tensor has shape [16, 80, 1], and I get the following error: RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss2d_forward. I suspect this is a problem with the output layer, but it returns a tensor of size 2, which matches my labels. The printed output x has size torch.Size([16, 16, 2]).
Answer (score: 0)
Instead of changing the input, why not use nn.Conv1d (replacing nn.Conv2d at the same time; you would also need to change the dropout accordingly)?
If you really want to change the input, you can add:

    batch_x = batch_x[..., None]

right after

    batch_x, batch_y = x2[indices], onehot_encoded[indices]
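For concreteness, this is where the suggested line would sit in the training loop above; batch_x[..., None] is equivalent to batch_x.unsqueeze(-1), i.e. it appends a trailing dimension of size 1:

        indices = permutation[i:i+batch_size]
        batch_x, batch_y = x2[indices], onehot_encoded[indices]
        batch_x = batch_x[..., None]   # suggested line: appends a trailing size-1 dimension

        optimizer.zero_grad()
        outputs = model(batch_x)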