if torch.cuda.is_available():
    for epoch in range(epoch_num):
        for i, (images, labels) in enumerate(trainloader):
            images = images.to(device)
            labels = labels.to(device)
            optimizer.zero_grad()
            # forward, backward, optimize
            outputs = net(images)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            # print statistics
            if i % 1000 == 0:
                print('Epoch: %d, mini-batch: %d' % (epoch + 1, i))
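(For context: the definitions of device, net, criterion, optimizer and trainloader, as well as the multi-GPU wrapping, are not shown in this post. A minimal sketch of the kind of setup assumed by the loop above, using torch.nn.DataParallel, could look like the following; the stand-in model and the hyperparameters are assumptions, not the actual code.)

import torch
import torch.nn as nn
import torch.optim as optim

# Assumed setup sketch: none of these definitions appear in the original snippet.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Stand-in model (assumes 3x32x32 inputs and 10 classes); the real architecture is not shown.
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
if torch.cuda.device_count() > 1:
    net = nn.DataParallel(net)   # split each mini-batch across the available GPUs
net = net.to(device)

criterion = nn.CrossEntropyLoss()                                # assumed classification loss
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)  # assumed optimizer and learning rate
epoch_num = 5000                                                 # epoch count quoted below
# trainloader / testloader are assumed to be ordinary torch.utils.data.DataLoader objects (batch_size=128).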
The code above is the training part. I use four GPUs for training. After running 5000 epochs with a batch size of 128, the accuracy only reaches "10%", which is far too low! Here is the test part:
with torch.no_grad():
    num_correct = 0
    total_data = 0
    if torch.cuda.is_available():
        for images, labels in testloader:
            images = images.to(device)
            labels = labels.to(device)
            output = net(images)
            _, expected = torch.max(output.data, 1)
            total_data += labels.size(0)
            num_correct += (expected == labels).sum().item()
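(The "10%" figure quoted above comes from these two counters; the final computation is not in the snippet, but it is presumably something along these lines.)

# Assumed final step, not shown in the original snippet.
accuracy = 100.0 * num_correct / total_data
print('Test accuracy: %.2f %%' % accuracy)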
I don't know what is going wrong. How should I investigate this?
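A first check that could help (a sketch, reusing the net, criterion, optimizer and trainloader from the training part above): log the running loss inside the loop and see whether it actually decreases. If this is a 10-class problem, roughly 10% accuracy is chance level, which usually means the network is still guessing at random.

# Sketch of a loss-logging loop; running_loss is introduced here, not taken from the original post.
running_loss = 0.0
for i, (images, labels) in enumerate(trainloader):
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(net(images), labels)
    loss.backward()
    optimizer.step()
    running_loss += loss.item()
    if i % 1000 == 999:
        # Average loss over the last 1000 mini-batches; it should trend downward if training works.
        print('mini-batch %d, average loss: %.4f' % (i + 1, running_loss / 1000))
        running_loss = 0.0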