I ran this code 3 months ago and got the expected results. Nothing has changed since. I tried troubleshooting with code from several earlier versions, including the very first one (which definitely worked). The problem persists.
# Imports used by the snippets below
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

# 4 - Constructing the undercomplete architecture
class autoenc(nn.Module):

    def __init__(self, nodes = 100):
        super(autoenc, self).__init__()  # inheritance
        self.full_connection0 = nn.Linear(784, nodes)  # encoding weights
        self.full_connection1 = nn.Linear(nodes, 784)  # decoding weights
        self.activation = nn.Sigmoid()

    def forward(self, x):
        x = self.activation(self.full_connection0(x))  # input encoding
        x = self.full_connection1(x)                   # output decoding
        return x
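A quick shape sanity check for the architecture (a sketch, not from the original post; the dummy batch of flattened 28x28 images is hypothetical):

dummy = torch.rand(8, 784)         # hypothetical batch of 8 flattened images
reconstruction = autoenc()(dummy)
print(reconstruction.shape)        # torch.Size([8, 784]): same shape in and out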
# 5 - Initializing autoencoder, squared L2 norm, and optimization algorithm
model = autoenc().cuda()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(),
                       lr = 1e-3, weight_decay = 1/2)
# 6 - Training the undercomplete autoencoder model
num_epochs = 500
batch_size = 32
length = int(len(trn_data) / batch_size)
loss_epoch1 = []

for epoch in range(num_epochs):
    train_loss = 0
    score = 0.

    for num_data in range(length - 2):
        batch_ind = (batch_size * num_data)
        input = Variable(trn_data[batch_ind : batch_ind + batch_size]).cuda()

        # === forward propagation ===
        output = model(input)
        loss = criterion(output, trn_data[batch_ind : batch_ind + batch_size])

        # === backward propagation ===
        loss.backward()

        # === calculating epoch loss ===
        train_loss += np.sqrt(loss.item())
        score += 1.  # <- add for average loss error instead of total
        optimizer.step()

    loss_calculated = train_loss / score
    print('epoch: ' + str(epoch + 1) + ' loss: ' + str(loss_calculated))
    loss_epoch1.append(loss_calculated)
When I plot the loss now, it oscillates wildly (at lr = 1e-3), whereas 3 months ago it was converging steadily (at the same lr = 1e-3).
I can't upload images because my account was created recently.
And that is even after lowering the learning rate to 1e-5; at 1e-3 it is all over the place.
How it should look, and how it used to look, at lr = 1e-3.
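For context, the loss curve is plotted from loss_epoch1 along these lines (a minimal sketch, assuming matplotlib is available; not part of the original code):

import matplotlib.pyplot as plt

# Per-epoch average RMSE collected during training
plt.plot(range(1, len(loss_epoch1) + 1), loss_epoch1)
plt.xlabel('epoch')
plt.ylabel('average RMSE')
plt.show()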
Answer 0 (score: 0)
Since gradients accumulate, you should call optimizer.zero_grad() before loss.backward(). This is most likely the cause of the problem.
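To see the accumulation concretely, here is a toy illustration (a sketch, not from the answer):

w = torch.ones(1, requires_grad=True)

loss = (w * 2).sum()
loss.backward()
print(w.grad)   # tensor([2.])

# A second backward pass without zeroing adds onto the
# existing gradient instead of replacing it.
loss = (w * 2).sum()
loss.backward()
print(w.grad)   # tensor([4.])

w.grad.zero_()  # what optimizer.zero_grad() does for every parameter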
The general sequence to follow in the training phase:
optimizer.zero_grad()
output = model(input)
loss = criterion(output, label)
loss.backward()
optimizer.step()
Additionally, the weight decay value used (1/2) is causing problems: a decay that large heavily penalizes the weights themselves and can overwhelm the reconstruction loss, so use a much smaller value or drop it entirely.
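Putting both fixes together, the inner training loop could look like this (a sketch reusing trn_data, model, criterion, and the loop bounds from the question; weight_decay = 1e-5 is an illustrative value, not taken from the answer):

optimizer = optim.Adam(model.parameters(), lr = 1e-3, weight_decay = 1e-5)

for epoch in range(num_epochs):
    train_loss = 0
    score = 0.

    for num_data in range(length - 2):
        batch_ind = batch_size * num_data
        batch = trn_data[batch_ind : batch_ind + batch_size].cuda()

        optimizer.zero_grad()            # clear accumulated gradients first
        output = model(batch)
        loss = criterion(output, batch)  # target on the same device as the output
        loss.backward()
        optimizer.step()

        train_loss += np.sqrt(loss.item())
        score += 1.

    loss_epoch1.append(train_loss / score)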