The following code trains an MLP as an autoencoder on 64×64 images (flattened to 4096), using the reconstruction loss ||output − input||².
For some reason, the weights I record at each epoch are not updated, as shown at the end.
class MLP(nn.Module):
    def __init__(self, size_list):
        super(MLP, self).__init__()
        layers = []
        self.size_list = size_list
        for i in range(len(size_list) - 2):
            layers.append(nn.Linear(size_list[i], size_list[i+1]))
            layers.append(nn.ReLU())
        layers.append(nn.Linear(size_list[-2], size_list[-1]))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
model_1 = MLP([4096, 64, 4096])
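As a quick sanity check (a minimal sketch, assuming torch is already imported; the batch size here is made up), the model maps a flattened 64×64 batch back to the same dimensionality:

import torch
x = torch.randn(8, 4096)    # a batch of 8 flattened 64x64 images
print(model_1(x).shape)     # torch.Size([8, 4096])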
To train each epoch:
def train_epoch(model, train_loader, criterion, optimizer):
    model.train()
    model.to(device)
    running_loss = 0.0
    start_time = time.time()
    # train batch
    for batch_idx, data in enumerate(train_loader):
        optimizer.zero_grad()
        data = data.to(device)
        outputs = model(data)
        loss = criterion(outputs, data)
        running_loss += loss.item()
        loss.backward()
        optimizer.step()
    end_time = time.time()
    weight_ll = model.net[0].weight
    running_loss /= len(train_loader)
    print('Training Loss: ', running_loss, 'Time: ', end_time - start_time, 's')
    return running_loss, outputs, weight_ll
To train on the data:
n_epochs = 20
Train_loss = []
weights = []
criterion = nn.MSELoss()
optimizer = optim.SGD(model_1.parameters(), lr=0.1)

for i in range(n_epochs):
    train_loss, output, weights_ll = train_epoch(model_1, trainloader, criterion, optimizer)
    Train_loss.append(train_loss)
    weights.append(weights_ll)
    print('='*20)
Now when I print the weights of the first fully connected layer for each epoch, they appear not to be updated:
print(weights[0][0])
print(weights[19][0])
The output of the above (showing the weights at epoch 0 and epoch 19) is:
tensor([ 0.0086, 0.0069, -0.0048, ..., -0.0082, -0.0115, -0.0133],
grad_fn=<SelectBackward>)
tensor([ 0.0086, 0.0069, -0.0048, ..., -0.0082, -0.0115, -0.0133],
grad_fn=<SelectBackward>)
What could be going wrong? Looking at my loss, it is decreasing at a steady rate, but the weights do not change.
Answer 0 (score: 0):
Try weight_ll = model.net[0].weight.clone().detach() (or just weight_ll = model.net[0].weight.clone()) in the train_epoch() function. You will then see that the weights differ across epochs.

Explanation: without cloning, weights_ll is always a reference to the live weight tensor, so every entry in your list shows the last epoch's values. In the graph it is treated as the same tensor. That is why weights[0][0] equals weights[19][0]: they are literally the same tensor.
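To illustrate the aliasing, here is a minimal self-contained sketch (not the asker's exact setup; the layer size and data are made up):

import torch
import torch.nn as nn

layer = nn.Linear(4, 4)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

ref = layer.weight                    # reference: tracks future updates
snap = layer.weight.clone().detach()  # snapshot: frozen copy of current values

x = torch.randn(8, 4)
loss = ((layer(x) - x) ** 2).mean()   # same ||output - input||^2 style loss
loss.backward()
opt.step()

print(torch.equal(ref, layer.weight))              # True:  same tensor, so it "updated" too
print(torch.equal(snap, layer.weight))             # False: the snapshot kept the old values
print(ref.data_ptr() == layer.weight.data_ptr())   # True:  ref shares the weight's storage

So storing model.net[0].weight.clone().detach() each epoch gives an independent copy that is safe to compare later.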