I am training an autoencoder for a multi-class classification problem in which I transmit 16 equiprobable messages, send them through a denoising autoencoder, and recover them at the receiver. I am trying to reproduce the results of this paper (a modified version of Fig. 3b); specifically, see Fig. 2 of https://arxiv.org/pdf/1702.00832.pdf.
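For context, the 16 messages are one-hot encoded over 2 ** k = 16 classes; I am assuming k = 4 and n_channel = 7 below, to match the 16 messages and the sqrt(7) normalization in my code. My data generation looks roughly like this (dataset size and batch size are placeholders, not my exact values):

import torch
from torch.utils.data import DataLoader, TensorDataset

k = 4
batch_size = 32    # placeholder
num_train = 51200  # placeholder dataset size
train_labels = torch.randint(0, 2 ** k, (num_train,))  # uniform message indices
train_data = torch.eye(2 ** k)[train_labels]           # one-hot encode each message
trainloader = DataLoader(TensorDataset(train_data, train_labels),
                         batch_size=batch_size, shuffle=True)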
Here is my autoencoder class:
import torch
import torch.nn as nn
from math import sqrt

class FullyConnectedAutoencoder(nn.Module):
    def __init__(self, k, n_channel, EbN0_dB):
        super(FullyConnectedAutoencoder, self).__init__()
        self.k = k
        self.n_channel = n_channel
        self.EbN0_dB = EbN0_dB
        self.transmitter = nn.Sequential(
            nn.Linear(in_features=2 ** k, out_features=2 ** k, bias=True),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=2 ** k, out_features=n_channel, bias=True))
        self.receiver = nn.Sequential(
            nn.Linear(in_features=n_channel, out_features=2 ** k, bias=True),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=2 ** k, out_features=2 ** k, bias=True))

    def forward(self, x):
        x = self.transmitter(x)
        # Energy normalization: scale each codeword to norm sqrt(n_channel)
        n = x.norm(dim=-1, keepdim=True).expand_as(x)
        x = sqrt(self.n_channel) * (x / n)
        # AWGN channel at the Eb/N0 the model was constructed with
        training_SNR = 10 ** (self.EbN0_dB / 10)
        R = self.k / self.n_channel  # communication rate k/n
        noise = torch.randn(x.size()) / ((2 * R * training_SNR) ** 0.5)
        x = x + noise
        x = self.receiver(x)
        return x
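For completeness, the surrounding setup looks roughly like this. I construct the network at a fixed 3 dB training Eb/N0 (the value noted in my original code); the number of epochs and the learning rate here are placeholders:

import torch.optim as optim

k = 4            # bits per message, so 2 ** k = 16 messages
n_channel = 7    # channel uses per message
epochs = 100     # placeholder
net = FullyConnectedAutoencoder(k, n_channel, EbN0_dB=3.0)  # train at 3 dB
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)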
My training loop is as follows:
# TRAINING
for epoch in range(epochs):
    for step, (x, y) in enumerate(trainloader):  # gives batch data
        # Forward pass
        output = net(x)
        y = y.long().view(-1)
        loss = loss_func(output, y)  # cross-entropy loss
        # Backward and optimize
        optimizer.zero_grad()  # clear gradients for this training step
        loss.backward()        # backpropagation: compute gradients
        optimizer.step()       # apply gradients
        if step % 100 == 0:
            train_output = net(train_data)
            pred_labels = torch.max(train_output, 1)[1].data.squeeze()
            accuracy = (pred_labels == train_labels).sum().item() / float(train_labels.size(0))
            print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| train accuracy: %.4f' % accuracy)
The training loop works fine. However, I want to test my approach at different SNRs, and I am running into some problems while doing that. Here are the two methods I have tried.
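Both methods share the same test-side setup, roughly the following; the Eb/N0 sweep range and the test-set size are assumptions on my part, not my exact values:

EbNo_test = torch.arange(-4.0, 8.5, 0.5)   # Eb/N0 sweep in dB (assumed range)
test_BLER = torch.zeros(len(EbNo_test))
EcNo_test_sqrt = torch.zeros(len(EbNo_test))
R = k / n_channel                           # rate, as in forward()
num_test = 51200                            # placeholder
test_labels_all = torch.randint(0, 2 ** k, (num_test,))
test_data_all = torch.eye(2 ** k)[test_labels_all]
testloader = DataLoader(TensorDataset(test_data_all, test_labels_all),
                        batch_size=batch_size, shuffle=False)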
Method 1: declare a new object every time the autoencoder is tested (since the test Eb/N0 enters through the constructor, this seemed like the natural way to change the noise level).
for p in range(len(EbNo_test)):
    with torch.no_grad():
        for test_data, test_labels in testloader:
            net = FullyConnectedAutoencoder(k, n_channel, EbNo_test[p])
            decoded_signal = net(test_data)
            # encoded_signal = net.transmitter(test_data)
            # noisy_signal = encoded_signal + test_noise
            # decoded_signal = net.receiver(noisy_signal)
            pred_labels = torch.max(decoded_signal, 1)[1].data.squeeze()
            test_BLER[p] = sum(pred_labels != test_labels) / float(test_labels.size(0))
    print('Eb/N0:', EbNo_test[p].numpy(), '| test BLER: %.4f' % test_BLER[p])
Method 2: this is more intuitive. Use the transmitter and receiver parts separately, and add noise to the signal after it has been transmitted.
for p in range(len(EbNo_test)):
    EcNo_test_sqrt[p] = 1 / (2 * R * (10 ** (EbNo_test[p] / 20)))
    test_noise = EcNo_test_sqrt[p] * torch.randn(batch_size, n_channel)
    with torch.no_grad():
        for test_data, test_labels in testloader:
            encoded_signal = net.transmitter(test_data)
            noisy_signal = encoded_signal + test_noise
            decoded_signal = net.receiver(noisy_signal)
            pred_labels = torch.max(decoded_signal, 1)[1].data.squeeze()
            test_BLER[p] = sum(pred_labels != test_labels) / float(test_labels.size(0))
    print('Eb/N0:', EbNo_test[p].numpy(), '| test BLER: %.4f' % test_BLER[p])
The strange thing is that the results I get are wrong: the block error rate sits around 90% instead of following the decreasing trend of Fig. 3b in the paper cited above.
Am I doing something wrong? Any help would be greatly appreciated.