How to build a multidimensional autoencoder with PyTorch

Date: 2019-06-03 04:15:45

Tags: pytorch

For the sequence autoencoder, I followed this very good answer:

LSTM autoencoder always returns the average of the input sequence

But I ran into some problems when I tried to modify the code:

  1. Question one: Your explanation is very professional, but my problem is slightly different from yours; I have attached the code I changed from your example. My input features are 2-dimensional, and my output is the same as my input. For example:
input_x = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])
output_y = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])
input_x and output_y are the same: 5 timesteps, 2-dimensional features.

        import torch
        import torch.nn as nn
        import torch.optim as optim

        class LSTM(nn.Module):
            def __init__(self, input_dim, latent_dim, num_layers):
                super(LSTM, self).__init__()
                self.input_dim = input_dim
                self.latent_dim = latent_dim
                self.num_layers = num_layers
                self.encoder = nn.LSTM(self.input_dim, self.latent_dim, self.num_layers)

                # I changed here, to 40 dimensions; I think there is some problem
                # self.decoder = nn.LSTM(self.latent_dim, self.input_dim, self.num_layers)
                self.decoder = nn.LSTM(40, self.input_dim, self.num_layers)

            def forward(self, input):
                # Encode
                _, (last_hidden, _) = self.encoder(input)
                # It is way more general that way
                encoded = last_hidden.repeat(input.shape)
                # Decode
                y, _ = self.decoder(encoded)
                return torch.squeeze(y)

        model = LSTM(input_dim=2, latent_dim=20, num_layers=1)
        loss_function = nn.MSELoss()
        optimizer = optim.Adam(model.parameters())
        y = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])
        x = y.view(len(y), -1, 2)   # I changed here 

        while True:
            y_pred = model(x)
            optimizer.zero_grad()
            loss = loss_function(y_pred, y)
            loss.backward()
            optimizer.step()
            print(y_pred)

The code above learns well. Could you review the code and give some explanation?

When I feed 2 samples into the model as input, the model does not work:

For example, changing the code:

y = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])

to:

y = torch.Tensor([[[0.0,0.0],[0.5,0.5]], [[0.1,0.1], [0.6,0.6]], [[0.2,0.2],[0.7,0.7]], [[0.3,0.3],[0.8,0.8]], [[0.4,0.4],[0.9,0.9]]])

When I compute the loss function, it complains with an error. Can anyone help take a look?
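
For reference, a minimal sketch of how the two 5-step sequences can be stacked into the (seq_len, batch, feature) layout that nn.LSTM expects by default; the result is the same tensor as the nested literal above:

import torch

# Two sequences, each with 5 timesteps and 2 features.
seq_a = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])
seq_b = torch.Tensor([[0.5,0.5], [0.6,0.6], [0.7,0.7], [0.8,0.8], [0.9,0.9]])

# Stack along dim=1 so the layout is (seq_len, batch, feature) = (5, 2, 2),
# which is what nn.LSTM expects with the default batch_first=False.
batch = torch.stack([seq_a, seq_b], dim=1)
print(batch.shape)  # torch.Size([5, 2, 2])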

  2. Question two: My training samples have different lengths. For example:
x1 = [[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]]   #with 5 timesteps
x2 = [[0.5,0.5], [0.6,0.6], [0.7,0.7]] #with only 3 timesteps

How can I feed these two training samples into the model at the same time for batch training?

1 Answer:

Answer 0 (score: 0)

Recurrent N-dimensional autoencoder

First of all, an LSTM works on 1D samples; yours are 2D, as it is usually used for words, which are encoded with a single vector.

No worries though, you can flatten the 2D samples to 1D, e.g. in your case:

import torch

var = torch.randn(10, 32, 100, 100)
var.reshape((10, 32, -1))  # shape: [10, 32, 100 * 100]

Note that this is not really general: what if you had 3D input? The snippet below generalizes this notion to any dimensionality of your samples, provided the preceding dimensions are batch_size and seq_len.

import torch

input_size = 3  # number of trailing (per-sample) dimensions to flatten

var = torch.randn(10, 32, 100, 100, 35)
var.reshape(var.shape[:-input_size] + (-1,)) # shape: [10, 32, 100 * 100 * 35]

Finally, you can use it inside your neural network as shown below. Pay special attention to the forward method and the constructor arguments:

import torch
import torch.nn as nn


class LSTM(nn.Module):
    # input_dim has to be size after flattening
    # For 20x20 single input it would be 400
    def __init__(
        self,
        input_dimensionality: int,
        input_dim: int,
        latent_dim: int,
        num_layers: int,
    ):
        super(LSTM, self).__init__()
        self.input_dimensionality: int = input_dimensionality
        self.input_dim: int = input_dim  # It is 1d, remember
        self.latent_dim: int = latent_dim
        self.num_layers: int = num_layers
        self.encoder = torch.nn.LSTM(self.input_dim, self.latent_dim, self.num_layers)
        # You can have any latent dim you want, just output has to be exact same size as input
        # In this case, only encoder and decoder, it has to be input_dim though
        self.decoder = torch.nn.LSTM(self.latent_dim, self.input_dim, self.num_layers)

    def forward(self, input):
        # Save original size first:
        original_shape = input.shape
        # Flatten 2d (or 3d or however many you specified in constructor)
        input = input.reshape(input.shape[: -self.input_dimensionality] + (-1,))

        # Rest goes as in my previous answer
        _, (last_hidden, _) = self.encoder(input)
        # Repeat the last hidden state along the time axis: (seq_len, batch, latent_dim)
        encoded = last_hidden.repeat(input.shape[0], 1, 1)
        y, _ = self.decoder(encoded)

        # You have to reshape output to what the original was
        reshaped_y = y.reshape(original_shape)
        return torch.squeeze(reshaped_y)

Remember that in this case you have to reshape the output. It works for any dimensionality.
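
As a quick sanity check, here is a minimal usage sketch for the class above (the shapes are my own illustrative choice, not from the original answer): each timestep is a 20x20 sample, so input_dimensionality=2 and input_dim=400.

import torch

# Assumes the LSTM class defined above is in scope.
model = LSTM(input_dimensionality=2, input_dim=400, latent_dim=20, num_layers=1)

# 7 timesteps, batch of 1, each sample is a 20x20 grid.
x = torch.randn(7, 1, 20, 20)
y = model(x)
print(y.shape)  # torch.Size([7, 20, 20]) after the final squeeze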

Batching

When it comes to batching and sequences of different lengths, it is a bit more involved.

You have to pad each sequence in the batch before pushing it through the network. Usually the value you pad with is zero, though this is configurable (e.g. via padding_value when padding).

You can check this link for an example. You will have to use functions like torch.nn.utils.rnn.pack_padded_sequence to make it work; you can also check this answer.

Oh, and since PyTorch 1.1 you don't have to sort your sequences by length in order to pack them. Still, when it comes to this topic, read up on some tutorials; that should make things clearer.
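
To make the padding and packing concrete, here is a minimal sketch (the variable names are mine) that pads the two variable-length sequences from the question with zeros and packs them; enforce_sorted=False is the PyTorch 1.1+ option that removes the need to sort by length:

import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# The two variable-length sequences from the question: 5 and 3 timesteps, 2 features each.
x1 = torch.Tensor([[0.0,0.0], [0.1,0.1], [0.2,0.2], [0.3,0.3], [0.4,0.4]])
x2 = torch.Tensor([[0.5,0.5], [0.6,0.6], [0.7,0.7]])

# Pad with zeros to a common length; shape: (max_seq_len, batch, feature) = (5, 2, 2).
padded = pad_sequence([x1, x2], batch_first=False, padding_value=0.0)

# Pack so the LSTM skips the padded timesteps of the shorter sequence.
lengths = torch.tensor([5, 3])
packed = pack_padded_sequence(padded, lengths, batch_first=False, enforce_sorted=False)

lstm = torch.nn.LSTM(input_size=2, hidden_size=20, num_layers=1)
packed_out, (last_hidden, _) = lstm(packed)  # last_hidden: (1, 2, 20), one summary per sequence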

Finally: please separate your questions. If you get the autoencoding working with a single example, move on to batching, and if you run into problems there, please post a new question on StackOverflow. Thanks.