Passing batches to a PyTorch model

Time: 2020-09-28 11:52:29

Tags: python pytorch lstm

I am trying to train a PyTorch LSTM model, defined as:

import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentNet(torch.nn.Module):
    def __init__(self, d_in, d_hidden, sequence_length):
        super(RecurrentNet, self).__init__()
        self.d_in = d_in
        self.d_hidden = d_hidden
        self.sequence_length = sequence_length
        self.lstm = torch.nn.LSTM(input_size=d_in, hidden_size=d_hidden, num_layers=sequence_length)
        self.fc = nn.Linear(d_hidden, 1)

    def forward(self, x):
        out, hidden = self.lstm(x)
        y_pred = F.relu(self.fc(out[-1][-1]))
        return y_pred

model = RecurrentNet(28, 3, l)

I have prepared my data in batches of length 32, where each instance has size 6 x 28, i.e. torch.Size([6, 28]). That means the full input tensor has size torch.Size([32, 6, 28]) and the labels have size torch.Size([32, 1]).
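One thing worth checking with this layout: by default `torch.nn.LSTM` interprets its input as `(seq_len, batch, input_size)`, so a `(32, 6, 28)` tensor is read as 32 time steps of batch size 6 unless `batch_first=True` is set. A minimal sketch of the batch-first shapes (the `hidden_size=3` matches the model above; the variable names here are illustrative):

```python
import torch

# With batch_first=True the LSTM expects (batch, seq_len, input_size),
# matching the (32, 6, 28) batches described above.
lstm = torch.nn.LSTM(input_size=28, hidden_size=3, batch_first=True)

x = torch.randn(32, 6, 28)  # 32 instances, sequence length 6, 28 features
out, (h, c) = lstm(x)

print(out.shape)            # torch.Size([32, 6, 3]): hidden state at every time step
print(out[:, -1, :].shape)  # torch.Size([32, 3]): last time step for each instance
```

Indexing the last time step as `out[:, -1, :]` keeps the batch dimension, so a following `Linear(3, 1)` layer would produce one prediction per instance.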

When I pass a single instance to the untrained model, it returns a single number, as expected. When I pass the tensor of 32 instances, it returns:

tensor([0.], grad_fn=<ReluBackward0>)

I was expecting a 32 x 1 output tensor for this input. Am I missing something in how I prepared the training data?
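For what it's worth, the single-element output can be reproduced from the shapes alone. With the default `(seq_len, batch, input_size)` layout, `out[-1]` is the last time step for all batch elements, and the extra `[-1]` then selects just one of them, so the linear layer sees a single hidden vector. A sketch under that assumption (names illustrative, dimensions matching the question):

```python
import torch

# Default layout: input is (seq_len, batch, input_size),
# so a (32, 6, 28) tensor is read as seq_len=32, batch=6.
lstm = torch.nn.LSTM(input_size=28, hidden_size=3)
fc = torch.nn.Linear(3, 1)

x = torch.randn(32, 6, 28)
out, _ = lstm(x)

print(out.shape)           # torch.Size([32, 6, 3])
print(out[-1].shape)       # torch.Size([6, 3]): last time step, whole batch
print(out[-1][-1].shape)   # torch.Size([3]): a single batch element's hidden vector

# Keeping the batch dimension gives one prediction per batch element:
y = torch.relu(fc(out[-1]))
print(y.shape)             # torch.Size([6, 1])
```

So `out[-1][-1]` always collapses the batch down to one hidden vector, which is why the model returns a single value regardless of batch size.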

0 Answers:

There are no answers yet.