DataLoader issue: RNN module weights are not part of a single contiguous chunk of memory

Time: 2018-11-28 09:06:46

Tags: python lstm pytorch rnn

I am trying to build an LSTM model for time-series data. Dataset details:
The input is a time series for 800 subjects, where each subject is a 2D array with 60 rows and 200 columns. I load the entire dataset as a tensor of shape [800, 60, 200], and the labels for the classification problem have shape [800, 1]. I built a Dataset that returns dictionary samples with the following code:

from torch.utils.data import Dataset

class DataCurate(Dataset):
    def __init__(self, l1, l2, transform=None):
        self.l1 = l1  # time-series arrays, shape [800, 60, 200]
        self.l2 = l2  # labels, shape [800, 1]
        self.transform = transform

    def __len__(self):
        return len(self.l1)

    def __getitem__(self, index):
        array = self.l1[index]
        label = self.l2[index]
        sample = {'time_data': array, 'labels': label}
        return sample

The data and labels are in the variables x and y. I call data = DataCurate(x, y).
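For reference, a minimal sketch of how this dataset would typically be wrapped in a torch.utils.data.DataLoader (the batch size and shuffle flag here are assumptions, not from the original post):

from torch.utils.data import DataLoader

data = DataCurate(x, y)
# batch_size=32 and shuffle=True are placeholder choices; each batch is a
# dict whose 'time_data' entry has shape [32, 60, 200] and whose 'labels'
# entry has shape [32, 1]
loader = DataLoader(data, batch_size=32, shuffle=True)
for batch in loader:
    inputs, labels = batch['time_data'], batch['labels']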

Later, I build an LSTM model to solve the classification problem:

import torch.nn as nn
import torch.nn.functional as F

class RNNModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNNModel, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        # map the hidden state to class scores (num_classes outputs)
        self.linear = nn.Linear(hidden_size, num_classes, bias=True)

    def forward(self, x):
        self.lstm.flatten_parameters()
        out_packed, state = self.lstm(x)  # RNN
        print("lstm output size: " + str(out_packed.size()))
        out = self.linear(out_packed[-1])  # linear transform
        print("linear output size: " + str(out.size()))
        log_probs = F.log_softmax(out, dim=1)
        print("softmax output size: " + str(log_probs.size()))
        return log_probs
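Based on the traceback below, the model appears to be built and invoked roughly as in this sketch; the hyperparameter values, the .cuda() call (implied by the cuDNN assertion), and the placeholder input tensor are assumptions, while the transpose call is quoted from the traceback:

import torch

# assumed hyperparameters; input_size matches the 200 features per time step
input_size, hidden_size, num_layers, num_classes = 200, 128, 2, 2
model = RNNModel(input_size, hidden_size, num_layers, num_classes).cuda()

# placeholder batch of 32 subjects, shape [batch, seq, feature]
train_inputs = torch.randn(32, 60, 200).cuda()

# the transpose (from the traceback) swaps the batch and time axes, even
# though the LSTM was constructed with batch_first=True and therefore
# expects [batch, seq, feature] input
output = model(train_inputs.transpose(0, 1))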

When I run the training script, it gives me this error:

UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
  out_packed, state = self.lstm(x)  # RNN
Traceback (most recent call last):
  File "main_2.py", line 100, in <module>
    output = model(train_inputs.transpose(0,1))
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/iab/disk_a/meghal/test/quickdraw_tutorial_dataset_v1/pytorch_RNN_examples/model.py", line 26, in forward
    out_packed, state = self.lstm(x)  # RNN
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/rnn.py", line 192, in forward
    output, hidden = func(input, self.all_weights, hx, batch_sizes)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py", line 324, in forward
    return func(input, *fargs, **fkwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py", line 288, in forward
    dropout_ts)
RuntimeError: param_from.type() == param_to.type() ASSERT FAILED at /pytorch/aten/src/ATen/native/cudnn/RNN.cpp:491, please report a bug to PyTorch. parameter types mismatch

I do not know what this means or how to fix it. I am completely new to LSTMs.

0 Answers