"TypeError: 'Tensor' object is not callable" error in activation function

Date: 2021-03-26 07:02:22

Tags: python pytorch lstm

I am trying to use the ReLU activation function in a PyTorch LSTM, but I get "TypeError: 'Tensor' object is not callable". Any guidance or help? Can I use a different activation function in the forward pass? I get the error when I use the same activation function in both the hidden layer and the forward pass. Your comments would be very helpful.

import torch
import torch.nn as nn

class LSTM(nn.Module):
  def __init__(self, input_size=1, hidden_layer_size=20, output_size=1):
    super().__init__()

    self.hidden_layer_size = hidden_layer_size
    self.lstm = nn.LSTM(input_size, hidden_layer_size)
    self.relu = nn.functional.relu(torch.FloatTensor(hidden_layer_size), torch.FloatTensor(output_size))
    self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size),
                        torch.zeros(1, 1, self.hidden_layer_size))

  def forward(self, input_seq):
    lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
    predictions = self.relu(lstm_out.view(len(input_seq), -1))
    return predictions[-1]

model = LSTM()
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)

epochs = 150

for i in range(epochs):
  for seq, labels in train_inout_seq:
    optimizer.zero_grad()
    model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size),
                         torch.zeros(1, 1, model.hidden_layer_size))
    y_pred = model(seq)

    single_loss = loss_function(y_pred, labels)
    single_loss.backward()
    optimizer.step()

  if i % 25 == 1:
    print(f'epoch: {i:3} loss: {single_loss.item():10.8f}')

print(f'epoch: {i:3} loss: {single_loss.item():10.10f}')

After this, I get the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-108-5fcfb471ed9a> in <module>
      6     model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size),
      7                          torch.zeros(1, 1, model.hidden_layer_size))    
----> 8     y_pred = model(seq)
      9 
     10     single_loss = loss_function(y_pred, labels)

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

<ipython-input-105-221892d3f487> in forward(self, input_seq)
     12   def forward(self, input_seq):
     13     lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
---> 14     predictions = self.relu(lstm_out.view(len(input_seq), -1))
     15     return predictions[-1]

TypeError: 'Tensor' object is not callable

1 Answer:

Answer 0 (score: 0)

self.relu = nn.functional.relu(torch.FloatTensor(hidden_layer_size), torch.FloatTensor(output_size))     

This line does not actually define a ReLU function for later use; it applies the ReLU function to an arbitrary tensor (namely torch.FloatTensor(hidden_layer_size)) and returns the resulting tensor! So self.relu is not a function but a tensor, hence the error.
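To see the failure in isolation, here is a minimal reproduction (a sketch, not from the original post):

import torch
import torch.nn as nn

# nn.functional.relu is applied immediately and returns a plain Tensor
relu = nn.functional.relu(torch.zeros(5))
print(type(relu))      # <class 'torch.Tensor'>

# "calling" that tensor raises the same error as in the question
relu(torch.zeros(5))   # TypeError: 'Tensor' object is not callable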

One remedy is to use the following in place of the line above:

self.relu = nn.ReLU()

This provides an nn.ReLU instance that you can use as the ReLU function, i.e. by calling it in the forward method.
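Putting the fix into the question's module, a corrected version would look like this (a sketch based on the code above):

class LSTM(nn.Module):
  def __init__(self, input_size=1, hidden_layer_size=20, output_size=1):
    super().__init__()
    self.hidden_layer_size = hidden_layer_size
    self.lstm = nn.LSTM(input_size, hidden_layer_size)
    self.relu = nn.ReLU()  # store the module; apply it later in forward
    self.hidden_cell = (torch.zeros(1, 1, hidden_layer_size),
                        torch.zeros(1, 1, hidden_layer_size))

  def forward(self, input_seq):
    lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
    predictions = self.relu(lstm_out.view(len(input_seq), -1))  # now callable
    return predictions[-1]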

Another solution is to not define self.relu at all, and instead use this line in forward:

predictions = nn.functional.relu(lstm_out.view(len(input_seq), -1))

This is the difference between the modular and the functional approach; the sketch below contrasts the two.
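Both compute exactly the same thing (a sketch; x is just a placeholder tensor):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4)

# Modular: instantiate once, then call; registers as a submodule,
# so it shows up in print(model) and fits into nn.Sequential
relu = nn.ReLU()
out_modular = relu(x)

# Functional: a stateless call, nothing to register
out_functional = F.relu(x)

assert torch.equal(out_modular, out_functional)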

For some reasoning on why you might prefer one over the other, see here: https://discuss.pytorch.org/t/whats-the-difference-between-nn-relu-vs-f-relu/27599