I am working through the seq2seq tutorial in PyTorch and am hitting a size error for the encoder's LSTM hidden state. With bidirectional=True and num_layers=2, the hidden state should have shape (num_layers*2, batch_size, hidden_size). However, I get the following error:
RuntimeError: Expected hidden[0] size (4, 1, 256), got (1, 256)
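For reference, a minimal standalone sketch (the names and values here are my own, chosen only to match the error message) of the initial-state shapes nn.LSTM expects with num_layers=2, bidirectional=True, hidden_size=256 and batch_size=1:

import torch
import torch.nn as nn

hidden_size, batch_size, emb_size = 256, 1, 256
lstm = nn.LSTM(emb_size, hidden_size, num_layers=2, bidirectional=True)

# nn.LSTM takes its initial state as a tuple (h_0, c_0); each tensor has shape
# (num_layers * num_directions, batch_size, hidden_size) = (4, 1, 256).
h_0 = torch.zeros(2 * 2, batch_size, hidden_size)
c_0 = torch.zeros(2 * 2, batch_size, hidden_size)

x = torch.randn(1, batch_size, emb_size)          # (seq_len, batch, input_size)
output, (h_n, c_n) = lstm(x, (h_0, c_0))
print(output.shape)  # torch.Size([1, 1, 512]) -- both directions concatenated
print(h_n.shape)     # torch.Size([4, 1, 256])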
At first I tried reshaping the hidden state and initializing it with other shapes, but nothing seemed to work. Here is the train method from my code:
def train(self, input, target, encoder, decoder, encoder_optim, decoder_optim, criterion):
    enc_optimizer = encoder_optim
    dec_optimizer = decoder_optim
    enc_optimizer.zero_grad()
    dec_optimizer.zero_grad()

    pair = (input, target)
    input_len = input.size(0)
    target_len = target.size(0)

    enc_output_tensor = torch.zeros(self.opt['max_seq_len'], encoder.hidden_size, device=device)
    enc_hidden = encoder.cuda().initHidden(device)

    for word_idx in range(input_len):
        print('Input:', input[word_idx], '\nHidden shape:', enc_hidden.size())
        enc_output, enc_hidden = encoder(input[word_idx], enc_hidden)
        enc_output_tensor[word_idx] = enc_output[0, 0]
And here is the encoder class from my code:
class EncoderBRNN(nn.Module):
    # A bidirectional RNN (LSTM) based encoder
    def __init__(self, input_size, hidden_size, emb_size, batch_size=1, num_layers=2, bidir=True):
        super(EncoderBRNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.batch_size = batch_size
        self.embedding_dim = emb_size
        self.num_layers = num_layers
        self.bidir = bidir

        self.embedding_layer = nn.Embedding(self.input_size, self.embedding_dim)
        self.enc_layer = nn.LSTM(self.embedding_dim, self.hidden_size, num_layers=self.num_layers, bidirectional=self.bidir)

    def forward(self, input, hidden):
        embed = self.embedding_layer(input).view(1, 1, -1)
        output, hidden = self.enc_layer(embed, hidden)
        return output, hidden

    def initHidden(self, device):
        if self.bidir:
            num_stacks = self.num_layers * 2
        else:
            num_stacks = self.num_layers
        return torch.zeros(num_stacks, self.batch_size, self.hidden_size, device=device)
Answer 0 (score: 0)
I know this question was asked a while ago, but I think I found the answer in this torch discussion. The relevant part:

LSTM takes a tuple of hidden states: self.rnn(x, (h_0, c_0)). It looks like you haven't set up the second hidden state?

You can also see this in the documentation for LSTM.
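In other words, initHidden returns a single tensor, while nn.LSTM expects its initial state as a tuple (h_0, c_0); when a plain (4, 1, 256) tensor is passed instead, indexing hidden[0] yields a (1, 256) slice, which would explain the error message. A sketch of what the fix could look like (this reuses the question's variable names and is my reading of the suggestion above, not the original poster's final code):

def initHidden(self, device):
    # An LSTM's initial state is a tuple (h_0, c_0), each of shape
    # (num_layers * num_directions, batch_size, hidden_size).
    if self.bidir:
        num_stacks = self.num_layers * 2
    else:
        num_stacks = self.num_layers
    h_0 = torch.zeros(num_stacks, self.batch_size, self.hidden_size, device=device)
    c_0 = torch.zeros(num_stacks, self.batch_size, self.hidden_size, device=device)
    return (h_0, c_0)

Note that the print call in train would then also need to change, e.g. to enc_hidden[0].size(), since enc_hidden becomes a tuple rather than a single tensor.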