PyTorch tutorial LSTM

Posted: 2018-02-09 11:49:16

Tags: neural-network nlp deep-learning lstm pytorch

I'm trying to work through the exercise on sequence models and long short-term memory networks in PyTorch. The idea is to augment the LSTM part-of-speech tagger with character-level features, but I can't seem to get it to work. The hint is that there should be two LSTMs: one that outputs a character-level representation of each word, and another that is responsible for predicting the part-of-speech tags. I just can't figure out how to loop over the words (within a sentence) and the characters (within each word of the sentence) and implement this in the forward function. Does anyone know how to do this, or has anyone run into a similar situation?

Here is my code:

import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F


class LSTMTaggerAug(nn.Module):
    def __init__(self, embedding_dim_words, embedding_dim_chars, hidden_dim_words, hidden_dim_chars, vocab_size, tagset_size, charset_size):
        super(LSTMTaggerAug, self).__init__()
        self.hidden_dim_words = hidden_dim_words
        self.hidden_dim_chars = hidden_dim_chars
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim_words)
        self.char_embeddings = nn.Embedding(charset_size, embedding_dim_chars)
        self.lstm_char = nn.LSTM(embedding_dim_chars, hidden_dim_chars)
        self.lstm_words = nn.LSTM(embedding_dim_words + hidden_dim_chars, hidden_dim_words)
        self.hidden2tag = nn.Linear(hidden_dim_words, tagset_size)
        self.hidden_char = self.init_hidden(c=False)
        self.hidden_words = self.init_hidden(c=True)

    def init_hidden(self, c=True):
        # zero-initialised (h, c) state; c=True -> word-level LSTM, c=False -> char-level LSTM
        if c:
            return (autograd.Variable(torch.zeros(1, 1, self.hidden_dim_words)),
                    autograd.Variable(torch.zeros(1, 1, self.hidden_dim_words)))
        else:
            return (autograd.Variable(torch.zeros(1, 1, self.hidden_dim_chars)),
                    autograd.Variable(torch.zeros(1, 1, self.hidden_dim_chars)))

    def forward(self, sentence, words):
        # sentence: word indices of the sentence; words: per-word character indices
        # embeds = self.word_embeddings(sentence)
        for ix, word in enumerate(sentence):
            # run the character-level LSTM over the characters of the current word
            chars = words[ix]
            char_embeds = self.char_embeddings(chars)
            lstm_char_out, self.hidden_char = self.lstm_char(
                char_embeds.view(len(chars), 1, -1), self.hidden_char)
            char_rep = lstm_char_out[-1]
            # concatenate the word embedding with the character-level representation
            embeds = self.word_embeddings(word)
            embeds_cat = torch.cat((embeds, char_rep), dim=1)
            # one step of the word-level LSTM, then map to tag scores
            lstm_out, self.hidden_words = self.lstm_words(embeds_cat, self.hidden_words)
            tag_space = self.hidden2tag(lstm_out.view(1, -1))
            tag_score = F.log_softmax(tag_space, dim=1)
            if ix == 0:
                tag_scores = tag_score
            else:
                tag_scores = torch.cat((tag_scores, tag_score), 0)

        return tag_scores

1 Answer:

Answer 0 (score: 0):

Based on your description, the most naive approach would be to strip the sentence s of punctuation and then split it into words:

words = s.split()
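
For example, one minimal way to remove the punctuation first (a sketch, assuming Python 3 and that only ASCII punctuation needs to be handled) is:

import string

s = "The dog ate the apple."  # hypothetical example sentence
words = s.translate(str.maketrans('', '', string.punctuation)).split()
# -> ['The', 'dog', 'ate', 'the', 'apple']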

Take your first, character-level LSTM, LSTMc, and apply it to each word separately to encode the words (i.e. use the last output state of the LSTM as the word's encoding):

encoded_words = []
for word in words:
    state = state_0

    for char in word:
        h, state = LSTMc(one_hot_encoding(char), state)
    encoded_words.append(h)
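
A concrete PyTorch sketch of this character-encoding step could look as follows (the sizes and module names here are illustrative assumptions; the last hidden state of the char LSTM serves as the word's representation):

import torch
import torch.autograd as autograd
import torch.nn as nn

# hypothetical sizes for illustration
charset_size, embedding_dim_chars, hidden_dim_chars = 30, 10, 20

char_embeddings = nn.Embedding(charset_size, embedding_dim_chars)
lstm_char = nn.LSTM(embedding_dim_chars, hidden_dim_chars)

def encode_word(char_ixs):
    # char_ixs: LongTensor of character indices for one word, shape (num_chars,)
    char_embeds = char_embeddings(char_ixs)                 # (num_chars, embedding_dim_chars)
    # no hidden state passed in -> the LSTM starts from a fresh zero state for every word
    _, (h, c) = lstm_char(char_embeds.view(len(char_ixs), 1, -1))
    return h.view(1, -1)                                    # (1, hidden_dim_chars)

char_rep = encode_word(autograd.Variable(torch.LongTensor([3, 7, 1])))  # indices of one word's characters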

After the words have been encoded, you run your part-of-speech tagger LSTM, LSTMw, over the encoded words at the word level:

state = statew_0
parts_of_speech = []
for enc_word in encoded_words:
    pos, state = LSTMw(enc_word, state)
    parts_of_speech.append(pos)
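
Translating that back into your model, the word-level pass could look roughly like this (only a sketch: it reuses encode_word and hidden_dim_chars from the snippet above, and the remaining sizes and names are assumptions rather than the only possible choice):

import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F

# hypothetical sizes for illustration
vocab_size, embedding_dim_words, hidden_dim_words, tagset_size = 100, 12, 24, 5

word_embeddings = nn.Embedding(vocab_size, embedding_dim_words)
lstm_words = nn.LSTM(embedding_dim_words + hidden_dim_chars, hidden_dim_words)
hidden2tag = nn.Linear(hidden_dim_words, tagset_size)

def tag_sentence(sentence_ixs, chars_per_word):
    # sentence_ixs: (seq_len,) word indices; chars_per_word: list of per-word character index tensors
    char_reps = torch.cat([encode_word(c) for c in chars_per_word], dim=0)  # (seq_len, hidden_dim_chars)
    word_embeds = word_embeddings(sentence_ixs)                             # (seq_len, embedding_dim_words)
    lstm_in = torch.cat((word_embeds, char_reps), dim=1).unsqueeze(1)       # (seq_len, 1, input_size)
    lstm_out, _ = lstm_words(lstm_in)                                       # (seq_len, 1, hidden_dim_words)
    tag_space = hidden2tag(lstm_out.view(len(sentence_ixs), -1))            # (seq_len, tagset_size)
    return F.log_softmax(tag_space, dim=1)                                  # one row of log-probabilities per word

sentence = autograd.Variable(torch.LongTensor([0, 5, 2]))
chars = [autograd.Variable(torch.LongTensor(ix)) for ix in ([3, 7, 1], [2, 2], [9])]
tag_scores = tag_sentence(sentence, chars)  # (3, tagset_size)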