How do I build a gensim dictionary that includes bigrams?

Date: 2018-07-19 15:07:56

Tags: python nlp gensim

I am trying to build a Tf-Idf model that can score both bigrams and unigrams using gensim. To do this, I build a gensim dictionary and then use that dictionary to create the bag-of-words representations of the corpus that the model is trained on.

The dictionary is built as follows:

dict = gensim.corpora.Dictionary(tokens)

where tokens is a list of unigrams and bigrams like this:

[('restore',),
 ('diversification',),
 ('made',),
 ('transport',),
 ('The',),
 ('grass',),
 ('But',),
 ('distinguished', 'newspaper'),
 ('came', 'well'),
 ('produced',),
 ('car',),
 ('decided',),
 ('sudden', 'movement'),
 ('looking', 'glasses'),
 ('shapes', 'replaced'),
 ('beauties',),
 ('put',),
 ('college', 'days'),
 ('January',),
 ('sometimes', 'gives')]

However, when I give a list like the one above to gensim.corpora.Dictionary(), it splits every token back into unigrams, for example:

test = gensim.corpora.Dictionary([(('happy', 'dog'))])
[test[id] for id in test]
=> ['dog', 'happy']

Is there a way to generate a dictionary with gensim that keeps bigrams intact?
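One way to keep n-grams intact is to join each tuple into a single string token before building the dictionary; a minimal sketch, assuming an underscore as the joining delimiter:

import gensim

# join each n-gram tuple into one string so Dictionary stores it as a single token
tokens = [('distinguished', 'newspaper'), ('came', 'well'), ('restore',)]
doc = ['_'.join(ngram) for ngram in tokens]

test = gensim.corpora.Dictionary([doc])
[test[i] for i in test]
# e.g. ['came_well', 'distinguished_newspaper', 'restore']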

2 Answers:

Answer 0 (score: 2):

import gensim
from gensim import models
from gensim.models import Phrases
from gensim.models.phrases import Phraser

docs = ['new york is is united states', 'new york is most populated city in the world', 'i love to stay in new york']

token_ = [doc.split(" ") for doc in docs]
# detect bigrams; note that in gensim < 4.0 the delimiter is bytes, in 4.x it is a str
bigram = Phrases(token_, min_count=1, threshold=2, delimiter=b' ')
bigram_phraser = Phraser(bigram)

bigram_token = []
for sent in token_:
    bigram_token.append(bigram_phraser[sent])

The output will be: [['new york', 'is', 'is', 'united', 'states'], ['new york', 'is', 'most', 'populated', 'city', 'in', 'the', 'world'], ['i', 'love', 'to', 'stay', 'in', 'new york']]

# now you can build the dictionary from the bigram tokens
dict = gensim.corpora.Dictionary(bigram_token)

print(dict.token2id)
# convert each document into a bag-of-words vector, then train the tfidf model from gensim
corpus = [dict.doc2bow(text) for text in bigram_token]

tfidf_model = models.TfidfModel(corpus)
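To score documents with the trained model, apply it to their bag-of-words vectors; a short usage sketch based on the variables above:

doc_scores = tfidf_model[corpus[0]]   # list of (token_id, weight) pairs for the first document
for token_id, weight in doc_scores:
    print(dict[token_id], weight)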

Answer 1 (score: 0):

You have to "phrase" your corpus to detect the bigrams before creating the dictionary.

I would also recommend stemming or lemmatizing the tokens before feeding them to the dictionary. Here is an example using NLTK stemming:

import re
from gensim.models.phrases import Phrases, Phraser
from gensim.corpora.dictionary import Dictionary
from gensim.models import TfidfModel
from nltk.stem.snowball import SnowballStemmer as Stemmer

stemmer = Stemmer("YOUR_LANG") # see nltk.stem.snowball doc

stopWords = {"YOUR_STOPWORDS_FOR_LANG"} # as a set

docs = ["LIST_OF_STR"]

def tokenize(text):
    """
    return list of str from a str
    """
    # keep lowercase alphanums and "-" but not "_"
    return [w for w in re.split(r"_+|[^\w-]+", text.lower()) if w and w not in stopWords]

docs = [tokenize(doc) for doc in docs]
phrases = Phrases(docs)
bigrams = Phraser(phrases)
corpus = [[stemmer.stem(w) for w in bigrams[doc]] for doc in docs]
dictionary = Dictionary(corpus)
# and here is your tfidf model:
tfidf = TfidfModel(dictionary=dictionary, normalize=True)
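Since the model here is initialized from the dictionary alone, documents still need to be converted to bag-of-words vectors before scoring; a minimal sketch using the variables above:

# convert each phrased and stemmed document to bag-of-words and apply the model
bows = [dictionary.doc2bow(doc) for doc in corpus]
vectors = [tfidf[bow] for bow in bows]  # each entry is a list of (token_id, weight) pairs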