Trying to mimic scikit-learn's n-grams with gensim

Date: 2017-05-11 14:37:58

Tags: python scikit-learn gensim

I am trying to mimic the ngram_range parameter of CountVectorizer() with gensim. My goal is to be able to use LDA with either scikit-learn or gensim, and to obtain very similar bigrams from both.

For example, scikit-learn gives us bigrams such as "abc computer" and "binary unordered", while gensim gives "A survey" and "Graph minors"...

My code comparing gensim and scikit-learn on bigrams/unigrams is attached below.

Thanks for your help.

documents = [["Human" ,"machine" ,"interface" ,"for" ,"lab", "abc" ,"computer" ,"applications"],
      ["A", "survey", "of", "user", "opinion", "of", "computer", "system", "response", "time"],
      ["The", "EPS", "user", "interface", "management", "system"],
      ["System", "and", "human", "system", "engineering", "testing", "of", "EPS"],
      ["Relation", "of", "user", "perceived", "response", "time", "to", "error", "measurement"],
      ["The", "generation", "of", "random", "binary", "unordered", "trees"],
      ["The", "intersection", "graph", "of", "paths", "in", "trees"],
      ["Graph", "minors", "IV", "Widths", "of", "trees", "and", "well", "quasi", "ordering"],
      ["Graph", "minors", "A", "survey"]]

With the gensim model we find 48 unique tokens, and we can print the unigrams/bigrams with print(dictionary.token2id):

# 1. Gensim
from gensim import corpora
from gensim.models import Phrases

# Detect bigrams and append them to each document
# (min_count=1 keeps every pair that occurs at least once).
bigram = Phrases(documents, min_count=1)
for idx in range(len(documents)):
    for token in bigram[documents[idx]]:
        if '_' in token:
            # Token is a bigram: add it to the document.
            documents[idx].append(token)

# Replace the underscore delimiter with a space so bigrams look like scikit-learn's.
documents = [[token.replace("_", " ") for token in doc] for doc in documents]
print(documents)

dictionary = corpora.Dictionary(documents)
print(dictionary.token2id)
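As a quick sanity check (assuming the block above ran as-is), the dictionary size should match the 48 unique tokens mentioned:

print(len(dictionary))  # 48 unique unigram/bigram tokens, per the count above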

With scikit-learn we find 96 unique tokens, and we can print scikit-learn's vocabulary with print(vocab):

# 2. Scikit-learn
import re
token_pattern = re.compile(r"\b\w\w+\b", re.U)

def custom_tokenizer(s, min_term_length=1):
    """
    Tokenizer splitting text on word boundaries, keeping only terms of at
    least a certain length that start with an alphabetic character.
    """
    return [x.lower() for x in token_pattern.findall(s)
            if len(x) >= min_term_length and x[0].isalpha()]

from sklearn.feature_extraction.text import CountVectorizer

def preprocess(docs, min_df=1, min_term_length=1, ngram_range=(1, 1), tokenizer=custom_tokenizer):
    """
    Preprocess a list of text documents stored as (untokenized) strings.
    """
    # Build the vector space model of raw term counts.
    vec = CountVectorizer(lowercase=True,
                          strip_accents="unicode",
                          tokenizer=tokenizer,
                          min_df=min_df,
                          ngram_range=ngram_range,
                          stop_words=None)
    X = vec.fit_transform(docs)
    # On scikit-learn >= 1.0 use vec.get_feature_names_out() instead.
    vocab = vec.get_feature_names()

    return (X, vocab)

docs_join = [' '.join(doc) for doc in documents]

(X, vocab) = preprocess(docs_join, ngram_range = (1,2))

print(vocab)
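A matching check on the scikit-learn side (again assuming the code above ran): vocab should hold the 96 tokens mentioned above, and X is the document-term count matrix:

print(len(vocab))  # 96 unigram/bigram tokens, per the count above
print(X.shape)     # (9, 96): one row per document, one column per token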

1 Answer:

Answer 0 (score: 1):

gensim's Phrases class is designed to "automatically detect common phrases (multiword expressions) from a stream of sentences." Therefore it only gives you bigrams that appear more frequently than expected. That is why, with the gensim package, you only get the following few bigrams: 'response time', 'Graph minors', 'A survey'.
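For reference, here is a sketch of the default scoring rule Phrases applies; the function name default_phrase_score is mine, and gensim exposes an equivalent as gensim.models.phrases.original_scorer (exact signature varies by version):

def default_phrase_score(worda_count, wordb_count, bigram_count,
                         len_vocab, min_count):
    # A candidate bigram survives only if this score exceeds the Phrases
    # threshold parameter (10.0 by default), so word pairs seen barely
    # more than min_count times are usually filtered out.
    return (bigram_count - min_count) / (worda_count * wordb_count) * len_vocab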

If you look at bigram.vocab, you will see that these bigrams appear 2 times, whereas all other bigrams appear only once.
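A sketch of how to verify this (in older gensim releases the keys of bigram.vocab are bytes, hence the decode guard below):

# Print every bigram Phrases counted, along with its raw frequency.
for key, count in bigram.vocab.items():
    token = key.decode("utf-8") if isinstance(key, bytes) else key
    if "_" in token:
        print(token, count)  # e.g. Graph_minors and A_survey appear twice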

scikit-learn's CountVectorizer class, in contrast, gives you all bigrams.
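If the goal really is to reproduce CountVectorizer's exhaustive n-grams in gensim, one option is to skip the collocation scoring entirely and enumerate every adjacent word pair yourself (alternatively, pass a much lower threshold to Phrases). A minimal sketch of the first option; the helper add_all_bigrams is mine, not part of gensim:

from gensim import corpora

def add_all_bigrams(docs):
    # Append every adjacent word pair to each tokenized document,
    # mimicking CountVectorizer's ngram_range=(1, 2).
    out = []
    for doc in docs:
        tokens = [t.lower() for t in doc]
        out.append(tokens + [" ".join(p) for p in zip(tokens, tokens[1:])])
    return out

# Assumes `documents` still holds the original token lists
# (i.e. run this before the Phrases block in the question mutates them).
dictionary = corpora.Dictionary(add_all_bigrams(documents))
print(dictionary.token2id)  # now contains every bigram, not only frequent ones

Note that the vocabularies will still not match exactly, since custom_tokenizer in the question drops one-character tokens such as "A" while this sketch keeps them.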