Computing the top-n co-occurring word pairs from a document-term matrix

Date: 2018-07-03 17:47:44

Tags: python matrix scikit-learn gensim text-analysis

I created a bag-of-words model with gensim. Although the real output is much longer, this is the format gensim produces when building a bag-of-words document-term matrix from tokenized text:

[[(0, 2), (1, 1), (2, 1), (3, 1), (4, 11), ...],
 [(4, 31), (8, 2), (13, 2), ...]]

This is a sparse matrix representation, and as I understand it other libraries represent document-term matrices in a similar way. If the document-term matrix were dense (meaning the zero entries were stored as well), I know I would just need (A^T * A), where A has dimensions (number of documents x number of terms), so multiplying the transpose by the matrix yields the term co-occurrences. Ultimately I want the top n co-occurring terms (i.e., the top n word pairs that appear together in the same documents). How would I achieve this? I am not wedded to gensim for building the BOW model; if another library like sklearn can do this more easily, I am open to it. Any advice/help/code on this problem would be much appreciated. Thanks!
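As a quick sanity check of the A^T * A idea, here is a minimal NumPy sketch on a toy dense document-term matrix (toy data, not from the question):

```python
import numpy as np

# Toy dense document-term matrix A: 3 documents x 4 terms
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 1]])

# (A^T A)[i, j] is the (count-weighted) number of documents in which
# terms i and j co-occur; the diagonal holds per-term totals
cooc = A.T @ A
print(cooc)
```

The off-diagonal entries are the pairwise co-occurrence counts the question is after; the matrix is symmetric, so only one triangle is needed.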

1 Answer:

Answer 0 (score: 2)

Edit: Here is how you can achieve the matrix multiplication you asked about. Disclaimer: this may not be feasible for a very large corpus.

Sklearn:

from sklearn.feature_extraction.text import CountVectorizer

Doc1 = 'Wimbledon is one of the four Grand Slam tennis tournaments, the others being the Australian Open, the French Open and the US Open.'
Doc2 = 'Since the Australian Open shifted to hardcourt in 1988, Wimbledon is the only major still played on grass'
docs = [Doc1, Doc2]

# Instantiate CountVectorizer and apply it to docs
cv = CountVectorizer()
doc_cv = cv.fit_transform(docs)

# Display tokens (renamed to get_feature_names_out() in scikit-learn >= 1.0)
cv.get_feature_names()

# Display tokens (dict keys) and their numerical encoding (dict values)
cv.vocabulary_

# Matrix multiplication of the term matrix
token_mat = doc_cv.toarray().T @ doc_cv.toarray()
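To then pull out the top-n co-occurring word pairs the question asks for, you can rank the entries of the upper triangle of token_mat (it is symmetric, and the diagonal holds self co-occurrences). A sketch building on the CountVectorizer code above, repeated here so it is self-contained:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

Doc1 = 'Wimbledon is one of the four Grand Slam tennis tournaments, the others being the Australian Open, the French Open and the US Open.'
Doc2 = 'Since the Australian Open shifted to hardcourt in 1988, Wimbledon is the only major still played on grass'

cv = CountVectorizer()
doc_cv = cv.fit_transform([Doc1, Doc2])
token_mat = doc_cv.toarray().T @ doc_cv.toarray()

# Keep only the strict upper triangle so each unordered pair
# appears once and self co-occurrence is excluded
upper = np.triu(token_mat, k=1)

# Indices of the n largest co-occurrence counts
n = 5
flat_idx = np.argsort(upper, axis=None)[::-1][:n]
rows, cols = np.unravel_index(flat_idx, upper.shape)

# get_feature_names() was renamed get_feature_names_out() in newer scikit-learn
terms = (cv.get_feature_names_out() if hasattr(cv, 'get_feature_names_out')
         else cv.get_feature_names())
top_pairs = [(terms[i], terms[j], int(upper[i, j])) for i, j in zip(rows, cols)]
print(top_pairs)
```

Note that this ranks raw co-occurrence counts, so very frequent words (e.g. 'the') will dominate unless you remove stopwords first.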

Gensim:

import gensim as gs
import numpy as np

cp = [[(0, 2),
  (1, 1),
  (2, 1),
  (3, 1),
  (4, 11),
  (7, 1),
  (11, 2),
  (13, 3),
  (22, 1),
  (26, 1),
  (30, 1)],
 [(4, 31),
  (8, 2),
  (13, 2),
  (16, 2),
  (17, 2),
  (26, 1),
  (28, 4),
  (29, 1),
  (30, 1)]]

# Convert to a dense matrix and perform the matrix multiplication.
# Both rows must share one vocabulary size, so take the max id over all docs
n_terms = max(t for doc in cp for t, _ in doc) + 1
mat_1 = gs.matutils.sparse2full(cp[0], n_terms).reshape(1, -1)
mat_2 = gs.matutils.sparse2full(cp[1], n_terms).reshape(1, -1)
mat = np.append(mat_1, mat_2, axis=0)
mat_product = mat.T @ mat
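The same ranking trick works on mat_product, except the results are pairs of token ids rather than words; you would translate ids back to words with your gensim Dictionary (dictionary[id]). A toy sketch with a stand-in co-occurrence matrix (toy values, not derived from cp above):

```python
import numpy as np

# Stand-in for mat_product: a symmetric term-by-term co-occurrence matrix
# over 4 token ids (toy values for illustration)
mat_product = np.array([[4., 2., 0., 1.],
                        [2., 9., 3., 0.],
                        [0., 3., 1., 5.],
                        [1., 0., 5., 7.]])

# Strict upper triangle: each unordered id pair counted once
upper = np.triu(mat_product, k=1)

n = 3
flat_idx = np.argsort(upper, axis=None)[::-1][:n]
rows, cols = np.unravel_index(flat_idx, upper.shape)
top_id_pairs = [(int(i), int(j), upper[i, j]) for i, j in zip(rows, cols)]
print(top_id_pairs)  # [(2, 3, 5.0), (1, 2, 3.0), (0, 1, 2.0)]
```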

For words that occur consecutively, you can build a list of bigrams over a set of documents and then count the bigram occurrences with Python's Counter. Here is an example using nltk.

import nltk
from nltk.util import ngrams
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from collections import Counter

stop_words = set(stopwords.words('english'))

# Get the tokens from the built-in collection of presidential inaugural speeches
tokens = nltk.corpus.inaugural.words()

# Further text preprocessing; note stopwords are checked before lowercasing,
# so capitalized stopwords like 'I' slip through (hence ('i', 'shall') below)
tokens = [t.lower() for t in tokens if t not in stop_words]
word_l = WordNetLemmatizer()
tokens = [word_l.lemmatize(t) for t in tokens if t.isalpha()]

# Create bigram list and count bigrams
bi_grams = list(ngrams(tokens, 2)) 
counter = Counter(bi_grams)

# Show the most common bigrams
counter.most_common(5)
Out[36]: 
[(('united', 'state'), 153),
 (('fellow', 'citizen'), 116),
 (('let', 'u'), 99),
 (('i', 'shall'), 96),
 (('american', 'people'), 40)]

# Query the occurrence of a specific bigram
counter[('great', 'people')]
Out[37]: 7