Get the highest-weighted term per document - scikit TF-IDF

Time: 2019-01-15 11:38:45

Tags: python scikit-learn tf-idf

After vectorizing a set of documents with scikit's tf-idf vectorizer, is there a way to get the most "influential" terms for each individual document?

So far I have only found ways to get the most "influential" terms for the corpus as a whole, not per document.

2 answers:

Answer 0 (score: 3)

Just adding another way of doing Ami's last two steps:

# Get an array of all the vocabulary terms
# (use get_feature_names() on scikit-learn < 1.0)
feature_names = np.array(count_vect.get_feature_names_out())
# argmax on a sparse matrix returns an (n_docs, 1) np.matrix; flatten it
# before using it to index the feature-name array
feature_names[np.asarray(X_train_tfidf.argmax(axis=1)).ravel()]

Answer 1 (score: 2)

Suppose you start with a dataset:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
import numpy as np
from sklearn.datasets import fetch_20newsgroups

d = fetch_20newsgroups()

Use a count vectorizer and a tf-idf transformer:

count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(d.data)
transformer = TfidfTransformer()
X_train_tfidf = transformer.fit_transform(X_train_counts)

Now you can create the inverse mapping from column index to term:

m = {v: k for (k, v) in count_vect.vocabulary_.items()}

This gives the most influential term in each document:

[m[t] for t in np.array(np.argmax(X_train_tfidf, axis=1)).flatten()]
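If you want the top k terms per document rather than just the single highest, one possible extension of the same setup (the value of k and the toy documents are my own choices, not from the answer):

```python
# Sketch: top-k tf-idf terms per document (k=2 here), on a toy corpus.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

docs = [
    "apple banana apple",
    "banana banana cherry",
    "cherry dates dates dates",
]

count_vect = CountVectorizer()
counts = count_vect.fit_transform(docs)
tfidf = TfidfTransformer().fit_transform(counts)

# inverse mapping from column index to term, as in the answer
m = {v: k for (k, v) in count_vect.vocabulary_.items()}

k = 2
top_k = []
for row in tfidf.toarray():          # dense is fine for a small corpus
    idx = np.argsort(row)[::-1][:k]  # indices of the k largest weights
    top_k.append([m[i] for i in idx if row[i] > 0])
print(top_k)  # k highest-weighted terms per document
```

The `row[i] > 0` filter drops padding terms that never occur in the document, in case a document has fewer than k distinct terms.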