Scikit Learn TfidfVectorizer: how to get the top n terms with the highest tf-idf score

Asked: 2015-12-11 20:39:36

Tags: python nlp nltk tf-idf

I am working on a keyword extraction problem. Consider the very general case:

    from sklearn.feature_extraction.text import TfidfVectorizer

    # tokenize here is a user-defined tokenizer function
    tfidf = TfidfVectorizer(tokenizer=tokenize, stop_words='english')
    t = """Two Travellers, walking in the noonday sun, sought the shade of a widespreading tree to rest. As they lay looking up among the pleasant leaves, they saw that it was a Plane Tree.

    "How useless is the Plane!" said one of them. "It bears no fruit whatever, and only serves to litter the ground with leaves."

    "Ungrateful creatures!" said a voice from the Plane Tree. "You lie here in my cooling shade, and yet you say I am useless! Thus ungratefully, O Jupiter, do men receive their blessings!"

    Our best blessings are often the least appreciated."""

    tfs = tfidf.fit_transform(t.split(" "))
    str = 'tree cat travellers fruit jupiter'
    response = tfidf.transform([str])
    feature_names = tfidf.get_feature_names()
    for col in response.nonzero()[1]:
        print(feature_names[col], ' - ', response[0, col])

This gives me:

    (0, 28)   0.443509712811
    (0, 27)   0.517461475101
    (0, 8)    0.517461475101
    (0, 6)    0.517461475101
    tree  -  0.443509712811
    travellers  -  0.517461475101
    jupiter  -  0.517461475101
    fruit  -  0.517461475101

This is fine. For any new document, is there a way to get the top n terms with the highest tf-idf scores?

4 Answers:

Answer 0 (score: 23)

You have to do a little song and dance to get the matrices as numpy arrays, but this should do what you're looking for:

    import numpy as np

    feature_array = np.array(tfidf.get_feature_names())
    tfidf_sorting = np.argsort(response.toarray()).flatten()[::-1]

    n = 3
    top_n = feature_array[tfidf_sorting][:n]

This gives me:

    array([u'fruit', u'travellers', u'jupiter'],
          dtype='<U13')

The argsort call is the really useful one; here are the docs for it. We have to do [::-1] because argsort only supports sorting from smallest to largest. We call flatten to reduce the dimensions to 1d so that the sorted indices can be used to index into the 1d feature array. Note that including the call to flatten only works when you are testing a single document at a time.
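The argsort-then-reverse trick can be seen in isolation; here is a minimal sketch on a made-up score matrix:

```python
import numpy as np

# argsort returns the indices that would sort ascending;
# [::-1] flips them so the largest score comes first.
scores = np.array([[0.2, 0.9, 0.0, 0.5]])
order = np.argsort(scores).flatten()[::-1]
print(order)                    # [1 3 0 2]
print(scores.flatten()[order])  # [0.9 0.5 0.2 0. ]
```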

Also, as a side note, did you mean tfs = tfidf.fit_transform(t.split("\n\n"))? Otherwise, each term in the multiline string is treated as its own "document". Using \n\n instead means we are actually looking at 4 documents (one per paragraph), which makes more sense when you think about tf-idf.
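To see the splitting difference this note describes, here is a tiny sketch with a made-up two-paragraph string:

```python
t = "Two Travellers sought shade.\n\nThe tree spoke back."

# Splitting on spaces makes every whitespace-separated token its own
# "document"; splitting on blank lines yields one document per
# paragraph, which is what the tf-idf statistics usually expect.
word_docs = t.split(" ")
para_docs = t.split("\n\n")
print(len(word_docs))  # 7 -- one entry per space-separated token
print(len(para_docs))  # 2 -- one entry per paragraph
```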

Answer 1 (score: 1)

A solution that works on the sparse matrix itself (without .toarray())!

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    tfidf = TfidfVectorizer(stop_words='english')
    corpus = [
        'I would like to check this document',
        'How about one more document',
        'Aim is to capture the key words from the corpus',
        'frequency of words in a document is called term frequency'
    ]

    X = tfidf.fit_transform(corpus)
    feature_names = np.array(tfidf.get_feature_names())

    new_doc = ['can key words in this new document be identified?',
               'idf is the inverse document frequency calculated for each of the words']
    responses = tfidf.transform(new_doc)

    def get_top_tf_idf_words(response, top_n=2):
        sorted_nzs = np.argsort(response.data)[:-(top_n + 1):-1]
        return feature_names[response.indices[sorted_nzs]]

    print([get_top_tf_idf_words(response, 2) for response in responses])

    # [array(['key', 'words'], dtype='<U9'),
    #  array(['frequency', 'words'], dtype='<U9')]
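This trick relies on how a scipy CSR row stores its nonzero entries; here is a minimal standalone sketch on a made-up row:

```python
import numpy as np
from scipy.sparse import csr_matrix

# For a single CSR row, .data holds the nonzero values and .indices
# holds their column positions, so argsort on .data ranks only the
# nonzero entries without densifying the row.
row = csr_matrix([[0.0, 0.7, 0.0, 0.2, 0.5]])
top_n = 2
sorted_nzs = np.argsort(row.data)[:-(top_n + 1):-1]
print(row.indices[sorted_nzs])  # columns of the two largest values: [1 4]
```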

Answer 2 (score: 0)

Unless I'm misunderstanding the question, we don't have to get that fancy: just pass max_features to TfidfVectorizer(), and .get_feature_names() will then return only the top features. You can read about it here.

    top_n = 10
    # tokenize is a user-defined tokenizer; documents is your list of texts
    tfidf = TfidfVectorizer(tokenizer=tokenize, stop_words='english',
                            max_features=top_n)
    tfidf.fit(documents)  # must fit before calling get_feature_names()
    print(tfidf.get_feature_names())

Answer 3 (score: 0)

Here is some quick code for that (documents is a list of strings):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    def get_tfidf_top_features(documents, n_top=10):
        tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english')
        tfidf = tfidf_vectorizer.fit_transform(documents)
        importance = np.argsort(np.asarray(tfidf.sum(axis=0)).ravel())[::-1]
        tfidf_feature_names = np.array(tfidf_vectorizer.get_feature_names())
        return tfidf_feature_names[importance[:n_top]]