I have 5 sentences in an np.array and I want to find the n most common words together with their relative counts. For example, if n is 3, I want the 3 most frequent words. As the relative count, I want to divide each word's number of occurrences by the total number of words. Here is an example:
0 oh i am she cool though might off her a brownie lol
1 so trash wouldnt do colors better tweet
2 love monkey brownie as much as a tweet
3 monkey get this tweet around i think
4 saw a brownie to make me some monkey
With the help of a previous question, I managed to find the most frequent words:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
A = np.array(["oh i am she cool though might off her a brownie lol",
"so trash wouldnt do colors better tweet",
"love monkey brownie as much as a tweet",
"monkey get this tweet around i think",
"saw a brownie to make me some monkey" ])
n = 3
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(A)
vocabulary = vectorizer.get_feature_names()
ind = np.argsort(X.toarray().sum(axis=0))[-n:]
top_n_words = [vocabulary[a] for a in ind]
print (top_n_words)
['tweet', 'monkey', 'brownie']
However, now I want to find the relative counts. Is there a simple, Pythonic way to do this? For example:
print (top_n_words_relative_count)
[3/42, 3/42, 3/42]
where 42 is the total number of words.
Answer 0 (score: 1)

You can use collections.Counter:
>>> from collections import Counter
>>> A = np.array(["oh i am she cool though might off her a brownie lol",
"so trash wouldnt do colors better tweet",
"love monkey brownie as much as a tweet",
"monkey get this tweet around i think",
"saw a brownie to make me some monkey" ])
>>> B = ' '.join(A).split()
>>> top_n_words, top_n_words_count = zip(*Counter(B).most_common(3))
>>> top_n_words_relative_count = np.array(top_n_words_count)/len(B)
>>> top_n_words
('a', 'brownie', 'tweet')
>>> top_n_words_relative_count
array([0.07142857, 0.07142857, 0.07142857])
If you want them formatted as fractions:
>>> [f"{count}/{len(B)}" for count in top_n_words_count]
['3/42', '3/42', '3/42']
Or, if you switch to pandas, you can use value_counts and nlargest:
>>> import pandas as pd
>>> B = pd.Series(' '.join(A).split())
>>> B = B.value_counts(normalize=True).nlargest(3)
monkey 0.071429
a 0.071429
tweet 0.071429
dtype: float64
>>> B.index.tolist()
['monkey', 'a', 'tweet']
>>> B.values.tolist()
[0.07142857142857142, 0.07142857142857142, 0.07142857142857142]
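For completeness, the same result can be had with the standard library alone, no numpy or pandas. A small sketch using the five sentences from the question, keeping whitespace tokenization so the total stays 42:

```python
from collections import Counter

A = ["oh i am she cool though might off her a brownie lol",
     "so trash wouldnt do colors better tweet",
     "love monkey brownie as much as a tweet",
     "monkey get this tweet around i think",
     "saw a brownie to make me some monkey"]

words = " ".join(A).split()          # flatten into one list of 42 tokens
total = len(words)
top = Counter(words).most_common(3)  # ties are broken by first occurrence
relative = {word: count / total for word, count in top}
```

With these sentences, "a", "brownie", and "tweet" each occur 3 times, so every relative count is 3/42.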