I'm having trouble understanding how the Tf-Idf values are obtained by the program below.
I tried to use the concept given on the site to compute the value of 'a' in document 2
('And_this_is_the_third_one.'), but the value of 'a' that I get with that concept is
1/26 * log(4/1)
(i.e. ((occurrences of the 'a' character) / (number of characters in the given document)) * log(#documents / #documents in which the given character occurs))
= 0.023156
But as you can see from the output, the returned value is 0.2203.
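For reference, a minimal sketch of that manual calculation (my own illustration, not part of the original question; it assumes a base-10 log, which is what yields 0.023156):

import math

# naive tf-idf for 'a' in document 2, 'And_this_is_the_third_one.'
tf = 1 / 26              # 'a' occurs once among the 26 characters
idf = math.log10(4 / 1)  # 4 documents in the corpus, 1 of them contains 'a'
print(tf * idf)          # ~0.023156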
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    'This_is_the_first_document.',
    'This_document_is_the_second_document.',
    'And_this_is_the_third_one.',
    'Is_this_the_first_document?',
]
vectorizer = TfidfVectorizer(min_df=0.0, analyzer="char")  # character-level tf-idf
X = vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names())
print(vectorizer.vocabulary_)
m = X.todense()
print(m)
Using the concept above, I expected the output to be 0.023156.
The output is:
['.', '?', '_', 'a', 'c', 'd', 'e', 'f', 'h', 'i', 'm', 'n', 'o', 'r', 's', 't', 'u']
{'t': 15, 'h': 8, 'i': 9, 's': 14, '_': 2, 'e': 6, 'f': 7, 'r': 13, 'd': 5, 'o': 12, 'c': 4, 'u': 16, 'm': 10, 'n': 11, '.': 0, 'a': 3, '?': 1}
[[0.14540332 0. 0.47550697 0. 0.14540332 0.11887674
0.23775349 0.17960203 0.23775349 0.35663023 0.14540332 0.11887674
0.11887674 0.14540332 0.35663023 0.47550697 0.14540332]
[0.10814145 0. 0.44206359 0. 0.32442434 0.26523816
0.35365088 0. 0.17682544 0.17682544 0.21628289 0.26523816
0.26523816 0. 0.26523816 0.35365088 0.21628289]
[0.14061506 0. 0.57481012 0.22030066 0. 0.22992405
0.22992405 0. 0.34488607 0.34488607 0. 0.22992405
0.11496202 0.14061506 0.22992405 0.34488607 0. ]
[0. 0.2243785 0.46836004 0. 0.14321789 0.11709001
0.23418002 0.17690259 0.23418002 0.35127003 0.14321789 0.11709001
0.11709001 0.14321789 0.35127003 0.46836004 0.14321789]]
Answer (score: 1)
As explained in the documentation, TfidfVectorizer() applies smoothing to the document counts and applies l2 normalization on top of the tf-idf vectors, so each character's raw score is

(occurrences of the character / number of characters in the given document) * (log((1 + #documents) / (1 + #documents in which the character occurs)) + 1)
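Plugging in the numbers for 'a' in document 2 (a worked illustration of the formula above, not part of the original answer):

import numpy as np

tf = 1 / 26                          # 'a' occurs once in the 26 characters
idf = np.log((1 + 4) / (1 + 1)) + 1  # smoothed idf: log(5/2) + 1 ~ 1.9163
print(tf * idf)                      # ~0.0737, before l2 normalization

After the full 17-dimensional row is l2-normalized, this entry becomes the 0.2203 seen in the output.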
The normalization defaults to l2, but you can change or remove this step with the norm parameter. Likewise, the smoothing of the document counts can be turned off with the smooth_idf parameter.
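To see the effect of these parameters, here is a small sketch (my own addition, not from the original answer) that disables the normalization. Note that sklearn uses the raw character count as tf, so the unnormalized entry for 'a' is 1 * (log(5/2) + 1) rather than the length-divided value above; the division by document length is a per-row constant that the l2 norm cancels anyway:

raw = TfidfVectorizer(analyzer='char', norm=None)
X_raw = raw.fit_transform(corpus)
# entry for 'a' in document 2: 1 * (log(5/2) + 1) ~ 1.9163
print(X_raw.todense()[2, raw.vocabulary_['a']])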
To work out the exact scores, I will use CountVectorizer() to get the count of each character in each document.
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

countVectorizer = CountVectorizer(analyzer='char')
tf = countVectorizer.fit_transform(corpus)
tf_df = pd.DataFrame(tf.toarray(),
                     columns=countVectorizer.get_feature_names())
tf_df
#output:
. ? _ a c d e f h i m n o r s t u
0 1 0 4 0 1 1 2 1 2 3 1 1 1 1 3 4 1
1 1 0 5 0 3 3 4 0 2 2 2 3 3 0 3 4 2
2 1 0 5 1 0 2 2 0 3 3 0 2 1 1 2 3 0
3 0 1 4 0 1 1 2 1 2 3 1 1 1 1 3 4 1
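The idf term also needs each character's document frequency, which can be read off the count table directly (a quick check of my own, not in the original answer):

# document frequency: in how many documents each character appears
print((tf_df > 0).sum()['a'])  # 1 -- only 'And_this_is_the_third_one.' contains 'a'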
Now apply the tf-idf weighting, as implemented in sklearn, to document 2 (doc_id = 2):
import numpy as np

v = []
doc_id = 2
for char in tf_df.columns:
    # tf: count of this char in the doc / total number of chars in the doc
    tf = tf_df.loc[doc_id, char] / tf_df.loc[doc_id, :].sum()
    # number of documents in the corpus, with smoothing
    n_d = 1 + tf_df.shape[0]
    # number of documents containing this char, with smoothing
    df_d_t = 1 + sum(tf_df.loc[:, char] > 0)
    # idf with smoothing
    idf = np.log(n_d / df_d_t) + 1
    # tf-idf score for this char
    v.append(tf * idf)
from sklearn.preprocessing import normalize

# normalize the vector with the l2 norm and build a dataframe with the feature names
pd.DataFrame(normalize([v], norm='l2'), columns=vectorizer.get_feature_names())
#output:
. ? _ a c d e f h i m n o r s t u
0.140615 0.0 0.57481 0.220301 0.0 0.229924 0.229924 0.0 0.344886 0.344886 0.0 0.229924 0.114962 0.140615 0.229924 0.344886 0.0
You can see that the score for the char a matches the output of TfidfVectorizer()!
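As a final sanity check (my own addition; it assumes the variables defined above are still in scope), the entire manually computed row can be compared against the TfidfVectorizer output:

# the manually computed, l2-normalized row equals row 2 of the tf-idf matrix
print(np.allclose(m[doc_id], normalize([v], norm='l2')))  # True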