Hello, I'm trying to understand how scikit-learn calculates the TF-IDF score in the matrix below for document 1, feature 6, "wine":
from sklearn.feature_extraction.text import TfidfVectorizer

test_doc = ['The wine was lovely', 'The red was delightful',
            'Terrible choice of wine', 'We had a bottle of red']
# Create vectorizer (norm='l2' is the default)
vec = TfidfVectorizer(stop_words='english')
# Feature vector
tfidf = vec.fit_transform(test_doc)
feature_names = vec.get_feature_names()
feature_matrix = tfidf.todense()
print(feature_names)
print(feature_matrix)
['bottle', 'choice', 'delightful', 'lovely', 'red', 'terrible', 'wine']
[[ 0. 0. 0. 0.78528828 0. 0. 0.6191303 ]
[ 0. 0. 0.78528828 0. 0.6191303 0. 0. ]
[ 0. 0.61761437 0. 0. 0. 0.61761437 0.48693426]
[ 0.78528828 0. 0. 0. 0.6191303 0. 0. ]]
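From the output it looks like every row of the matrix has Euclidean length 1, which I assume is the effect of norm='l2'. A quick check I tried, assuming feature_matrix from the snippet above:

import numpy as np
# If norm='l2' rescales each document row, every row should have unit Euclidean length
print(np.linalg.norm(feature_matrix, axis=1))   # expecting [1. 1. 1. 1.]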
I'm using the answer to a very similar question to work this out for myself: How are TF-IDF calculated by the scikit-learn TfidfVectorizer. However, in their TfidfVectorizer, norm=None.
Since I'm using the default setting norm='l2', how does that differ from norm=None, and how can I calculate the scores myself?
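In case it helps, here is a rough sketch of what I think the by-hand calculation should look like for the first row, assuming the smoothed IDF formula idf = ln((1 + n) / (1 + df)) + 1 from the linked answer, and then dividing the row by its Euclidean length (I'm not certain that is exactly what norm='l2' does):

import numpy as np

n = 4   # number of documents
# Document 1 ('The wine was lovely') after stop-word removal: 'lovely', 'wine'
tf_lovely, df_lovely = 1, 1   # 'lovely' appears in 1 document
tf_wine, df_wine = 1, 2       # 'wine' appears in 2 documents

idf_lovely = np.log((1 + n) / (1 + df_lovely)) + 1
idf_wine = np.log((1 + n) / (1 + df_wine)) + 1

# Unnormalized tf-idf weights for document 1 (what I'd expect with norm=None)
row = np.array([tf_lovely * idf_lovely, tf_wine * idf_wine])

# My guess at norm='l2': divide the row by its Euclidean (L2) length
row_l2 = row / np.linalg.norm(row)
print(row_l2)   # hoping for [0.78528828, 0.6191303]

If I've done the arithmetic right, this reproduces the 0.78528828 and 0.6191303 values in the first row, but I'd like to confirm that this is really what TfidfVectorizer is doing.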