Converting a text corpus into a text document with vocabulary_id and the corresponding tfidf score

Time: 2016-11-22 12:39:10

Tags: python machine-learning text-mining tf-idf

I have a text corpus with 5 documents, each document separated from the next by \n. I want to assign an id to every word in the documents and compute its respective tfidf score. For example, suppose we have a text corpus named "corpus.txt" as follows:

"堆栈 过流 文本矢量化scikit python scipy稀疏csr" 在使用

计算tfidf时
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

with open("corpus.txt") as f:
    mylist = f.read().splitlines()   # one document per line

vectorizer = CountVectorizer()
x_counts = vectorizer.fit_transform(mylist)
tfidf_transformer = TfidfTransformer()
x_tfidf = tfidf_transformer.fit_transform(x_counts)

Output:

(0,12) 0.1234  # 1st document
(1,8) 0.3456   # 2nd document
(1,4) 0.8976
(2,15) 0.6754  # 3rd document
(2,14) 0.2389
(2,3) 0.7823
(3,11) 0.9897  # 4th document
(3,13) 0.8213
(3,5) 0.7722
(3,6) 0.2211
(4,7) 0.1100   # 5th document
(4,10) 0.6690
(4,2) 0.0912
(4,9) 0.2345
(4,1) 0.1234

I converted this scipy.sparse.csr matrix into a list of lists to drop the document ids and keep only each vocabulary_id with its respective tfidf score, using the following:

m = x_tfidf.tocoo()                                                  # COO format: row, col, data arrays
mydata = {k: v for k, v in zip(m.col, m.data)}                       # keyed by vocabulary_id only
key_val_pairs = [str(k) + ":" + str(v) for k, v in mydata.items()]   # "vocabulary_id:score" strings

But the problem is that I get an output where the vocabulary_id and its respective tfidf score are listed in ascending order, without any document reference.
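
For reference, a minimal check (reusing m and mydata from above) of where the document reference disappears: m.row carries the document index, but the dict is keyed by vocabulary_id alone, so pairs from all documents are merged together (and an id that occurred in several documents would keep only one score):

# m.row holds the document index, but it is never used below
print(list(zip(m.row, m.col, m.data))[:3])   # (document_id, vocabulary_id, score) triples
print(sorted(mydata.items())[:3])            # document reference already gone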

For example, for the corpus given above, my current output (which I have dumped into a text file using json) looks like:

1:0.1234
2:0.0912
3:0.7823
4:0.8976
5:0.7722
6:0.2211
7:0.1100
8:0.3456
9:0.2345
10:0.6690
11:0.9897
12:0.1234
13:0.8213
14:0.2389
15:0.6754

Whereas I want my text file to look like this:

12:0.1234
8:0.3456 4:0.8976
15:0.6754 14:0.2389 3:0.7823
11:0.9897 13:0.8213 5:0.7722 6:0.2211
7:0.1100 10:0.6690 2:0.0912 9:0.2345 1:0.1234

Any idea how to get this done?

1 Answer:

Answer 0 (score: 1)

I think this is what you need. Here corpus is a collection of documents.

from sklearn.feature_extraction.text import TfidfVectorizer
corpus = ["stack over flow stack over flow text vectorization scikit", "stack over flow"]

vectorizer = TfidfVectorizer()
x = vectorizer.fit_transform(corpus) # corpus is a collection of documents

print(vectorizer.vocabulary_) # vocabulary terms and their index
print(x) # tf-idf weight of each term in each document

Prints:

{'vectorization': 5, 'text': 4, 'over': 1, 'flow': 0, 'stack': 3, 'scikit': 2}
  (0, 2)    0.33195438857 # first document, word = scikit
  (0, 5)    0.33195438857 # word = vectorization
  (0, 4)    0.33195438857 # word = text
  (0, 0)    0.472376562969 # word = flow
  (0, 1)    0.472376562969 # word = over
  (0, 3)    0.472376562969 # word = stack
  (1, 0)    0.57735026919 # second document
  (1, 1)    0.57735026919
  (1, 3)    0.57735026919
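
If the actual terms are wanted rather than their indices, one small sketch (reusing the vectorizer above) is to invert vectorizer.vocabulary_:

id_to_term = {idx: term for term, idx in vectorizer.vocabulary_.items()}   # index -> term
print(id_to_term[3])   # 'stack'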

With this information, you can represent the documents as follows:

cx = x.tocoo()                                   # COO view: parallel row, col, data arrays
doc_id = -1
for i, j, v in zip(cx.row, cx.col, cx.data):     # i = document id, j = term id, v = tf-idf weight
    if doc_id == -1:
        print(str(j) + ':' + "{:.4f}".format(v), end=' ')
    else:
        if doc_id != i:
            print()                              # row changed: start a new line for the next document
        print(str(j) + ':' + "{:.4f}".format(v), end=' ')
    doc_id = i

Prints:

2:0.3320 5:0.3320 4:0.3320 0:0.4724 1:0.4724 3:0.4724 
0:0.5774 1:0.5774 3:0.5774
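
To write this into a text file like the one asked for in the question, a possible sketch (reusing x from above; the output file name is just an assumption) groups the "term_id:score" pairs row by row:

with open("corpus_tfidf.txt", "w") as out:       # hypothetical file name
    for row_id in range(x.shape[0]):             # one document per row of the tf-idf matrix
        row = x.getrow(row_id).tocoo()
        pairs = ["{}:{:.4f}".format(j, v) for j, v in zip(row.col, row.data)]
        out.write(" ".join(pairs) + "\n")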