Preparing data for use with TfidfVectorizer (scikit-learn)

Time: 2017-01-11 11:29:57

Tags: python-2.7 unicode scikit-learn tf-idf

I am trying to use sklearn's TfidfVectorizer. I am running into trouble because my input probably doesn't match what TfidfVectorizer expects. I have a bunch of JSON that I load and append to a list, and I now want that list to serve as the corpus passed to TfidfVectorizer.
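For reference, TfidfVectorizer.fit_transform expects an iterable of raw text strings, one per document; a minimal call along those lines (toy corpus made up here purely for illustration) works fine:

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["the first document", "the second and longer document"]
vectorizer = TfidfVectorizer(min_df=1)
X = vectorizer.fit_transform(corpus)  # sparse tf-idf matrix, one row per document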

Code:

import json
import pandas
from sklearn.feature_extraction.text import TfidfVectorizer

train = pandas.read_csv("train.tsv", sep='\t')
documents = []

for i, row in train.iterrows():
    data = json.loads(row['boilerplate'].lower())
    documents.append(data['body'])

vectorizer = TfidfVectorizer(min_df=1)
X = vectorizer.fit_transform(documents)
idf = vectorizer.idf_
print dict(zip(vectorizer.get_feature_names(), idf))

I get the following error:

Traceback (most recent call last):

  File "<ipython-input-56-94a6b95b0745>", line 1, in <module>
    runfile('C:/Users/Guinea Pig/Downloads/try.py', wdir='C:/Users/Guinea Pig/Downloads')

  File "D:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 585, in runfile
    execfile(filename, namespace)

  File "C:/Users/Guinea Pig/Downloads/try.py", line 19, in <module>
    X = vectorizer.fit_transform(documents)

  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 1219, in fit_transform
    X = super(TfidfVectorizer, self).fit_transform(raw_documents)

  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 780, in fit_transform
    vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)

  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 715, in _count_vocab
    for feature in analyze(doc):

  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 229, in <lambda>
    tokenize(preprocess(self.decode(doc))), stop_words)

  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 195, in <lambda>
    return lambda x: strip_accents(x.lower())

AttributeError: 'NoneType' object has no attribute 'lower'

I saw that the documents array consists of Unicode objects rather than string objects, but I can't seem to solve this. Any ideas?
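(Side note: a quick diagnostic sketch, assuming from the traceback that some of the parsed 'body' values may be None rather than text; that assumption is mine and not verified above:)

for doc in documents:
    # print any entry that is not a plain/unicode string (e.g. None), which
    # would explain the "'NoneType' object has no attribute 'lower'" error
    if not isinstance(doc, basestring):
        print type(doc), repr(doc)[:80]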

1 answer:

Answer 0 (score: 0):

In the end I used:

str_docs = []
for item in documents:
    str_docs.append(item.encode('utf-8'))

as an addition to the original code.
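Presumably str_docs then replaces documents in the original fit_transform call; a sketch of the combined step, with a None check added on my own assumption that the NoneType error comes from missing 'body' fields:

str_docs = []
for item in documents:
    if item is not None:                      # skip missing bodies (my assumption)
        str_docs.append(item.encode('utf-8'))

vectorizer = TfidfVectorizer(min_df=1)
X = vectorizer.fit_transform(str_docs)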