Pickle load error: TfidfVectorizer - Vocabulary wasn't fitted

Date: 2017-06-30 05:06:28

Tags: python python-3.x machine-learning scikit-learn anaconda

I'm building a document classifier, and this is my code:

import os
import io
import numpy 
from pandas import DataFrame
from sklearn.feature_extraction.text import (CountVectorizer,
                                             TfidfTransformer, TfidfVectorizer)
from sklearn.naive_bayes import MultinomialNB

def readFiles(path):
    # Walk the directory tree; for each file, skip the header lines and
    # yield (path, body), where the body starts after the first blank line.
    for root, dirnames, filenames in os.walk(path):
        for filename in filenames:
            path = os.path.join(root, filename)

            inBody = False
            lines = []
            f = io.open(path, 'r', encoding='latin1')
            for line in f:
                if inBody:
                    lines.append(line)
                elif line == '\n':
                    inBody = True
            f.close()
            message = '\n'.join(lines)
            yield path, message


def dataFrameFromDirectory(path, classification):
    # One row per file: the message body plus its class label.
    rows = []
    index = []
    for filename, message in readFiles(path):
        rows.append({'resume': message, 'class': classification})
        index.append(filename)

    return DataFrame(rows, index=index)

data = DataFrame({'resume': [], 'class': []})

data = data.append(dataFrameFromDirectory(r'<path>', 'Yes'))
data = data.append(dataFrameFromDirectory(r'<path>', 'No'))
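
As a quick sanity check (not part of the original code, just illustrative), one can confirm that both classes loaded into the combined frame:

print(data.shape)
print(data['class'].value_counts())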

Then I split the data (the split code itself isn't shown here) and use TfidfVectorizer:

tf = TfidfVectorizer(min_df=1, stop_words='english')
data_traintf = tf.fit_transform(data_train)  # learns the vocabulary and transforms the training text
mnb = MultinomialNB()
mnb.fit(data_traintf, class_train)  # trains Naive Bayes on the tf-idf features
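
For context, the split might look like this; a minimal sketch, assuming scikit-learn's train_test_split and that the names data_train/class_train match the snippet above:

from sklearn.model_selection import train_test_split

# Hypothetical split producing the variables used above; the actual
# split code is not shown in the question.
data_train, data_test, class_train, class_test = train_test_split(
    data['resume'], data['class'], test_size=0.2, random_state=42)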

After training and testing, I save the classifier as a pickle file:

import pickle

with open(r'clf.pkl', 'wb') as f:
    pickle.dump(mnb, f)
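
Note that this pickles only the classifier, not the fitted vectorizer, and a fresh TfidfVectorizer in a new session has no learned vocabulary. A minimal sketch of persisting the vectorizer too (the file name vec.pkl is hypothetical):

# Illustrative only: save the fitted vectorizer alongside the classifier,
# so the same learned vocabulary can be reused at prediction time.
with open(r'vec.pkl', 'wb') as f:
    pickle.dump(tf, f)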

But when I load it again and try to use the classifier, I get the TfidfVectorizer - Vocabulary wasn't fitted error. So I tried using a pipeline, saving my vectorizer along with the classifier:

from sklearn.pipeline import Pipeline

classifier = Pipeline([('tfidf', tf), ('multiNB', mnb)])

with open(r'clf_1.pkl', 'wb') as f:
    pickle.dump(classifier, f)

But I still get the same error. What could be going wrong?
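For comparison, the usual scikit-learn pattern is to fit the pipeline itself on the raw training text before pickling, so the fitted vectorizer travels inside the saved object; a minimal sketch, not the code from the question:

# Illustrative pattern: fitting the pipeline fits the vectorizer and the
# classifier together, starting from raw text.
pipeline = Pipeline([('tfidf', TfidfVectorizer(min_df=1, stop_words='english')),
                     ('multiNB', MultinomialNB())])
pipeline.fit(data_train, class_train)

with open(r'clf_pipeline.pkl', 'wb') as f:  # hypothetical file name
    pickle.dump(pipeline, f)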

Edit: The pickle file is stored successfully; on the other end I load it:

import pickle

with open(r'clf_1.pkl','rb') as f:
    clf = pickle.load(f)

and created a test DataFrame. When I run test_tf = tf.fit(test['resume']) it works fine, but pred = clf.predict(test_tf) gives the error TypeError: 'TfidfVectorizer' object is not iterable.
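
For reference, a fitted Pipeline's predict expects the raw documents, not a vectorizer object: the pipeline applies its own tfidf step before classifying, and tf.fit(...) returns the vectorizer itself rather than transformed data. A minimal sketch, assuming the loaded clf is a fitted pipeline and test['resume'] holds the raw text:

# Illustrative call: pass raw text; the pipeline's 'tfidf' step transforms
# it before 'multiNB' predicts.
pred = clf.predict(test['resume'])
print(pred)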

Do I need to iterate over the DataFrame, which contains about 15 objects?

0 Answers:

No answers yet.