How to save a classifier in sklearn when using CountVectorizer() and TfidfTransformer()

Asked: 2019-09-20 00:20:01

Tags: python-3.x scikit-learn

Below is some code for a classifier. I used pickle to save and load the classifier as shown in this page. However, when I load it to use it later, I cannot use CountVectorizer() and TfidfTransformer() to convert raw text into the vectors the classifier expects.

The only way I can get it to work is to categorize text right after training the classifier, as shown below.

import os
import sklearn
from sklearn.datasets import load_files

from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix

from sklearn.feature_extraction.text import CountVectorizer
import nltk

import pandas
import pickle

class Classifier:

    def __init__(self):

        self.moviedir = os.getcwd() + '/txt_sentoken'

    def Training(self):

        # loading all files. 
        self.movie = load_files(self.moviedir, shuffle=True)


        # Split data into training and test sets
        docs_train, docs_test, y_train, y_test = train_test_split(self.movie.data, self.movie.target, 
                                                                  test_size = 0.20, random_state = 12)

        # initialize CountVectorizer
        self.movieVzer = CountVectorizer(min_df=2, tokenizer=nltk.word_tokenize, max_features=5000)

        # fit and transform using the training text
        docs_train_counts = self.movieVzer.fit_transform(docs_train)


        # Convert raw frequency counts into TF-IDF values
        self.movieTfmer = TfidfTransformer()
        docs_train_tfidf = self.movieTfmer.fit_transform(docs_train_counts)

        # Using the fitted vectorizer and transformer, transform the test data
        docs_test_counts = self.movieVzer.transform(docs_test)
        docs_test_tfidf = self.movieTfmer.transform(docs_test_counts)

        # Now ready to build a classifier. 
        # We will use Multinomial Naive Bayes as our model


        # Train a Multinomial Naive Bayes classifier. Again, we call it "fitting"
        self.clf = MultinomialNB()
        self.clf.fit(docs_train_tfidf, y_train)


        # save the model
        filename = 'finalized_model.pkl'
        pickle.dump(self.clf, open(filename, 'wb'))

        # Predict the Test set results, find accuracy
        y_pred = self.clf.predict(docs_test_tfidf)

        # Accuracy
        print(sklearn.metrics.accuracy_score(y_test, y_pred))

        self.Categorize()

    def Categorize(self):
        # very short and fake movie reviews
        reviews_new = ['This movie was excellent', 'Absolute joy ride', 'It is pretty good', 
                      'This was certainly a movie', 'I fell asleep halfway through', 
                      "We can't wait for the sequel!!", 'I cannot recommend this highly enough', 'What the hell is this shit?']

        reviews_new_counts = self.movieVzer.transform(reviews_new)         # turn text into count vector
        reviews_new_tfidf = self.movieTfmer.transform(reviews_new_counts)  # turn into tfidf vector


        # have classifier make a prediction
        pred = self.clf.predict(reviews_new_tfidf)

        # print out results
        for review, category in zip(reviews_new, pred):
            print('%r => %s' % (review, self.movie.target_names[category]))

2 Answers:

Answer 0 (score: 1)

This happens because you should save not only the classifier but also the vectorizers. Otherwise you end up retraining the vectorizer on unseen data, which obviously does not contain exactly the same words as the training data, so the dimensionality changes. That is a problem because your classifier expects its input in a particular format.

So the fix for your problem is quite simple: before using the vectorizers, save them as pickle files as well, and load them together with the classifier.
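For illustration, a minimal sketch of this approach (the file names are made up; movieVzer, movieTfmer and clf are the fitted objects from the question):

import pickle

# save the fitted vectorizer, transformer, and classifier to separate files
with open('vectorizer.pkl', 'wb') as f:
    pickle.dump(movieVzer, f)
with open('transformer.pkl', 'wb') as f:
    pickle.dump(movieTfmer, f)
with open('classifier.pkl', 'wb') as f:
    pickle.dump(clf, f)

# later, load all three back and use them without refitting
with open('vectorizer.pkl', 'rb') as f:
    movieVzer = pickle.load(f)
with open('transformer.pkl', 'rb') as f:
    movieTfmer = pickle.load(f)
with open('classifier.pkl', 'rb') as f:
    clf = pickle.load(f)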

Note: to avoid saving and loading several objects, you can consider putting them into a pipeline, which is equivalent. A sketch of this alternative is shown below.
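A rough sketch of the pipeline alternative (not from the answer itself; it reuses docs_train and y_train from the question and uses the default tokenizer for brevity):

import pickle
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

# bundle vectorizer, transformer and classifier into a single estimator
text_clf = Pipeline([
    ('vect', CountVectorizer(min_df=2, max_features=5000)),
    ('tfidf', TfidfTransformer()),
    ('clf', MultinomialNB()),
])

# fit on the raw training text, then pickle the whole pipeline as one object
text_clf.fit(docs_train, y_train)
with open('pipeline.pkl', 'wb') as fout:
    pickle.dump(text_clf, fout)

# load it back and predict directly on raw text
with open('pipeline.pkl', 'rb') as fin:
    loaded = pickle.load(fin)
print(loaded.predict(['This movie was excellent']))

Because the pipeline carries its fitted vectorizer and transformer with it, the loaded object accepts raw strings directly, so there is no need to rebuild the feature extraction steps by hand.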

Answer 1 (score: 0)

Following MaximeKan's suggestion, I worked out a way to save all 3 objects.

Saving the model and the vectorizers

import pickle

filename = 'finalized_model.pkl'

# save the fitted vectorizer, transformer, and classifier together as a tuple
with open(filename, 'wb') as fout:
    pickle.dump((movieVzer, movieTfmer, clf), fout)

Loading the model and the vectorizers for use

import pickle

with open('finalized_model.pkl', 'rb') as f:
    movieVzer, movieTfmer, clf = pickle.load(f)
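
To complete the round trip, a short usage sketch (mirroring the Categorize method from the question) with the loaded objects:

# very short and fake movie reviews
reviews_new = ['This movie was excellent', 'I fell asleep halfway through']

# apply the same transforms as during training, now with the loaded objects
reviews_new_counts = movieVzer.transform(reviews_new)
reviews_new_tfidf = movieTfmer.transform(reviews_new_counts)

# predicted class indices (for the txt_sentoken data, e.g. 0 = neg, 1 = pos)
print(clf.predict(reviews_new_tfidf))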