Making predictions in Scikit-Learn with a saved, trained classifier

Date: 2015-10-07 13:06:03

Tags: python machine-learning scikit-learn classification

I have written a classifier for tweets in Python, and I then saved it to disk in the .pkl format so that I can run it again and again without having to train it each time. Below is the training code. Now suppose I have another Python file and I want to classify a tweet there: how do I proceed?

import pandas
import re
from sklearn.feature_extraction import FeatureHasher

from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2

from sklearn import cross_validation

from sklearn.externals import joblib


#read the dataset of tweets

header_row=['sentiment','tweetid','date','query', 'user', 'text']
train = pandas.read_csv("training.data.csv",names=header_row)

#keep only the right columns

train = train[["sentiment","text"]]

#remove punctuation, special characters and numbers, and lower-case the text

def remove_spch(text):

    return re.sub("[^a-z]", ' ', text.lower())

train['text'] = train['text'].apply(remove_spch)


#Feature Hashing

def tokens(doc):
    """Extract tokens from doc.

    This uses a simple regex to break strings into tokens.
    """
    return (tok.lower() for tok in re.findall(r"\w+", doc))

n_features = 2**18
hasher = FeatureHasher(n_features=n_features, input_type="string", non_negative=True)
X = hasher.transform(tokens(d) for d in train['text'])

y = train['sentiment']

X_new = SelectKBest(chi2, k=20000).fit_transform(X, y)

a_train, a_test, b_train, b_test = cross_validation.train_test_split(X_new, y, test_size=0.2, random_state=42)

from sklearn.ensemble import RandomForestClassifier 

classifier=RandomForestClassifier(n_estimators=10)                  
classifier.fit(a_train.toarray(), b_train)                            
prediction = classifier.predict(a_test.toarray()) 

#Export the trained model to load it in another project

joblib.dump(classifier, 'my_model.pkl', compress=9)

I can get as far as:

from sklearn.externals import joblib
model_clone = joblib.load('my_model.pkl')
mytweet = 'Uh wow:@medium is doing a crowdsourced data-driven investigation tracking down a disappeared refugee boat'

I can replicate the same preprocessing to feed the tweet into the prediction model, but I hit the problem that I cannot compute the best 20,000 features. To use SelectKBest you need both features and labels. Since I want to predict the labels, I cannot use SelectKBest. So, how can I get past this problem and continue with the prediction?

1 answer:

Answer 0 (score: 5)

I second @EdChum's comment:

    you build your model by training it on data which hopefully is representative enough to cope with unseen data

In practice, this means that you need to apply both FeatureHasher and SelectKBest to your new data with transform only (never fit or fit_transform). Re-fitting FeatureHasher on the new data is wrong, because in general it will produce different features.
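This is exactly what resolves the asker's concern about labels: SelectKBest needs labels only at fit time, while transform reuses the feature mask learned during training. A minimal sketch with toy data (the arrays here are assumptions for illustration, not the tweet features from the question):

```python
# Toy sketch: SelectKBest needs labels only when fitted;
# transform() on new, unlabeled data reuses the learned feature mask.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

X_train = np.array([[1, 0, 3], [0, 2, 1], [2, 1, 0], [0, 3, 2]])
y_train = [1, 0, 1, 0]

# fit once, at training time, with labels available
selector = SelectKBest(chi2, k=2).fit(X_train, y_train)

# at prediction time: transform only -- no labels needed
X_new = np.array([[1, 1, 1]])
print(selector.transform(X_new).shape)  # (1, 2)
```

The same transform-only rule applies to the fitted FeatureHasher, so a pickled selector (or hasher) can be reloaded and applied to unlabeled tweets directly.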

To do this, either:

  • pickle FeatureHasher and SelectKBest separately,

or (better):

  • make a Pipeline of FeatureHasher, SelectKBest, and RandomForestClassifier, and pickle the whole pipeline. You can then load that pipeline and call predict on the new data.
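The pipeline option can be sketched as follows. This is a toy example, not the question's full tweet setup: the token lists, labels, and the file name my_pipeline.pkl are assumptions, and it uses alternate_sign=False (the modern replacement for the non_negative=True argument in the question's code) plus the standalone joblib package, since sklearn.externals.joblib was removed from recent scikit-learn releases.

```python
# Sketch: bundle hashing, feature selection and the classifier into one
# Pipeline, pickle it whole, then reload it and predict on raw tokens.
import joblib
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction import FeatureHasher
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier

# toy training data: each sample is a list of tokens (as tokens() produces)
train_tokens = [["good", "happy"], ["bad", "sad"], ["great", "fun"], ["awful", "bad"]]
train_labels = [1, 0, 1, 0]

pipeline = Pipeline([
    # alternate_sign=False keeps hashed counts non-negative, as chi2 requires
    ("hasher", FeatureHasher(n_features=2**10, input_type="string",
                             alternate_sign=False)),
    ("select", SelectKBest(chi2, k=4)),
    ("clf", RandomForestClassifier(n_estimators=10, random_state=42)),
])
pipeline.fit(train_tokens, train_labels)

# persist the whole fitted pipeline in one file
joblib.dump(pipeline, "my_pipeline.pkl", compress=9)

# ...later, in another file: load and predict on new, unlabeled tweets
model = joblib.load("my_pipeline.pkl")
new_tweet_tokens = [["happy", "fun"]]
print(model.predict(new_tweet_tokens))
```

Because the pipeline stores the fitted hasher and selector alongside the classifier, predict internally calls transform on the first two steps, so no labels are needed at prediction time.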