How to store a TfidfVectorizer for future use in scikit-learn?

Time: 2015-09-24 15:14:08

Tags: python python-3.x scikit-learn tf-idf joblib

I have a TfidfVectorizer that vectorizes a collection of articles, followed by feature selection.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(corpus)
selector = SelectKBest(chi2, k=5000)
X_train_sel = selector.fit_transform(X_train, y_train)

Now I want to store these and use them in other programs, without re-running TfidfVectorizer() and the feature selector on the training dataset. How do I do that? I know how to persist a model with joblib, but I'm wondering whether this is the same thing as persisting a model.

3 answers:

Answer 0 (score: 9):

You can simply use the built-in pickle library:

import pickle

pickle.dump(vectorizer, open("vectorizer.pickle", "wb"))
pickle.dump(selector, open("selector.pickle", "wb"))

and to load them:

vectorizer = pickle.load(open("vectorizer.pickle", "rb"))
selector = pickle.load(open("selector.pickle", "rb"))

Pickle serializes the objects to disk and loads them back into memory whenever you need them.

pickle lib docs
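As a minimal sketch of how the reloaded objects might then be used (assuming vectorizer and selector were fitted on the training corpus as in the question, and the file names are the ones used above; the new documents are hypothetical):

import pickle

# reload the fitted vectorizer and feature selector
with open("vectorizer.pickle", "rb") as f:
    vectorizer = pickle.load(f)
with open("selector.pickle", "rb") as f:
    selector = pickle.load(f)

# transform new documents with the restored objects
new_docs = ["an unseen article about machine learning"]
X_new = vectorizer.transform(new_docs)
X_new_sel = selector.transform(X_new)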

Answer 1 (score: 3):

Here is the answer using joblib:

import joblib

joblib.dump(vectorizer, 'vectorizer.pkl')
joblib.dump(selector, 'selector.pkl')

Later, I can load them and I'm ready to go:

vectorizer = joblib.load('vectorizer.pkl')
selector = joblib.load('selector.pkl')

test = selector.transform(vectorizer.transform(['this is test']))
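A related option, sketched here only as an alternative (not what the question's code does): wrap the vectorizer and selector in a scikit-learn Pipeline so that a single object is dumped and loaded. The file name 'tfidf_select.pkl' is an arbitrary choice; corpus and y_train are the training data from the question.

import joblib
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# fit both steps as one object
pipe = Pipeline([('tfidf', TfidfVectorizer()),
                 ('select', SelectKBest(chi2, k=5000))])
X_train_sel = pipe.fit_transform(corpus, y_train)

# persist and restore the whole pipeline in one call
joblib.dump(pipe, 'tfidf_select.pkl')  # arbitrary file name
pipe = joblib.load('tfidf_select.pkl')
X_new_sel = pipe.transform(['this is test'])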

Answer 2 (score: 3):

"Making an object persistent" basically means dumping the binary code that represents the object in memory into a file on the hard drive, so that later, in your program or any other program, the object can be reloaded from that file back into memory.

Either joblib, which ships with scikit-learn, or the stdlib modules pickle and cPickle will do the job. I tend to prefer cPickle because it is noticeably faster. Using IPython's %timeit command:

>>> import pickle, cPickle, joblib
>>> from sklearn.feature_extraction.text import TfidfVectorizer as TFIDF
>>> t = TFIDF()
>>> t.fit_transform(['hello world', 'this is a test'])

# generic serializer - deserializer test
>>> def dump_load_test(tfidf, serializer):
...:    with open('vectorizer.bin', 'wb') as f:
...:        serializer.dump(tfidf, f)
...:    with open('vectorizer.bin', 'rb') as f:
...:        return serializer.load(f)

# joblib has a slightly different interface
>>> def joblib_test(tfidf):
...:    joblib.dump(tfidf, 'tfidf.bin')
...:    return joblib.load('tfidf.bin')

# Now, time it!
>>> %timeit joblib_test(t)
100 loops, best of 3: 3.09 ms per loop

>>> %timeit dump_load_test(t, pickle)
100 loops, best of 3: 2.16 ms per loop

>>> %timeit dump_load_test(t, cPickle)
1000 loops, best of 3: 879 µs per loop
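The timings above were taken on Python 2. Since the question is tagged python-3.x, here is a sketch of the Python 3 equivalent: cPickle no longer exists as a separate module, pickle already uses the C implementation when available, and a recent protocol can be requested explicitly.

import pickle

# Python 3: plain pickle, binary file mode, highest available protocol
with open('vectorizer.bin', 'wb') as f:
    pickle.dump(t, f, protocol=pickle.HIGHEST_PROTOCOL)
with open('vectorizer.bin', 'rb') as f:
    t = pickle.load(f)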

Now, if you want to store multiple objects in a single file, you can easily create a data structure to hold them and then dump that data structure itself. This works with a tuple, list, or dict. Using the example from your question:

import cPickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# train
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(corpus)
selector = SelectKBest(chi2, k=5000)
X_train_sel = selector.fit_transform(X_train, y_train)

# dump as a dict
data_struct = {'vectorizer': vectorizer, 'selector': selector}
# use the 'with' keyword to automatically close the file after the dump
with open('storage.bin', 'wb') as f: 
    cPickle.dump(data_struct, f)

Later, or in another program, the following statements will bring the data structure back into your program's memory:

import cPickle

# reload
with open('storage.bin', 'rb') as f:
    data_struct = cPickle.load(f)
    vectorizer, selector = data_struct['vectorizer'], data_struct['selector']

# do stuff...
vectors = vectorizer.transform(...)
vec_sel = selector.transform(vectors)