Adding training data to scikit-learn

Date: 2016-07-22 21:05:43

Tags: python machine-learning scipy scikit-learn

I am looking at the training data available in sklearn here. According to the documentation, it contains documents from 20 classes based on a newsgroup collection, and it does a reasonably good job of classifying documents that belong to those categories. However, I need to add articles for more categories, such as cricket, football, nuclear physics, and so on.

I have a set of documents ready for each class, e.g. sports -> cricket, cooking -> french, etc. How do I add these documents and classes in sklearn so that the interface that currently returns 20 classes will return those 20 plus the new ones? If I need to do some training with SVM or Naive Bayes before adding them to the dataset, where would I do that?

1 Answer:

Answer 0 (score: 2):

Assuming your additional data has the following directory structure (if it doesn't, creating it should be your first step, since it will make your life considerably easier: you can then use the sklearn API to fetch the data, see here; a quick check of this is sketched right after the tree):

additional_data
      |
      |-> sports.cricket
                |
                |-> file1.txt
                |-> file2.txt
                |-> ...
      |
      |-> cooking.french
                |
                |-> file1.txt
                |-> ...
       ...
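
With this layout, load_files treats every sub-directory name as a class label, so no manual label bookkeeping is needed. A minimal check, assuming the example folders and path above:

from sklearn.datasets import load_files

check = load_files(container_path='/path/to/additional_data', encoding='utf-8')
print(check.target_names)  # folder names become the class labels, e.g. ['cooking.french', 'sports.cricket']
print(check.target[:5])    # numeric labels: indices into target_names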

Moving over to Python, load both datasets (assuming your additional data follows the format above and is rooted at /path/to/additional_data):

import os

import joblib
import numpy as np

from sklearn.datasets import fetch_20newsgroups
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Note if you have a pre-defined training/testing split in your additional data, you would merge them with the corresponding 'train' and 'test' subsets of 20news
news_data = fetch_20newsgroups(subset='all')
additional_data = load_files(container_path='/path/to/additional_data', encoding='utf-8')

# Both data objects are of type `Bunch` and therefore can be relatively straightforwardly merged

# Merge the two data files
'''
The Bunch object contains the following attributes: `dict_keys(['target_names', 'description', 'DESCR', 'target', 'data', 'filenames'])`
The interesting ones for our purposes are 'data' and 'filenames'
'''
all_filenames = np.concatenate((news_data.filenames, additional_data.filenames)) # filenames is a numpy array
all_data = news_data.data + additional_data.data # data is a standard python list
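
# Note: the numeric `target` arrays of the two Bunch objects are deliberately NOT concatenated here --
# each one is a set of 0-based indices into its own `target_names` list, so merging them directly
# would make the labels collide. Writing the merged documents back to disk and re-loading them with
# `load_files` (below) rebuilds a single, consistent label space covering all categories.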

merged_data_path = '/path/to/merged_data'

'''
The 20newsgroups data has filenames like '/path/to/scikit_learn_data/20news_home/20news-bydate-test/rec.sport.hockey/54367'
So depending on whether you want to keep the sub directory structure of the train/test splits or not, 
you would either need the last 2 or 3 parts of the path
'''
for content, f in zip(all_data, all_filenames):
    # extract sub path
    sub_path, filename = f.split(os.sep)[-2:]

    # Create output directory if not exists
    p = os.path.join(merged_data_path, sub_path)
    if (not os.path.exists(p)):
        os.makedirs(p)

    # Write data to file
    with open(os.path.join(p, filename), 'w', encoding='utf-8') as out_file:
        out_file.write(content)

# Now that everything is stored at `merged_data_path`, we can use `load_files` to fetch the dataset again, which now includes everything from 20newsgroups and your additional data
all_data = load_files(container_path=merged_data_path, encoding='utf-8')
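
# Sanity check (an added suggestion, not part of the original answer): the merged dataset should
# now expose the original 20 newsgroup categories plus the additional ones
print(len(all_data.target_names))  # expected: 20 + number of additional categories
print(all_data.target_names)       # the sub-directory names, sorted alphabetically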

'''
all_data is yet another `Bunch` object:
    * `data` contains the data
    * `target_names` contains the label names
    * `target` contains the labels in numeric format
    * `filenames` contains the paths of each individual document

thus, running a classifier over the data is straightforward
'''
vec = CountVectorizer()
X = vec.fit_transform(all_data.data)

# We want to create a train/test split for learning and evaluating a classifier (supposing we haven't created a pre-defined train/test split encoded in the directory structure)
X_train, X_test, y_train, y_test = train_test_split(X, all_data.target, test_size=0.2)

# Create & fit the MNB model
mnb = MultinomialNB()
mnb.fit(X_train, y_train)

# Evaluate Accuracy
y_predicted = mnb.predict(X_test)

print('Accuracy: {}'.format(accuracy_score(y_test, y_predicted)))
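
# Optionally (an addition to the original answer), per-class metrics show how the newly added
# categories perform compared to the original newsgroups
from sklearn.metrics import classification_report
print(classification_report(y_test, y_predicted, target_names=all_data.target_names))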

# Alternatively, the vectorisation and learning can be packaged into a pipeline and serialised for later use
pipeline = Pipeline([('vec', CountVectorizer()), ('mnb', MultinomialNB())])

# Run the vectorizer and train the classifier on all available data
pipeline.fit(all_data.data, all_data.target)

# Serialise the classifier to disk
joblib.dump(pipeline, '/path/to/model_zoo/mnb_pipeline.joblib')

# If you get some more data later on, you can deserialise the model and run them through the pipeline again
p = joblib.load('/path/to/model_zoo/mnb_pipeline.joblib')

docs_new = ['God is love', 'OpenGL on the GPU is fast']

y_predicted = p.predict(docs_new)
print('Predicted labels: {}'.format(np.array(all_data.target_names)[y_predicted]))