spaCy and scikit-learn vectorizers

Time: 2017-07-19 16:38:25

Tags: python scikit-learn nlp spacy

I wrote a lemma tokenizer for scikit-learn using spaCy, based on their example, and it works fine on its own:

import spacy
from sklearn.feature_extraction.text import TfidfVectorizer

class LemmaTokenizer(object):
    def __init__(self):
        self.spacynlp = spacy.load('en')
    def __call__(self, doc):
        nlpdoc = self.spacynlp(doc)
        nlpdoc = [token.lemma_ for token in nlpdoc if (len(token.lemma_) > 1) or (token.lemma_.isalnum()) ]
        return nlpdoc

vect = TfidfVectorizer(tokenizer=LemmaTokenizer())
vect.fit(['Apples and oranges are tasty.'])
print(vect.vocabulary_)
### prints {'apple': 1, 'and': 0, 'tasty': 4, 'be': 2, 'orange': 3}

However, using it inside GridSearchCV produces an error. Below is a self-contained example:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV

wordvect = TfidfVectorizer(analyzer='word', strip_accents='ascii', tokenizer=LemmaTokenizer())
classifier = OneVsRestClassifier(SVC(kernel='linear'))
pipeline = Pipeline([('vect', wordvect), ('classifier', classifier)])
parameters = {'vect__min_df': [1, 2], 'vect__max_df': [0.7, 0.8], 'classifier__estimator__C': [0.1, 1, 10]}
gs_clf = GridSearchCV(pipeline, parameters, n_jobs=7, verbose=1)

from sklearn.datasets import fetch_20newsgroups
categories = ['comp.graphics', 'rec.sport.baseball']
newsgroups = fetch_20newsgroups(remove=('headers', 'footers', 'quotes'), shuffle=True, categories=categories)
X = newsgroups.data
y = newsgroups.target
gs_clf = gs_clf.fit(X, y)

### AttributeError: 'spacy.tokenizer.Tokenizer' object has no attribute '_prefix_re'

The error does not occur when I load spacy outside the tokenizer's constructor and then run GridSearchCV:

spacynlp = spacy.load('en')

class LemmaTokenizer(object):
    def __call__(self, doc):
        nlpdoc = spacynlp(doc)
        nlpdoc = [token.lemma_ for token in nlpdoc if (len(token.lemma_) > 1) or (token.lemma_.isalnum()) ]
        return nlpdoc

But this means that each of the n_jobs workers in my GridSearchCV will access and call the same spacynlp object; it is shared across those jobs, which leaves the questions:

  1. Is the spacynlp object from spacy.load('en') safe to be used by multiple jobs in GridSearchCV?
  2. Is this the right way to implement calls to spacy inside a scikit-learn tokenizer?

1 answer:

Answer 0: (score: 2)

You are wasting time by running spaCy for every parameter setting in the grid, and the memory overhead is significant too. You should run all your data through spaCy once, save it to disk, and then use a simplified vectorizer that reads in the pre-lemmatized data. Look at the tokenizer, analyzer and preprocessor parameters of TfidfVectorizer. There are plenty of examples on Stack Overflow showing how to build a custom vectorizer.
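
For concreteness, a minimal sketch of that approach, assuming the 20 newsgroups setup from the question; the lemmatize helper and the choice of str.split as the tokenizer are only illustrative, not part of the original answer:

import spacy
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in older releases, as in the question

# Run spaCy exactly once over the whole corpus, outside the grid search.
nlp = spacy.load('en')  # model name from the question; newer spaCy releases use e.g. 'en_core_web_sm'

def lemmatize(doc):
    # Same filtering as the question's LemmaTokenizer, but joined back into a plain string.
    return ' '.join(token.lemma_ for token in nlp(doc)
                    if (len(token.lemma_) > 1) or (token.lemma_.isalnum()))

categories = ['comp.graphics', 'rec.sport.baseball']
newsgroups = fetch_20newsgroups(remove=('headers', 'footers', 'quotes'), shuffle=True, categories=categories)
X = [lemmatize(doc) for doc in newsgroups.data]  # this is the point where the lemmas could be saved to disk
y = newsgroups.target

# The vectorizer inside the pipeline now only splits on whitespace, so no spaCy
# object has to be pickled or re-run in the GridSearchCV worker processes.
wordvect = TfidfVectorizer(analyzer='word', tokenizer=str.split)
classifier = OneVsRestClassifier(SVC(kernel='linear'))
pipeline = Pipeline([('vect', wordvect), ('classifier', classifier)])
parameters = {'vect__min_df': [1, 2], 'vect__max_df': [0.7, 0.8], 'classifier__estimator__C': [0.1, 1, 10]}

gs_clf = GridSearchCV(pipeline, parameters, n_jobs=7, verbose=1)
gs_clf = gs_clf.fit(X, y)

Because the pipeline no longer contains a spaCy object, it pickles cleanly for the parallel jobs, and spaCy's comparatively slow tagging runs once instead of once per grid point.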