In the script below, I'm finding that the jobs launched by GridSearchCV seem to hang.
import json
import pandas as pd
import numpy as np
import unicodedata
import re
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import SGDClassifier
import sklearn.cross_validation as CV
from sklearn.grid_search import GridSearchCV
from nltk.stem import WordNetLemmatizer
# Seed for randomization. Set to some definite integer for debugging and set to None for production
seed = None
### Text processing functions ###
def normalize(string):  # Remove diacritics and the like
    return "".join(ch.lower() for ch in unicodedata.normalize('NFD', string) if not unicodedata.combining(ch))

wnl = WordNetLemmatizer()

def tokenize(string):  # Ignore special characters and punctuation
    return [wnl.lemmatize(token) for token in re.compile(r'\w\w+').findall(string)]

def ngrammer(tokens):  # Get all grams up to max_n from each ingredient
    max_n = 2
    return [":".join(tokens[idx:idx+n]) for n in np.arange(1, 1 + min(max_n, len(tokens))) for idx in range(len(tokens) + 1 - n)]
print("Importing training data...")
with open('/Users/josh/dev/kaggle/whats-cooking/data/train.json', 'rt') as file:
    recipes_train_json = json.load(file)
# Build the grams for the training data
print('\nBuilding n-grams from input data...')
for recipe in recipes_train_json:
    recipe['grams'] = [term for ingredient in recipe['ingredients'] for term in ngrammer(tokenize(normalize(ingredient)))]
# Build vocabulary from training data grams.
vocabulary = list({gram for recipe in recipes_train_json for gram in recipe['grams']})
# Stuff everything into a dataframe.
ids_index = pd.Index([recipe['id'] for recipe in recipes_train_json],name='id')
recipes_train = pd.DataFrame([{'cuisine': recipe['cuisine'], 'ingredients': " ".join(recipe['grams'])} for recipe in recipes_train_json],columns=['cuisine','ingredients'], index=ids_index)
# Extract data for fitting
fit_data = recipes_train['ingredients'].values
fit_target = recipes_train['cuisine'].values
# extracting numerical features from the ingredient text
feature_ext = Pipeline([('vect', CountVectorizer(vocabulary=vocabulary)),
                        ('tfidf', TfidfTransformer(use_idf=True)),
                        ('svd', TruncatedSVD(n_components=1000))])
lsa_fit_data = feature_ext.fit_transform(fit_data)
# Build SGD Classifier
clf = SGDClassifier(random_state=seed)
# Hyperparameter grid for GridSearchCV.
parameters = {
    'alpha': np.logspace(-6, -2, 5),
}
# Init GridSearchCV with k-fold CV object
cv = CV.KFold(lsa_fit_data.shape[0], n_folds=3, shuffle=True, random_state=seed)
gs_clf = GridSearchCV(
    estimator=clf,
    param_grid=parameters,
    n_jobs=-1,
    cv=cv,
    scoring='accuracy',
    verbose=2
)
# Fit on training data
print("\nPerforming grid search over hyperparameters...")
gs_clf.fit(lsa_fit_data, fit_target)
The console output is:
Importing training data...
Building n-grams from input data...
Performing grid search over hyperparameters...
Fitting 3 folds for each of 5 candidates, totalling 15 fits
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=0.0001 ....................................................
[CV] alpha=0.0001 ....................................................
It then just hangs there. If I set n_jobs=1 in GridSearchCV, the script completes as expected, with this output:
Importing training data...
Building n-grams from input data...
Performing grid search over hyperparameters...
Fitting 3 folds for each of 5 candidates, totalling 15 fits
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 - 6.5s
[Parallel(n_jobs=1)]: Done 1 jobs | elapsed: 6.6s
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 - 6.6s
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 - 6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 - 6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 - 6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 - 6.6s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 - 6.6s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 - 6.7s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 - 6.7s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 - 7.0s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 - 6.8s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 - 6.6s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 - 6.7s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 - 7.3s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 - 7.1s
[Parallel(n_jobs=1)]: Done 15 out of 15 | elapsed: 1.7min finished
The single-threaded run finishes quickly, so I'm sure I'm giving the parallel case plenty of time to do the computation.
Environment specs: MacBook Pro (15-inch, Mid 2010), 2.4 GHz Intel Core i5, 8 GB 1067 MHz DDR3, OS X 10.10.5, Python 3.4.3, IPython 3.2.0, numpy v1.9.3, scipy 0.16.0, scikit-learn v0.16.1 (Python and packages from the Anaconda distribution).
Some other comments:
I use n_jobs=-1 with GridSearchCV on this machine all the time without problems, so my platform does support the feature. It usually has 4 jobs going at a time, since I have 4 cores on this machine (2 physical, but 4 "virtual cores" thanks to hyperthreading). But unless I'm misreading the console output, in this case it has 8 jobs going without any of them returning. Watching CPU usage in Activity Monitor in real time, 4 jobs launch, work a bit, then finish (or die?), then 4 more launch, work a bit, and then sit completely idle but stick around.
At no point do I see significant memory pressure. The main process tops out at about 1 GB of real memory, the child processes at around 600 MB each. By the time they hang, their real memory usage is negligible.
If I remove the TruncatedSVD step from the feature-extraction pipeline, the script works fine with multiple jobs. Note, though, that this pipeline runs before the grid search and is not part of the GridSearchCV jobs.
This script is for the Kaggle competition What's Cooking?, so if you want to try running it with the same data I'm using, you can grab it from there. The data comes as an array of JSON objects. Each object represents a recipe and contains a list of text snippets, which are the ingredients. Since each sample is a collection of documents rather than a single document, I ended up writing some of my own n-gramming and tokenization logic, because I couldn't figure out how to get scikit-learn's built-in transformers to do what I wanted. I doubt any of this matters, but just an FYI.
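For reference, each object in train.json has roughly this shape (the field names come straight from the script above; the values here are made up):

{"id": 12345, "cuisine": "greek", "ingredients": ["romaine lettuce", "black olives", "feta cheese crumbles"]}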
I usually run scripts in the IPython CLI with %run, but I ran this one directly with python (3.4.3) from the OS X bash terminal.
Answer 0 (score: 11)
With n_jobs > 1 this may be a problem with the multiprocessing that GridSearchCV uses. So instead of multiprocessing, you can try switching to multithreading to see whether it works properly.
from sklearn.externals.joblib import parallel_backend

clf = GridSearchCV(...)
with parallel_backend('threading'):
    clf.fit(x_train, y_train)
I had the same issue with my estimator using GridSearchCV with n_jobs > 1, and using this it runs fine across n_jobs values.
PS: I'm not sure whether "threading" has the same advantages as "multiprocessing" for all estimators. But in theory, "threading" is not a great choice if your estimator is limited by the GIL; if the estimator is cython/numpy based, however, it should do better than "multiprocessing".
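Applied to the question's script, the threading backend would look something like this sketch (assuming a scikit-learn new enough to ship parallel_backend; the answer above was tested on 0.19.1):

from sklearn.externals.joblib import parallel_backend

# Run the grid search with threads instead of forked worker processes
with parallel_backend('threading'):
    gs_clf.fit(lsa_fit_data, fit_target)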
Tested on:
macOS: 10.12.6
Python: 3.6
numpy==1.13.3
pandas==0.21.0
scikit-learn==0.19.1
Answer 1 (score: 1)
I believe I had a similar problem, and the culprit was a sudden spike in memory usage. The process would try to allocate memory and die immediately because there wasn't enough available.
If you have access to a machine with more memory (say 128-256 GB), it's worth checking there with the same or a smaller number of jobs (n_jobs=4). That's how I solved this: I just moved my script to a huge server.
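If a huge server isn't an option, one related knob worth trying (my suggestion, not something this answer tested) is GridSearchCV's pre_dispatch parameter, which caps how many jobs are dispatched at once and hence how many copies of the data can be in flight at a time (the default is '2*n_jobs'):

gs_clf = GridSearchCV(
    estimator=clf,
    param_grid=parameters,
    n_jobs=4,
    pre_dispatch='n_jobs',  # dispatch at most n_jobs tasks at a time
    cv=cv,
    scoring='accuracy',
    verbose=2
)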
Answer 2 (score: -1)
I was able to resolve a similar problem by explicitly setting the random seed with np.random.seed(0).
My problem was caused by running GridSearchCV multiple times, so this may not apply directly to your use case.
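In the question's script, that would mean seeding NumPy's global RNG once near the top, something like:

import numpy as np

np.random.seed(0)  # fix the global RNG before any CV splitting or fitting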