I am using the sklearn.grid_search.RandomizedSearchCV class from scikit-learn 0.14.1, and I get an error when running the following code:
from sklearn.datasets import load_svmlight_file
from sklearn import preprocessing, svm, grid_search
import scipy.stats

X, y = load_svmlight_file(inputfile)
min_max_scaler = preprocessing.MinMaxScaler()
X_scaled = min_max_scaler.fit_transform(X.toarray())
# discrete parameter values must be given as a list; continuous ones as distributions
parameters = {'kernel': ['rbf'], 'C': scipy.stats.expon(scale=100), 'gamma': scipy.stats.expon(scale=.1)}
svr = svm.SVC()
classifier = grid_search.RandomizedSearchCV(svr, parameters, n_jobs=8)
classifier.fit(X_scaled, y)
When I set the n_jobs parameter to anything greater than 1, I get the following error output:
Traceback (most recent call last):
File "./svm_training.py", line 185, in <module>
main(sys.argv[1:])
File "./svm_training.py", line 63, in main
gridsearch(inputfile, kerneltype, parameterfile)
File "./svm_training.py", line 85, in gridsearch
classifier.fit(X_scaled, y)
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.14.1-py2.7-linux-x86_64.egg/sklearn/grid_search.py", line 860, in fit
return self._fit(X, y, sampled_params)
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.14.1-py2.7-linux-x86_64.egg/sklearn/grid_search.py", line 493, in _fit
for parameters in parameter_iterable
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.14.1-py2.7-linux-x86_64.egg/sklearn/externals/joblib/parallel.py", line 519, in __call__
self.retrieve()
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.14.1-py2.7-linux-x86_64.egg/sklearn/externals/joblib/parallel.py", line 419, in retrieve
self._output.append(job.get())
File "/usr/lib/python2.7/multiprocessing/pool.py", line 558, in get
raise self._value
SystemError: NULL result without error in PyObject_Call
It seems to be related to Python's multiprocessing functionality, but I don't know how to work around it other than implementing the parallelization of the parameter search by hand. Has anyone run into a similar problem while trying to parallelize a randomized parameter search, and were you able to solve it?
Answer 0 (score: 1)
It turns out the problem was the use of MinMaxScaler. Since MinMaxScaler only accepts dense arrays, I was converting the sparse representation of the feature vectors to a dense array before scaling. Because the feature vectors have thousands of elements, my assumption is that the dense arrays caused a memory error when the parameter search was parallelized. Instead, I switched to StandardScaler, which accepts sparse arrays as input and should be a better fit for my problem space anyway.
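A minimal sketch of the fix described above, using toy data (the asker's actual data comes from load_svmlight_file). Note that StandardScaler must be constructed with with_mean=False for sparse input, since centering would densify the matrix:

```python
import numpy as np
import scipy.sparse as sp
from sklearn.preprocessing import StandardScaler

# A small sparse feature matrix standing in for the svmlight data
X = sp.csr_matrix(np.array([[0., 1., 2.],
                            [3., 0., 0.],
                            [0., 5., 6.]]))

# with_mean=False skips mean-centering, which would destroy sparsity;
# the scaler then only divides each feature by its standard deviation.
scaler = StandardScaler(with_mean=False)
X_scaled = scaler.fit_transform(X)

# The result is still sparse -- no .toarray() call needed,
# so the dense-memory blowup never happens.
print(type(X_scaled))
```

The scaled matrix can then be passed to RandomizedSearchCV directly, keeping the per-worker memory footprint small when n_jobs > 1.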