I am trying to do hyperparameter tuning for a neural network built with Keras. Here is my code, with the line that causes the error marked by a comment:
from sklearn.cross_validation import StratifiedKFold, cross_val_score
from sklearn import grid_search
from sklearn.metrics import classification_report
import multiprocessing
from keras.models import Sequential
from keras.layers import Dense
from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils
from keras.wrappers.scikit_learn import KerasClassifier
import numpy as np
def tuning(X_train, Y_train, X_test, Y_test):
    in_size = X_train.shape[1]
    num_cores = multiprocessing.cpu_count()

    model = Sequential()
    model.add(Dense(in_size, input_dim=in_size, init='uniform', activation='relu'))
    model.add(Dense(8, init='uniform', activation='relu'))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

    batch_size = [10, 20, 40, 60, 80, 100]
    epochs = [10, 20]
    param_grid = dict(batch_size=batch_size, nb_epoch=epochs)

    k_model = KerasClassifier(build_fn=model, verbose=0)
    clf = grid_search.GridSearchCV(estimator=k_model, param_grid=param_grid,
                                   cv=StratifiedKFold(Y_train, n_folds=10, shuffle=True, random_state=1234),
                                   scoring="accuracy", verbose=100, n_jobs=num_cores)
    clf.fit(X_train, Y_train)  # ERROR HERE

    print("Best parameters set found on development set:")
    print()
    print(clf.best_params_)
    print()
    print("Grid scores on development set:")
    print()
    for params, mean_score, scores in clf.grid_scores_:
        print("%0.3f (+/-%0.03f) for %r"
              % (mean_score, scores.std() * 2, params))
    print()
    print("Detailed classification report:")
    print()
    print("The model is trained on the full development set.")
    print("The scores are computed on the full evaluation set.")
    print()
    y_true, y_pred = Y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred))
    print()
Here is the error traceback:
clf.fit(X_train, Y_train)
File "/usr/local/lib/python2.7/dist-packages/sklearn/grid_search.py", line 804, in fit
return self._fit(X, y, ParameterGrid(self.param_grid))
File "/usr/local/lib/python2.7/dist-packages/sklearn/grid_search.py", line 553, in _fit
for parameters in parameter_iterable
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 800, in __call__
while self.dispatch_one_batch(iterator):
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 658, in dispatch_one_batch
self._dispatch(tasks)
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 566, in _dispatch
job = ImmediateComputeBatch(batch)
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 180, in __init__
self.results = batch()
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 72, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py", line 1531, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)
File "/usr/local/lib/python2.7/dist-packages/keras/wrappers/scikit_learn.py", line 135, in fit
**self.filter_sk_params(self.build_fn.__call__))
TypeError: __call__() takes at least 2 arguments (1 given)
Am I missing something? The grid search works fine for random forest, SVM and logistic regression; I only hit this problem with the neural network.
Answer 0: (score: 4)
The error here indicates that build_fn must take 2 arguments, as dictated by the number of parameters in param_grid. So you need to explicitly define a new function and use it as build_fn=make_model:
def make_model(batch_size, nb_epoch):
    # in_size must be available here, e.g. defined in the enclosing scope
    model = Sequential()
    model.add(Dense(in_size, input_dim=in_size, init='uniform', activation='relu'))
    model.add(Dense(8, init='uniform', activation='relu'))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
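For reference, a minimal sketch of how this build function could be wired into the grid search, assuming X_train, Y_train, in_size and num_cores are defined as in the question and the same legacy Keras/scikit-learn versions are used:

# Pass the function itself (not a model instance) as build_fn
k_model = KerasClassifier(build_fn=make_model, verbose=0)
param_grid = dict(batch_size=[10, 20, 40, 60, 80, 100], nb_epoch=[10, 20])
clf = grid_search.GridSearchCV(estimator=k_model, param_grid=param_grid,
                               cv=StratifiedKFold(Y_train, n_folds=10, shuffle=True, random_state=1234),
                               scoring="accuracy", verbose=100, n_jobs=num_cores)
clf.fit(X_train, Y_train)
print(clf.best_params_)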
Also take a look at keras/examples/mnist_sklearn_wrapper.py, where GridSearchCV is used for hyperparameter search.
Answer 1: (score: 0)
I think you may be using scikit-learn 0.16 or earlier. I ran into the same problem just yesterday, and after some troubleshooting I found that upgrading scikit-learn from 0.16 to 0.18 solved it.
clf.fit(X_train, Y_train) #SHOULD WORK with scikit-learn 0.18
Another thing that changed between 0.16 and 0.18 is that GridSearchCV is no longer imported from sklearn.grid_search but from sklearn.model_selection.
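A minimal sketch of the 0.18-style imports and usage, assuming k_model, param_grid, num_cores, X_train and Y_train are defined as in the question:

# scikit-learn >= 0.18: GridSearchCV and StratifiedKFold live in model_selection
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# In 0.18 the splitter takes n_splits and no labels; the labels are passed to fit()
clf = GridSearchCV(estimator=k_model, param_grid=param_grid,
                   cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=1234),
                   scoring="accuracy", verbose=100, n_jobs=num_cores)
clf.fit(X_train, Y_train)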
Answer 2: (score: 0)
I hope you have solved this by now.

a) I think the problem is that you are not returning the model at the end of your wrapper function tuning(). Add return model.

b) In k_model = KerasClassifier(build_fn=model, verbose=0), given how your function is named, I think it should be build_fn=tuning.

c) The signature def tuning(X_train, Y_train, X_test, Y_test) is incorrect. The arguments passed to this function are the ones that get replaced on every iteration, i.e. the hyperparameters you specified in param_grid. Use def tuning(batch_size, nb_epoch) instead.

I hope this helps!
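Putting points a) to c) together, a minimal sketch of the restructured code, assuming in_size, param_grid, X_train, Y_train, X_test and Y_test are defined as in the question and the same legacy APIs are used:

def tuning(batch_size, nb_epoch):
    # Only builds and compiles the network; the grid-searched hyperparameters
    # are routed to this function (and to fit()) by KerasClassifier
    model = Sequential()
    model.add(Dense(in_size, input_dim=in_size, init='uniform', activation='relu'))
    model.add(Dense(8, init='uniform', activation='relu'))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

k_model = KerasClassifier(build_fn=tuning, verbose=0)
clf = grid_search.GridSearchCV(estimator=k_model, param_grid=param_grid, scoring="accuracy")
clf.fit(X_train, Y_train)
print(classification_report(Y_test, clf.predict(X_test)))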