I have a bigger problem: I want to use sklearn.model_selection.GridSearchCV
to search over hyperparameters to find the "best" set for a neural-network fitting problem.
I have tried many examples of this kind of code; the simple ones seem to work as expected, but more complex code returns 0 for grid_result.cv_results_['mean_test_score'],
unless I have misunderstood mean_test_score (which I believe is related to the accuracy
returned by keras).
Here is a greatly simplified version of the actual code, which essentially tries to train a NN on a set of multi-exponentials:

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
import numpy as np
def create_XY_exp(Ndata):
    num = np.arange(10, 310, 10)
    denom = np.array([10, 80, 120])
    A = np.exp(-np.outer(num, 1/denom))
    X = np.zeros((Ndata, len(num)))
    Y = np.zeros((Ndata, len(denom)))
    for ii in range(Ndata):
        c = np.random.random((3))
        si = np.dot(A, c)
        e1 = np.random.normal(0, 0.03, len(num))
        X[ii] = si + e1
        Y[ii] = c
    return X, Y
def create_model(optimizer='adam', init='random_uniform', nnodes=8):
    # create model
    model = Sequential()
    model.add(Dense(nnodes, input_dim=30, kernel_initializer=init, activation='relu'))
    model.add(Dense(nnodes, kernel_initializer=init, activation='relu'))
    model.add(Dense(nnodes, kernel_initializer=init, activation='relu'))
    model.add(Dense(3, kernel_initializer=init, activation='sigmoid'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
    return model
X, Y = create_XY_exp(100)
# create model
model = KerasClassifier(build_fn=create_model, verbose=1)
nnodes = [30, 40]
param_grid = dict(nnodes=nnodes)
grid = GridSearchCV(estimator=model, param_grid=param_grid, error_score='raise')
grid_result = grid.fit(X, Y)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("\t%f (%f) with: %r" % (mean, stdev, param))
I expected the final output to have mean values > 0. Below is a sample output. Obviously the accuracy is appalling, since the training set in the code above is tiny, but for each of the 3 runs below the acc: values are all > 0, yet the final mean_test_score is 0. I would have thought mean_test_score should be at least somewhat close to acc:. The run:
Epoch 1/1
66/66 [==============================] - 2s 23ms/step - loss: 0.0761 - acc: 0.3182
34/34 [==============================] - 1s 19ms/step
66/66 [==============================] - 0s 48us/step
Epoch 1/1
67/67 [==============================] - 2s 23ms/step - loss: 0.0830 - acc: 0.3284
33/33 [==============================] - 1s 19ms/step
67/67 [==============================] - 0s 48us/step
Epoch 1/1
67/67 [==============================] - 2s 24ms/step - loss: 0.0789 - acc: 0.2836
33/33 [==============================] - 1s 20ms/step
67/67 [==============================] - 0s 49us/step
Epoch 1/1
66/66 [==============================] - 2s 24ms/step - loss: 0.0761 - acc: 0.2879
34/34 [==============================] - 1s 19ms/step
66/66 [==============================] - 0s 49us/step
Epoch 1/1
67/67 [==============================] - 2s 24ms/step - loss: 0.0830 - acc: 0.3433
33/33 [==============================] - 1s 21ms/step
67/67 [==============================] - 0s 50us/step
Epoch 1/1
67/67 [==============================] - 2s 25ms/step - loss: 0.0789 - acc: 0.3582
33/33 [==============================] - 1s 21ms/step
67/67 [==============================] - 0s 49us/step
Epoch 1/1
100/100 [==============================] - 2s 17ms/step - loss: 0.0794 - acc: 0.3000
Best: 0.000000 using {'nnodes': 30}
0.000000 (0.000000) with: {'nnodes': 30}
0.000000 (0.000000) with: {'nnodes': 40}
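To check my understanding of what mean_test_score actually is, I also ran this keras-free sketch (Ridge here is just a stand-in estimator, not part of my real code): GridSearchCV scores each held-out fold with the estimator's scorer and averages the per-fold scores into cv_results_['mean_test_score']. For continuous multi-output targets like the c coefficients above, the default score is R^2, not accuracy:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 30))
Y = rng.random_sample((100, 3))  # continuous targets in [0, 1), like the c's above

# Default scoring falls back to Ridge.score(), i.e. R^2, averaged over the folds.
grid = GridSearchCV(Ridge(), param_grid={'alpha': [0.1, 1.0]}, cv=3)
grid_result = grid.fit(X, Y)

means = grid_result.cv_results_['mean_test_score']  # one mean R^2 per alpha value
print(means)
```

This prints one finite (possibly negative) mean score per parameter setting, never a hard 0.000000 across the board, which makes me suspect the all-zero output above comes from how KerasClassifier scores continuous targets rather than from GridSearchCV itself.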