I tried to tune the hyperparameters:
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import KFold, GridSearchCV

def create_model():
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=10, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(4, kernel_initializer='uniform', activation='sigmoid'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

# create model
model = KerasClassifier(build_fn=create_model, verbose=0)
cv = KFold(n_splits=5, shuffle=True, random_state=1)
# grid search over epochs and batch size
epochs = [100, 150, 170, 200, 250]
batches = [5, 10, 20, 25, 30]
param_grid = dict(epochs=epochs, batch_size=batches)
grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=cv)
Then I fit it:
%%time
grid_result = grid.fit(X_train, y_train)
and got:
grid_result.best_params_, grid_result.best_score_
({'batch_size': 10, 'epochs': 150}, 0.31568432165371191)
After that I ran exactly the same code again and got:
grid_result.best_params_, grid_result.best_score_
({'batch_size': 20, 'epochs': 100}, 0.31368631761629029)
What is the problem? The size of the dataset:
X_train.shape, y_train.shape
((1001, 10), (1001, 4))
I think we have to use a random_state to get identical results, but KerasClassifier() has no random_state parameter.
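From what I understand, the run-to-run variation comes from the Keras/TensorFlow side (random weight initialization and shuffling) rather than from the scikit-learn wrapper, so one workaround I have seen suggested is to seed the global generators before building and fitting the grid. A minimal sketch, assuming a TensorFlow 1.x backend (with TF 2.x the last call would be tf.random.set_seed):

import os
import random
import numpy as np
import tensorflow as tf

seed = 1
os.environ['PYTHONHASHSEED'] = str(seed)
random.seed(seed)         # Python's own RNG
np.random.seed(seed)      # NumPy RNG used by Keras weight initializers
tf.set_random_seed(seed)  # TensorFlow graph-level seed (TF 1.x API)

# ...then recreate the KerasClassifier / GridSearchCV and call grid.fit as above

Even with these seeds, results may still not be bit-for-bit identical if training runs on a GPU, since some GPU ops are nondeterministic.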