Keras cross-validation strategy?

Asked: 2018-02-18 23:48:26

标签: python scikit-learn keras cross-validation grid-search

I am using the Keras wrapper for scikit-learn grid search. My question is: if I pass validation data to the fit method and also keep the default cv=3, how will GridSearchCV perform cross-validation? Will it still build 3 folds from the training data, or will it validate against the validation data I already specified three times?


from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD, RMSprop, Adam
from sklearn.model_selection import GridSearchCV

def grid_create_model(learn_rate=0.1, momentum=0, optimizer="adam", activation="softmax", dropout_rate=0.0, weight_constraint=0):
    # Build the model used by the grid search; indim and outdim are defined elsewhere in the script.
    model = Sequential()

    # Turn the optimizer name supplied by the grid into an optimizer instance
    if optimizer == "sgd":
        optimizer = SGD(lr=learn_rate, momentum=momentum)
    elif optimizer == "rmsprop":
        optimizer = RMSprop(lr=learn_rate)
    elif optimizer == "adam":
        optimizer = Adam(lr=learn_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

    model.add(Dense(units=200, input_dim=indim, kernel_initializer='normal', activation=activation))
    model.add(Dropout(dropout_rate))
    model.add(Dense(units=outdim, activation="softmax"))
    model.compile(loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
    return model
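The estimator passed to GridSearchCV below is not shown being created; presumably it is the scikit-learn wrapper around the build function above. A minimal sketch, assuming the standard KerasClassifier wrapper (the name model matches the snippet below and is my assumption, not code from the question):

from keras.wrappers.scikit_learn import KerasClassifier

# Assumption: `model` used by GridSearchCV below is this wrapper. KerasClassifier
# exposes fit/predict so GridSearchCV can clone and cross-validate it like any
# other scikit-learn estimator.
model = KerasClassifier(build_fn=grid_create_model, verbose=0)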
# Hyperparameter lists are read from a config-file section via `parser`
iterations = int(parser[section_name]['iterations'])
epochs     = [int(parser[section_name]['epochs'])]
learn_rate = list(map(float, [e.strip() for e in parser[section_name]['lr'].split(',')]))
batch_size = list(map(int, [e.strip() for e in parser[section_name]['b'].split(',')]))

param_grid = dict(learn_rate=learn_rate, batch_size=batch_size, epochs=epochs, optimizer=optimizer,
                  activation=activation, weight_constraint=weight_constraint, dropout_rate=dropout_rate)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, verbose=1,
                    cv=[(slice(None), slice(None))], refit=True)
# Extra keyword arguments are forwarded as fit parameters to the estimator's fit
grid_result = grid.fit(X_train, y_train, validation_data=(X_trainTest, y_trainTest),
                       log_dir='./Graph', loss=training_lossHistory)
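For reference, a minimal sketch of how a fixed, pre-defined validation set can be handed to GridSearchCV explicitly instead of relying on its internal k-fold splitting. This assumes scikit-learn's PredefinedSplit and reuses the variable names from the snippet above (X_train, y_train, X_trainTest, y_trainTest); it is an illustration, not the code from the question:

import numpy as np
from sklearn.model_selection import PredefinedSplit

# Stack training and validation data; test_fold marks validation rows with 0
# and training rows with -1 (rows marked -1 are never used for validation).
X_all = np.concatenate([X_train, X_trainTest])
y_all = np.concatenate([y_train, y_trainTest])
test_fold = np.concatenate([np.full(len(X_train), -1), np.zeros(len(X_trainTest))])

ps = PredefinedSplit(test_fold)
grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=ps, refit=True)
grid_result = grid.fit(X_all, y_all)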

0 Answers:

There are no answers yet.