Stop training MLPRegressor (solver='lbfgs') only after max_iter, not because of 'tol'

Date: 2019-05-05 02:52:22

Tags: python scikit-learn neural-network

I am training a model with MLPRegressor using the lbfgs solver. I have changed the max_iter parameter from its default of 200 to 500. I want to force training to continue for the full 500 iterations and not stop just because the loss fails to improve by at least tol.

I have already tried setting tol to 0.0 and then went on to set it to negative values (e.g. -10):

from sklearn.neural_network import MLPRegressor as mlpr

mymodel = mlpr(hidden_layer_sizes=(3,), activation='tanh', solver='lbfgs',
               max_iter=500, tol=0.0, verbose=True)
for i in range(99):
    mymodel = mymodel.fit(xtrain,ytrain)
    print("The number of iterations ran was: ",mymodel.n_iter_)

This is what I get:

The number of iterations ran was:  56
The number of iterations ran was:  162
The number of iterations ran was:  154 
The number of iterations ran was:  169
The number of iterations ran was:  127
The number of iterations ran was:  40
The number of iterations ran was:  501
The number of iterations ran was:  501
The number of iterations ran was:  502
The number of iterations ran was:  198

I expected it to run 500 iterations every time (and not even 501 or 502, since those exceed the 500 I specified in max_iter).

1 answer:

Answer 0: (score: 1)

The tol parameter specifies the tolerance for the optimization. When the loss or score is not improving by at least tol, convergence is considered to be reached and training stops. Try setting tol to None: it then stands for negative infinity, so training will not stop until max_iter is reached.

mymodel = mlpr(hidden_layer_sizes=(3,), activation='tanh', solver='lbfgs',
               max_iter=500, tol=None, verbose=True)
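
To check whether this suggestion really makes training run to max_iter, a small self-contained sketch along the following lines can be used. It substitutes synthetic data from make_regression for the asker's xtrain/ytrain, and note that accepting tol=None depends on the scikit-learn version (recent releases validate tol as a non-negative float, in which case a very small positive value would have to stand in):

from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor as mlpr

# Synthetic stand-in for the asker's xtrain/ytrain.
xtrain, ytrain = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# tol=None follows the answer's suggestion; on versions that reject None,
# try a tiny positive tolerance such as tol=1e-12 instead.
mymodel = mlpr(hidden_layer_sizes=(3,), activation='tanh', solver='lbfgs',
               max_iter=500, tol=None, verbose=True)

for i in range(10):
    mymodel = mymodel.fit(xtrain, ytrain)
    # If the tolerance-based stopping is effectively disabled, n_iter_ should
    # now stay at (or very near) max_iter on every refit.
    print("The number of iterations ran was: ", mymodel.n_iter_)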