scipy.optimize.minimize with the L-BFGS-B method: the maxiter option has no effect

Time: 2017-06-13 07:58:15

Tags: python optimization scipy

I have a simple cost function that I want to optimize with the scipy.optimize.minimize function.

opt_solution = scipy.optimize.minimize(costFunction, theta, args = (training_data,), method = 'L-BFGS-B', jac = True, options = {'maxiter': 100})

where costFunction is the function to be optimized and theta are the parameters being optimized. Inside costFunction I print the current value of the cost. But the maxiter parameter seems to have no effect: whether I set it to 10 or to 100000, the time taken is the same. Also, I expected the number of printed cost values to equal maxiter. So I feel maxiter has no effect. What could be the problem? The cost function is:

def costFunction(self, theta, input):

    """ Extract weights and biases from 'theta' input """

    W1 = theta[self.limit0 : self.limit1].reshape(self.hidden_size, self.visible_size)
    W2 = theta[self.limit1 : self.limit2].reshape(self.visible_size, self.hidden_size)
    b1 = theta[self.limit2 : self.limit3].reshape(self.hidden_size, 1)
    b2 = theta[self.limit3 : self.limit4].reshape(self.visible_size, 1)

    """ Compute output layers by performing a feedforward pass
        Computation is done for all the training inputs simultaneously """

    hidden_layer = self.sigmoid(numpy.dot(W1, input) + b1)
    output_layer = self.sigmoid(numpy.dot(W2, hidden_layer) + b2)

    """ Compute intermediate difference values using Backpropagation algorithm """

    diff = output_layer - input
    sum_of_squares_error = 0.5 * numpy.sum(numpy.multiply(diff, diff)) / input.shape[1]
    weight_decay         = 0.5 * self.lamda * (numpy.sum(numpy.multiply(W1, W1)) + numpy.sum(numpy.multiply(W2, W2)))
    cost                 = sum_of_squares_error + weight_decay

    # Deltas for the output and hidden layers (standard backprop for sigmoid units;
    # these two definitions were missing from the posted snippet and are assumed here)
    del_out = numpy.multiply(diff, numpy.multiply(output_layer, 1 - output_layer))
    del_hid = numpy.multiply(numpy.dot(numpy.transpose(W2), del_out),
                             numpy.multiply(hidden_layer, 1 - hidden_layer))

    """ Compute the gradient values by averaging partial derivatives
        Partial derivatives are averaged over all training examples """

    W1_grad = numpy.dot(del_hid, numpy.transpose(input))
    W2_grad = numpy.dot(del_out, numpy.transpose(hidden_layer))
    b1_grad = numpy.sum(del_hid, axis = 1)
    b2_grad = numpy.sum(del_out, axis = 1)

    W1_grad = W1_grad / input.shape[1] + self.lamda * W1
    W2_grad = W2_grad / input.shape[1] + self.lamda * W2
    b1_grad = b1_grad / input.shape[1]
    b2_grad = b2_grad / input.shape[1]

    """ Transform numpy matrices into arrays """

    W1_grad = numpy.array(W1_grad)
    W2_grad = numpy.array(W2_grad)
    b1_grad = numpy.array(b1_grad)
    b2_grad = numpy.array(b2_grad)

    """ Unroll the gradient values and return as 'theta' gradient """

    theta_grad = numpy.concatenate((W1_grad.flatten(), W2_grad.flatten(),
                                    b1_grad.flatten(), b2_grad.flatten()))
    # Update counter value
    self.counter += 1                                
    print "Index ", self.counter, "cost ", cost
    return [cost, theta_grad]

1 Answer:

Answer 0 (score: 1)

maxiter gives the maximum number of iterations that scipy will attempt before giving up on improving the solution. But it may well be satisfied with a solution before that and stop early.
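You can check whether this is what happened by inspecting the OptimizeResult that minimize returns; here is a minimal check using the same call as in the question:

    opt_solution = scipy.optimize.minimize(costFunction, theta, args=(training_data,),
                                           method='L-BFGS-B', jac=True,
                                           options={'maxiter': 100})

    print(opt_solution.nit)      # number of iterations actually performed
    print(opt_solution.message)  # reason the solver stopped
    print(opt_solution.success)  # True if the solver terminated normally

If opt_solution.nit is much smaller than maxiter and the message mentions convergence, the solver simply stopped because it was satisfied.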

If you look at the docs for minimize when using the 'l-bfgs-b' method, note that there are three parameters you can pass as options (factr, ftol, and gtol) that also cause the iteration to stop.
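If you want the solver to keep refining the solution for longer, you can tighten those tolerances; the values below are only illustrative:

    opt_solution = scipy.optimize.minimize(costFunction, theta, args=(training_data,),
                                           method='L-BFGS-B', jac=True,
                                           options={'maxiter': 100000,
                                                    'ftol': 1e-12,   # stop when the relative reduction in cost falls below this
                                                    'gtol': 1e-10})  # stop when the largest projected-gradient component falls below this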

In a simple case like yours, especially when your cost function also provides the gradient (as indicated by jac=True in your call), convergence typically happens within the first few iterations, and hence well before the maxiter limit is reached.
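Note also that the print statement inside costFunction counts function evaluations, not iterations: the line search inside L-BFGS-B may call the cost function several times per iteration, so the printed counter will not match maxiter even when the limit is reached. A callback is a more direct way to count iterations; a minimal sketch using the same call as above:

    iteration = [0]

    def count_iterations(xk):
        # called by minimize once per iteration with the current parameter vector
        iteration[0] += 1

    opt_solution = scipy.optimize.minimize(costFunction, theta, args=(training_data,),
                                           method='L-BFGS-B', jac=True,
                                           callback=count_iterations,
                                           options={'maxiter': 100})

    print(iteration[0], opt_solution.nit, opt_solution.nfev)

Comparing iteration[0] (or opt_solution.nit) with opt_solution.nfev shows how many cost-function calls each iteration used.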
