SciPy minimize does not perform any iterations

Asked: 2018-07-03 11:37:38

Tags: python optimization scipy

I am trying to minimize a function using SciPy's `optimize.minimize`, but when I run it, it turns out that the algorithm performs no iterations and no optimization takes place. The data I am loading has shape (117, 3). I get no errors or warnings when running the program, but the result returned by `minimize` shows:

message: 'Desired error not necessarily achieved due to precision loss.'
nfev: 113
nit: 0
njev: 101
status: 2
success: False
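For comparison, here is what those fields look like after a healthy run. This is a minimal sketch on a toy quadratic (not the original data or cost function), just to show which fields indicate that iterations actually happened:

```python
import numpy as np
import scipy.optimize as opt

# Minimize f(x) = (x - 3)^2 with its analytic gradient. With a gradient
# supplied and no bounds, minimize defaults to BFGS, as in the question.
res = opt.minimize(lambda x: (x[0] - 3) ** 2,
                   x0=np.zeros(1),
                   jac=lambda x: np.array([2 * (x[0] - 3)]))

print(res.success)  # True on a healthy run
print(res.nit)      # number of iterations actually performed (>= 1 here)
print(res.status)   # 0 means converged; 2 means "precision loss"
```

A `nit` of 0 together with `status: 2` means the line search failed immediately, before a single accepted step.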

What could the problem be? My code is attached below:

import numpy as np
import pandas as pd
import scipy.optimize as opt

data = pd.read_csv('ex2data2.txt')
data = data.values
m = data.shape[0]
X = data[:, 0:2]
y = np.c_[data[:, 2]]

X_pol = extend_to_degree(X[:, 0], X[:, 1], 6)
initial_theta = np.zeros(X_pol.shape[1])
lamb = 1  # Regularization parameter
res = opt.minimize(cost_function_reg, x0=initial_theta, args=(lamb, X_pol, y),
                   method=None, jac=gradient, options={'maxiter': 400})
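That particular error message is commonly produced when the supplied `jac` is not consistent with the cost function. One generic way to test this (a standalone sketch on a toy quadratic, not the original cost) is `scipy.optimize.check_grad`, which compares an analytic gradient against finite differences:

```python
import numpy as np
from scipy.optimize import check_grad

# check_grad returns the norm of the difference between the analytic
# gradient and a finite-difference estimate; near zero means they agree.
def f(theta):
    return np.sum(theta ** 2)

def grad_ok(theta):
    return 2 * theta

def grad_wrong_sign(theta):
    return -2 * theta

x0 = np.array([1.0, -2.0, 0.5])
print(check_grad(f, grad_ok, x0))          # close to 0: consistent
print(check_grad(f, grad_wrong_sign, x0))  # large: gradient is inconsistent
```

Running the same check on `cost_function_reg` and `gradient` at `initial_theta` would show whether the two disagree.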



def sigmoid(x):
    return 1/(1+np.exp(x))

def cost_function(theta, X, y):
    # Unregularized logistic-regression cost, accumulated sample by sample.
    m = X.shape[0]
    total = 0
    for i in range(0, m):
        xi = X[i, :]
        yi = y[i]
        h = sigmoid(xi.dot(theta))
        if yi == 0:
            total = total - np.log(1 - h)
        elif yi == 1:
            total = total - np.log(h)
    total = total / m
    return total

def cost_function_reg(theta, lamb, X, y):
    # Regularized cost: unregularized cost plus an L2 penalty on theta.
    m = X.shape[0]
    total = cost_function(theta, X, y)
    theta[0] = 0  # Do not regularize theta_0.
    total = total + lamb / (2 * m) * np.sum(np.power(theta, 2))
    return total.flatten()

def gradient(theta, lamb, X, y):
    # Gradient of the regularized cost, accumulated sample by sample.
    n = theta.shape[0]
    m = X.shape[0]
    grad = np.zeros(n)
    for i in range(0, m):
        xi = X[i, :]
        yi = y[i]
        h = sigmoid(xi.dot(theta))
        grad = grad + (h - yi) * xi
    grad = grad + lamb / m * theta
    return grad

def predict(theta, X, threshold = 0.5):
    res = sigmoid(X.dot(theta)) >= threshold
    return res

def extend_to_degree(X1, X2, degree=6):
    # Map two features to all monomials X1^(i-j) * X2^j up to the given
    # degree, including a leading bias column of ones.
    m = X1.shape[0]
    out = np.ones((m, 1))
    for i in range(1, degree + 1):
        for j in range(0, i + 1):
            out = np.column_stack([out, np.multiply(np.power(X1, i - j), np.power(X2, j))])
    return out
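As a sanity check on the feature mapping, the degree-6 expansion of two features should produce 1 + 2 + … + 7 = 28 columns (bias plus all monomials up to degree 6). A standalone sketch with made-up inputs (the function is copied from the question):

```python
import numpy as np

def extend_to_degree(X1, X2, degree=6):
    # Polynomial feature mapping: all monomials X1^(i-j) * X2^j for
    # 0 <= j <= i <= degree, plus the leading bias column of ones.
    m = X1.shape[0]
    out = np.ones((m, 1))
    for i in range(1, degree + 1):
        for j in range(0, i + 1):
            out = np.column_stack([out, X1 ** (i - j) * X2 ** j])
    return out

X1 = np.array([1.0, 2.0, 3.0])
X2 = np.array([0.5, 1.5, 2.5])
X_pol = extend_to_degree(X1, X2, 6)
print(X_pol.shape)  # (3, 28)
```

This matches `initial_theta = np.zeros(X_pol.shape[1])` having 28 entries, so the shape handed to `minimize` looks consistent.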

0 Answers:

There are no answers yet.