I'm trying to implement a simple regularized logistic regression algorithm in Julia. I want to use the Optim.jl library to minimize my cost function, but I can't get it to work.
My cost function and gradient are as follows:
function cost(X, y, theta, lambda)
    m = length(y)
    h = sigmoid(X * theta)
    reg = (lambda / (2*m)) * sum(theta[2:end].^2)
    J = (1/m) * sum( (-y).*log(h) - (1-y).*log(1-h) ) + reg
    return J
end
function grad(X, y, theta, lambda, gradient)
    m = length(y)
    h = sigmoid(X * theta)
    # gradient = zeros(size(theta))
    gradient = (1/m) * X' * (h - y)
    gradient[2:end] = gradient[2:end] + (lambda/m) * theta[2:end]
    return gradient
end
(Here theta is the parameter vector of the hypothesis function and lambda is the regularization parameter.)
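For reference, written out explicitly, these two functions are meant to implement the standard regularized logistic-regression cost and gradient (with h_theta(x) the sigmoid of theta' * x, and the bias term theta[1] left unregularized):

J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left[ -y^{(i)} \log h_\theta(x^{(i)}) - \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right] + \frac{\lambda}{2m} \sum_{j=2}^{n} \theta_j^2

\frac{\partial J}{\partial \theta_1} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_1^{(i)}, \qquad \frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \frac{\lambda}{m} \theta_j \quad (j \geq 2)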
Then, following the instructions given at https://github.com/JuliaOpt/Optim.jl, I try to call the optimization function like this:
# these are wrapper functions I define so I can pass them as arguments:
c(theta::Vector) = cost(X, y, theta, lambda)
g!(theta::Vector, gradient::Vector) = grad(X, y, theta, lambda, gradient)
# then I do
optimize(c,some_initial_theta)
# or maybe
optimize(c,g!,initial_theta,method = :l_bfgs) # try a different algorithm
In both cases it reports that it failed to converge, and the output looks somewhat odd:
julia> optimize(c,initial_theta)
Results of Optimization Algorithm
* Algorithm: Nelder-Mead
* Starting Point: [0.0,0.0,0.0,0.0,0.0]
* Minimum: [1.7787162051775145,3.4584135105727145,-6.659680628594007,4.776952006060713,1.5034743945407143]
* Value of Function at Minimum: -Inf
* Iterations: 1000
* Convergence: false
* |x - x'| < NaN: false
* |f(x) - f(x')| / |f(x)| < 1.0e-08: false
* |g(x)| < NaN: false
* Exceeded Maximum Number of Iterations: true
* Objective Function Calls: 1013
* Gradient Call: 0
julia> optimize(c,g!,initial_theta,method = :l_bfgs)
Results of Optimization Algorithm
* Algorithm: L-BFGS
* Starting Point: [0.0,0.0,0.0,0.0,0.0]
* Minimum: [-6.7055e-320,-2.235e-320,-6.7055e-320,-2.244e-320,-6.339759952602652e-7]
* Value of Function at Minimum: 0.693148
* Iterations: 1
* Convergence: false
* |x - x'| < 1.0e-32: false
* |f(x) - f(x')| / |f(x)| < 1.0e-08: false
* |g(x)| < 1.0e-08: false
* Exceeded Maximum Number of Iterations: false
* Objective Function Calls: 75
* Gradient Call: 75
Is my approach (from my first code listing) incorrect, or am I misusing the Optim.jl functionality? Either way, what is the proper way to define and minimize the cost function here?
This is my first time using Julia, and I'm probably doing something very wrong, but I cannot tell exactly what. Any help would be appreciated!
X and y are the training set; X is a 90x5 matrix and y is a 90x1 vector (my training set is taken from Iris, which I don't think matters here).
Answer 0 (score: 4)
Here is an example of unregularized logistic regression that uses Optim.jl's automatic differentiation functionality. It may help you with your own implementation.
using Optim
const X = rand(100, 3)
const true_β = [5,2,4]
const true_y = 1 ./ (1 + exp(-X*true_β))
function objective(β)
    y = 1 ./ (1 + exp(-X*β))
    return sum((y - true_y).^2) # Use SSE, non-standard for log. reg.
end
println(optimize(objective, [3.0,3.0,3.0],
                 autodiff=true, method=LBFGS()))
which gives me:
Results of Optimization Algorithm
* Algorithm: L-BFGS
* Starting Point: [3.0,3.0,3.0]
* Minimizer: [4.999999945789497,1.9999999853962256,4.0000000047769495]
* Minimum: 0.000000
* Iterations: 14
* Convergence: true
* |x - x'| < 1.0e-32: false
* |f(x) - f(x')| / |f(x)| < 1.0e-08: false
* |g(x)| < 1.0e-08: true
* Exceeded Maximum Number of Iterations: false
* Objective Function Calls: 53
* Gradient Call: 53
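For what it's worth, on more recent Julia and Optim.jl versions the vectorized calls above need explicit broadcasting dots, and automatic differentiation is requested with autodiff = :forward rather than autodiff = true. A minimal sketch of the same example under those assumptions (variable names carry a 2 suffix only to avoid clashing with the snippet above):

using Optim

# Sketch for a recent Julia / Optim.jl; assumes the current API where the
# method is passed positionally and autodiff = :forward enables ForwardDiff.
const X2 = rand(100, 3)
const true_β2 = [5.0, 2.0, 4.0]
const true_y2 = 1 ./ (1 .+ exp.(-X2 * true_β2))   # broadcasting dots required

function objective2(β)
    y = 1 ./ (1 .+ exp.(-X2 * β))
    return sum((y .- true_y2) .^ 2)               # SSE, as in the original example
end

res = optimize(objective2, [3.0, 3.0, 3.0], LBFGS(); autodiff = :forward)
println(res)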
Answer 1 (score: 4)
Below you will find cost and gradient computation functions for logistic regression written using a closure and currying (a version suited to those who are used to functions that return both the cost and the gradient):
function cost_gradient(θ, X, y, λ)
    m = length(y)
    # Returns a (cost, gradient!) pair of closures over X, y and λ.
    return (θ::Array) -> begin
        h = sigmoid(X * θ)
        J = (1 / m) * sum(-y .* log(h) .- (1 - y) .* log(1 - h)) + λ / (2 * m) * sum(θ[2:end] .^ 2)
    end, (θ::Array, storage::Array) -> begin
        h = sigmoid(X * θ)
        # [0; θ[2:end]] keeps the bias term θ[1] out of the regularization.
        storage[:] = (1 / m) * (X' * (h .- y)) + (λ / m) * [0; θ[2:end]]
    end
end
The sigmoid function implementation:
sigmoid(z) = 1.0 ./ (1.0 + exp(-z))
To apply cost_gradient with Optim.jl, do the following:
using Optim
#...
# Prerequisites:
# X size is (m,d), where d is the number of training set features
# y size is (m,1)
# λ is the regularization parameter, e.g. 1.5
# ITERATIONS is the number of iterations, e.g. 1000
X = [ones(size(X,1)) X]          # add x_0 = 1.0 column; now X size is (m,d+1)
initialθ = zeros(size(X,2), 1)   # initialθ size is (d+1,1)
cost, gradient! = cost_gradient(initialθ, X, y, λ)
res = optimize(cost, gradient!, initialθ, method = ConjugateGradient(), iterations = ITERATIONS);
θ = Optim.minimizer(res);
Now you can easily predict (e.g., validate on the training set):
predictions = sigmoid(X * θ) #X size is (m,d+1)
Try my approach or compare it with your implementation.
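As a small follow-up sketch (not part of the original answer), assuming the usual 0.5 decision threshold, the predicted probabilities can be turned into class labels and checked against y for a quick training-set accuracy:

# Hypothetical follow-up: threshold the probabilities from `predictions` above
# at 0.5 and compute the fraction of training examples classified correctly.
labels = predictions .>= 0.5
accuracy = sum(labels .== (y .== 1)) / length(y)
println("Training accuracy: ", accuracy)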