Maximum Likelihood in Julia

Posted: 2018-06-13 22:20:41

Tags: julia

I am trying to estimate a normal linear model in Julia via maximum likelihood. I use the following code to simulate the process with just an intercept, and an anonymous function, following the Optim documentation's advice for values that are not to be changed:

using Optim

nobs = 500
nvar = 1
β = ones(nvar)*3.0
x = [ones(nobs) randn(nobs,nvar-1)]
ε = randn(nobs)*0.5
y = x*β + ε

function LL_anon(X, Y, β, σ)
  -(-length(X)*log(2π)/2 - length(X)*log(σ) - (sum((Y - X*β).^2) / (2σ^2)))
end
LL_anon(X,Y, pars) = LL_anon(X,Y, pars...)

res2 = optimize(vars -> LL_anon(x,y, vars...), [1.0,1.0]) # Start values: β=1.0, σ=1.0

This does recover the parameters, and I get the following output:

 * Algorithm: Nelder-Mead
 * Starting Point: [1.0,1.0]
 * Minimizer: [2.980587812647935,0.5108406803949835]
 * Minimum: 3.736217e+02
 * Iterations: 47
 * Convergence: true
   *  √(Σ(yᵢ-ȳ)²)/n < 1.0e-08: true
   * Reached Maximum Number of Iterations: false
 * Objective Calls: 92

However, when I try to set nvar = 2, i.e. an intercept plus one additional covariate, I get the following error message:

MethodError: no method matching LL_anon(::Array{Float64,2}, ::Array{Float64,1}, ::Float64, ::Float64, ::Float64)
Closest candidates are:
  LL_anon(::Any, ::Any, ::Any, ::Any) at In[297]:2
  LL_anon(::Array{Float64,1}, ::Array{Float64,1}, ::Any, ::Any) at In[113]:2
  LL_anon(::Any, ::Any, ::Any) at In[297]:4
  ...

Stacktrace:
 [1] (::##245#246)(::Array{Float64,1}) at .\In[299]:1
 [2] value!!(::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Array{Float64,1}) at C:\Users\dolacomb\.julia\v0.6\NLSolversBase\src\interface.jl:9
 [3] initial_state(::Optim.NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters}, ::Optim.Options{Float64,Void}, ::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Array{Float64,1}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/solvers/zeroth_order\nelder_mead.jl:136
 [4] optimize(::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Array{Float64,1}, ::Optim.NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters}, ::Optim.Options{Float64,Void}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\optimize.jl:25
 [5] #optimize#151(::Array{Any,1}, ::Function, ::Tuple{##245#246}, ::Array{Float64,1}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\interface.jl:62
 [6] #optimize#148(::Array{Any,1}, ::Function, ::Function, ::Array{Float64,1}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\interface.jl:52
 [7] optimize(::Function, ::Array{Float64,1}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\interface.jl:52

I am not sure why adding one additional variable causes this problem, but it looks like some kind of type instability issue.

The second problem is that when I use the original working example and set the starting values to [2.0,2.0], I get the following error message:

log will only return a complex result if called with a complex argument. Try log(complex(x)).

Stacktrace:
 [1] nan_dom_err at .\math.jl:300 [inlined]
 [2] log at .\math.jl:419 [inlined]
 [3] LL_anon(::Array{Float64,2}, ::Array{Float64,1}, ::Float64, ::Float64) at .\In[302]:2
 [4] (::##251#252)(::Array{Float64,1}) at .\In[304]:1
 [5] value(::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Array{Float64,1}) at C:\Users\dolacomb\.julia\v0.6\NLSolversBase\src\interface.jl:19
 [6] update_state!(::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Optim.NelderMeadState{Array{Float64,1},Float64,Array{Float64,1}}, ::Optim.NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/solvers/zeroth_order\nelder_mead.jl:193
 [7] optimize(::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Array{Float64,1}, ::Optim.NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters}, ::Optim.Options{Float64,Void}, ::Optim.NelderMeadState{Array{Float64,1},Float64,Array{Float64,1}}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\optimize.jl:51
 [8] optimize(::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Array{Float64,1}, ::Optim.NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters}, ::Optim.Options{Float64,Void}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\optimize.jl:25
 [9] #optimize#151(::Array{Any,1}, ::Function, ::Tuple{##251#252}, ::Array{Float64,1}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\interface.jl:62

Again, I am not sure why this happens. Since starting values are very important, I would like to know how to overcome this issue, especially given that these starting values are not far from the true ones.

Any help is greatly appreciated!

2 answers:

Answer 0 (score: 1):

Splatting causes the problem. For example, it converts [1, 2, 3] into three separate positional arguments, while your function accepts only two parameters (β and σ) after X and Y.
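
As a tiny illustration of the mechanics, using a hypothetical function f (not part of the code above):

f(a, b) = a + b
v = [1, 2, 3]
f(v...)   # same as f(1, 2, 3): MethodError, no method matching f(::Int64, ::Int64, ::Int64)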

Use the following call instead:

res2 = optimize(vars -> LL_anon(x,y, vars[1:end-1], vars[end]), [1.0,1.0,1.0])

You can then remove the following line from your code:

LL_anon(X,Y, pars) = LL_anon(X,Y, pars...)

or replace it with:

LL_anon(X,Y, pars) = LL_anon(X,Y, pars[1:end-1], pars[end])

but the optimization routine will not call it unless you also change the call to:

res2 = optimize(vars -> LL_anon(x,y, vars), [1.0,1.0,1.0])

Finally, to get good performance out of this code, I would recommend wrapping it all in a function.
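
A minimal sketch of that wrapping, assuming LL_anon as defined above (the function name run_mle is ours, not from the original answer):

using Optim

function run_mle(nobs, nvar)
    β = ones(nvar) * 3.0                      # true coefficients
    x = [ones(nobs) randn(nobs, nvar - 1)]    # intercept plus covariates
    ε = randn(nobs) * 0.5
    y = x * β + ε
    # optimize over vars = [β..., σ]
    optimize(vars -> LL_anon(x, y, vars[1:end-1], vars[end]), ones(nvar + 1))
end

res = run_mle(500, 2)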

Edit: Now I see the second question. The reason is that σ may become negative during the optimization, and then log(σ) fails. In this case, the simplest thing to do is to take log(abs(σ)), like this:

function LL_anon(X, Y, β, σ)
    -(-length(X)*log(2π)/2 - length(X)*log(abs(σ)) - (sum((Y - X*β).^2) / (2σ^2)))
end

Of course, you then have to take the absolute value of the estimated σ as your solution, since you may get a negative value back from the optimization routine.
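
For example (a minimal usage sketch; the variable names are ours):

res = optimize(vars -> LL_anon(x, y, vars[1:end-1], vars[end]), [2.0, 2.0])
σ_hat = abs(res.minimizer[end])   # take the absolute value of the returned σ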

A cleaner approach is to optimize over, e.g., log(σ) instead of σ, like this:

function LL_anon(X, Y, β, logσ)
    -(-length(X)*log(2π)/2 - length(X)*logσ - (sum((Y - X*β).^2) / (2(exp(logσ))^2)))
end

but then you have to apply exp(logσ) after the optimization finishes in order to recover σ.
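
Again a brief sketch of that recovery step under the log parameterization (variable names are ours):

res = optimize(vars -> LL_anon(x, y, vars[1:end-1], vars[end]), [2.0, 2.0])
σ_hat = exp(res.minimizer[end])   # map the log-scale estimate back to σ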

Answer 1 (score: 0):

I asked about this elsewhere and received an alternative. My main reason for looking at this problem is twofold: one, to learn how to use the optimization routines in Julia in a canonical setting, and two, to extend this to spatial econometric models. With that in mind, I am posting the other code suggested on the Julia message board so that others can see another solution.

using Optim

nobs = 500
nvar = 2
β = ones(nvar) * 3.0
x = [ones(nobs) randn(nobs, nvar - 1)]
ε = randn(nobs) * 0.5
y = x * β + ε

function LL_anon(X, Y, β, log_σ)
    σ = exp(log_σ)
    # note: length(X) counts every element of the matrix (nobs*nvar), not the number of rows
    -(-length(X) * log(2π)/2 - length(X) * log(σ) - (sum((Y - X * β).^2) / (2σ^2)))
end

opt = optimize(vars -> LL_anon(x,y, vars[1:nvar], vars[nvar + 1]),
               ones(nvar+1))

# Use forward autodiff to get first derivative, then optimize
fun1 = OnceDifferentiable(vars -> LL_anon(x, y, vars[1:nvar], vars[nvar + 1]),
                          ones(nvar+1); autodiff = :forward)
opt1 = optimize(fun1, ones(nvar+1))

Results of Optimization Algorithm
 * Algorithm: L-BFGS
 * Starting Point: [1.0,1.0,1.0]
 * Minimizer: [2.994204150985705,2.9900626550063305, …]
 * Minimum: 3.538340e+02
 * Iterations: 12
 * Convergence: true
   * |x - x'| ≤ 1.0e-32: false
     |x - x'| = 8.92e-12
   * |f(x) - f(x')| ≤ 1.0e-32 |f(x)|: false
     |f(x) - f(x')| = 9.64e-16 |f(x)|
   * |g(x)| ≤ 1.0e-08: true
     |g(x)| = 6.27e-09
   * Stopped by an increasing objective: true
   * Reached Maximum Number of Iterations: false
 * Objective Calls: 50
 * Gradient Calls: 50

opt1.minimizer
3-element Array{Float64,1}:
  2.9942 
  2.99006
 -1.0651  #Note: needs to be exponentiated

# Get Hessian, use Newton!
fun2 = TwiceDifferentiable(vars -> LL_anon(x, y, vars[1:nvar], vars[nvar + 1]),
                           ones(nvar+1))
opt2 = optimize(fun2, ones(nvar+1))

Results of Optimization Algorithm
 * Algorithm: Newton's Method
 * Starting Point: [1.0,1.0,1.0]
 * Minimizer: [2.99420415098702,2.9900626550079026, …]
 * Minimum: 3.538340e+02
 * Iterations: 9
 * Convergence: true
   * |x - x'| ≤ 1.0e-32: false
     |x - x'| = 1.36e-11
   * |f(x) - f(x')| ≤ 1.0e-32 |f(x)|: false
     |f(x) - f(x')| = 1.61e-16 |f(x)|
   * |g(x)| ≤ 1.0e-08: true
     |g(x)| = 6.27e-09
   * Stopped by an increasing objective: true
   * Reached Maximum Number of Iterations: false
 * Objective Calls: 45
 * Gradient Calls: 45
 * Hessian Calls: 9

fieldnames(fun2)

13-element Array{Symbol,1}:
 :f       
 :df      
 :fdf     
 :h       
 :F       
 :DF      
 :H       
 :x_f     
 :x_df    
 :x_h     
 :f_calls 
 :df_calls
 :h_calls 

opt2.minimizer
3-element Array{Float64,1}:
  2.98627
  3.00654
 -1.11313

numerical_hessian = fun2.H  # .H is the numerical Hessian
3×3 Array{Float64,2}:
 64.8715      -9.45045      0.000121521
 -0.14568     66.4507       0.0        
  1.87326e-6   4.10675e-9  44.7214   

From here, one can use the numerical Hessian to obtain standard errors for the estimates and form t-statistics, etc., for one's own function.
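
A minimal sketch of that computation (not from the original answer; since the objective is the negative log-likelihood, the inverse Hessian at the optimum approximates the variance-covariance matrix of the estimates):

using LinearAlgebra   # needed on Julia ≥ 0.7; inv and diag are in Base on 0.6

vcov   = inv(numerical_hessian)    # approximate variance-covariance matrix
se     = sqrt.(diag(vcov))         # standard errors of the estimates
tstats = opt2.minimizer ./ se      # t-statistics against zero
# note: the third parameter is logσ, so its standard error is on the log scale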

Thanks again to those who answered, and I hope people find this information useful.