How to implement mini-batch gradient descent in Python for maximum likelihood estimation?

Time: 2019-04-10 17:05:48

Tags: python gradient-descent mle log-likelihood mini-batch

I currently have some code that searches for the combination of parameters that maximizes the log-likelihood of some field data. At the moment the model draws candidate parameters at random from uniform distributions and keeps the best set, but this approach does not give reproducible results. Does anyone have example code I could use to update this so that it runs mini-batch gradient descent (or ascent) to find the best parameter set?
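
For what it is worth, I know the random search itself could be made repeatable by seeding the generators before calling likelihood(), for example:

import random
import numpy as np

random.seed(0)     # makes the random.uniform draws in likelihood() repeatable
np.random.seed(0)  # in case any numpy randomness is added later

but what I am really after is a gradient-based update rather than a random search.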

In addition, the model currently uses all of the field data at once, whereas I would like to evaluate a small batch of the data at each iteration. My code is below:

import numpy as np
import random
import matplotlib.pyplot as plt

def ff(x,vx,Dx,XSpill,sigmax0,SP):
    mux = XSpill + vx*SP
    sigmax = sigmax0 + np.sqrt(2.0*Dx*SP)
    return (1.0/(np.sqrt(2.0*np.pi*(sigmax**2.0))))*np.exp((-(x-mux)**2.0)/(2.0*(sigmax**2.0)))

def likelihood(xs,XSpill,ConData,sigmax0,SP):
    vx1 = [random.uniform(-50.0,50.0) for i in range(1000)]
    Dx1 = [random.uniform(0.01,20.1) for i in range(1000)]

    IniIndLikelihood = np.ones([len(xs),len(vx1)])
    Lamda = np.zeros(shape=(len(vx1)))
    Prob = np.zeros(shape=(len(vx1)))
    for ci in range(len(xs)):
        if ConData[ci] > 0.0:
            for i in range(len(vx1)):
                Prob[i] = ff(xs[ci],vx1[i],Dx1[i],XSpill,sigmax0,SP)
                if Prob[i] > 1e-308:
                    Lamda[i] = 1/Prob[i]
                    IniIndLikelihood[ci,i] = Lamda[i]*np.exp(-Lamda[i]*ConData[ci])
                else:
                    Lamda[i] = 0.0
                    IniIndLikelihood[ci,i] = 0.0

    CompLikelihood = np.ones([len(vx1)])
    Likelihood = np.zeros([len(vx1)])
    for i in range(len(vx1)):
        for ci in range(len(xs)):
            if ConData[ci] > 0.0:
                if IniIndLikelihood[ci,i] == 0.0:
                    CompLikelihood[i] = 0.0

    MaxLogLike = -22.0
    for i in range(len(vx1)):
        for ci in range(len(xs)):
            if ConData[ci] > 0.0:
                if CompLikelihood[i] == 1.0:
                    Likelihood[i] = Likelihood[i] + np.log(IniIndLikelihood[ci,i])

        if CompLikelihood[i] == 1.0:
            if MaxLogLike == -22.0:
                MaxLogLike = Likelihood[i]
            else:
                MaxLogLike = np.max([MaxLogLike,Likelihood[i]])

    for i in range(len(vx1)):
        if CompLikelihood[i] == 1.0:
            Likelihood[i] = Likelihood[i] - MaxLogLike

    return Likelihood

if __name__ == "__main__":

    sigmax0 = 0.0
    XSpill = 0.0
    SP = 1.0
    xs = [1,3,5,9,20,34,40,60]
    ConData = np.array([5,7,30,5,5,15,30,5])/100

    Result = likelihood(xs,XSpill,ConData,sigmax0,SP)

Here xs are the locations and ConData are the concentrations from the field data. After obtaining the log-likelihood values, I use argmax to find the best parameter combination.
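
To make the request more concrete, below is a rough, untested sketch of the kind of mini-batch gradient ascent I have in mind. Everything in it is a placeholder I made up for illustration (the names batch_log_likelihood, numerical_grad and minibatch_gradient_ascent, the finite-difference gradient, the learning rate and the batch size), not something from my current code:

import numpy as np

def gaussian_pdf(x, vx, Dx, XSpill, sigmax0, SP):
    # same model as ff() above, for a single (vx, Dx) pair
    mux = XSpill + vx*SP
    sigmax = sigmax0 + np.sqrt(2.0*Dx*SP)
    return (1.0/np.sqrt(2.0*np.pi*sigmax**2))*np.exp(-(x - mux)**2/(2.0*sigmax**2))

def batch_log_likelihood(params, xs_batch, con_batch, XSpill, sigmax0, SP):
    # log-likelihood of one mini-batch, using the same exponential
    # observation model as above: L = Lamda*exp(-Lamda*C) with Lamda = 1/Prob
    vx, Dx = params
    prob = gaussian_pdf(xs_batch, vx, Dx, XSpill, sigmax0, SP)
    prob = np.clip(prob, 1e-308, None)          # avoid division by zero
    lam = 1.0/prob
    return np.sum(np.log(lam) - lam*con_batch)

def numerical_grad(f, params, eps=1e-5):
    # central finite-difference gradient (stand-in for an analytic gradient)
    grad = np.zeros_like(params)
    for k in range(len(params)):
        step = np.zeros_like(params)
        step[k] = eps
        grad[k] = (f(params + step) - f(params - step))/(2.0*eps)
    return grad

def minibatch_gradient_ascent(xs, ConData, XSpill, sigmax0, SP,
                              batch_size=4, lr=1e-3, n_epochs=200, seed=0):
    rng = np.random.default_rng(seed)           # fixed seed -> reproducible runs
    xs = np.asarray(xs, dtype=float)
    ConData = np.asarray(ConData, dtype=float)
    mask = ConData > 0.0                        # only detections, as in likelihood()
    xs, ConData = xs[mask], ConData[mask]

    params = np.array([0.0, 1.0])               # initial guess for (vx, Dx)
    for epoch in range(n_epochs):
        order = rng.permutation(len(xs))
        for start in range(0, len(xs), batch_size):
            idx = order[start:start + batch_size]
            f = lambda p: batch_log_likelihood(p, xs[idx], ConData[idx],
                                               XSpill, sigmax0, SP)
            params = params + lr*numerical_grad(f, params)   # ascent step
            params[1] = max(params[1], 1e-3)    # keep Dx positive
    return params

# vx_hat, Dx_hat = minibatch_gradient_ascent(xs, ConData, XSpill, sigmax0, SP)

In particular, I am not sure whether a finite-difference gradient like this is reasonable or whether the gradient of the log-likelihood should be derived analytically, and how the batch size and learning rate should be chosen.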

Since I have not been able to find a Python example of this MLE approach, any suggestions, links, or example code would be helpful!

0 Answers:

No answers.