Initializing a GMM in Python with sklearn

Date: 2017-07-04 19:29:51

Tags: python scikit-learn

I would like to create an sklearn GMM object with a predefined set of means, weights and covariances (on a grid).

I managed to get this far:

from functools import reduce  # reduce is a builtin on Python 2 but must be imported on Python 3
from sklearn.mixture import GaussianMixture
import numpy as np


def get_grid_gmm(subdivisions=[10,10,10], variance=0.05 ):
    n_gaussians = reduce(lambda x, y: x*y,subdivisions)
    step = [ 1.0/(2*subdivisions[0]),  1.0/(2*subdivisions[1]),  1.0/(2*subdivisions[2])]

    means = np.mgrid[ step[0] : 1.0-step[0]: complex(0,subdivisions[0]),
                      step[1] : 1.0-step[1]: complex(0,subdivisions[1]),
                      step[2] : 1.0-step[2]: complex(0,subdivisions[2])]
    # transpose so that each row is one (x, y, z) grid point, shape (n_gaussians, 3)
    means = means.reshape(3, -1).T
    # covariance_type='spherical' expects one variance per component, shape (n_gaussians,)
    covariances = variance * np.ones(n_gaussians)
    weights = (1.0/n_gaussians)*np.ones(n_gaussians)
    gmm = GaussianMixture(n_components=n_gaussians, covariance_type='spherical' )
    gmm.weights_ = weights
    gmm.covariances_ = covariances
    gmm.means_ = means
    return gmm

def main():
    xx = np.random.rand(100,3)
    gmm = get_grid_gmm()
    y= gmm.predict_proba(xx)

if __name__ == "__main__":
    main()

The problem is that gmm.predict_proba(), which I need to use later on, does not work on this object because the model was never fitted. How can I overcome this?
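A minimal sketch of what the failure looks like, reusing get_grid_gmm from the block above; the exact exception depends on the scikit-learn release (this sketch is an illustration, not part of the original post):

import numpy as np

gmm = get_grid_gmm()
try:
    gmm.predict_proba(np.random.rand(5, 3))
except Exception as exc:
    # depending on the version: NotFittedError from the fitted-attribute check,
    # or an AttributeError for the missing precisions_cholesky_ attribute
    print(type(exc).__name__, exc)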

UPDATE: I turned the code above into a complete example that reproduces the error.

UPDATE 2

I updated the code based on the comments and the answer below.

from functools import reduce  # reduce is a builtin on Python 2 but must be imported on Python 3
from sklearn.mixture import GaussianMixture
import numpy as np


def get_grid_gmm(subdivisions=[10,10,10], variance=0.05 ):
    n_gaussians = reduce(lambda x, y: x*y,subdivisions)
    step = [ 1.0/(2*subdivisions[0]),  1.0/(2*subdivisions[1]),  1.0/(2*subdivisions[2])]

    means = np.mgrid[ step[0] : 1.0-step[0]: complex(0,subdivisions[0]),
                      step[1] : 1.0-step[1]: complex(0,subdivisions[1]),
                      step[2] : 1.0-step[2]: complex(0,subdivisions[2])]
    # transpose so that means_ has the expected shape (n_components, n_features) = (n_gaussians, 3)
    means = means.reshape(3, -1).T
    covariances = variance*np.ones(n_gaussians)
    cov_type = 'spherical'
    weights = (1.0/n_gaussians)*np.ones(n_gaussians)
    gmm = GaussianMixture(n_components=n_gaussians, covariance_type=cov_type )
    gmm.weights_ = weights
    gmm.covariances_ = covariances
    gmm.means_ = means
    # private helper (lives in sklearn.mixture._gaussian_mixture in later releases)
    from sklearn.mixture.gaussian_mixture import _compute_precision_cholesky
    gmm.precisions_cholesky_ = _compute_precision_cholesky(covariances, cov_type)
    gmm.precisions_ = gmm.precisions_cholesky_ ** 2  # spherical: precision = 1 / variance
    return gmm

def main():
    xx = np.random.rand(100,3)
    gmm = get_grid_gmm()
    # _estimate_log_prob returns a single (n_samples, n_components) array of
    # per-component log densities, so there is nothing to unpack
    y = np.exp(gmm._estimate_log_prob(xx))

if __name__ == "__main__":
    main()

There are no more errors, but _estimate_log_prob and predict_proba do not produce the same results for a fitted GMM. Why could that be?

1 Answer:

Answer 0 (score: 1):

Since you are not training the model but just using the function for estimation, you don't need the estimator object itself; you can call the same function it uses under the hood. You could try _estimate_log_gaussian_prob. That is what does the estimation internally.
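A minimal sketch of that suggestion, assuming the import path used elsewhere in this post (sklearn.mixture.gaussian_mixture, renamed to sklearn.mixture._gaussian_mixture in later releases) and the signature _estimate_log_gaussian_prob(X, means, precisions_chol, covariance_type); the grid values here are made up for illustration:

import numpy as np
from sklearn.mixture.gaussian_mixture import (
    _compute_precision_cholesky, _estimate_log_gaussian_prob)

rng = np.random.RandomState(0)
means = rng.rand(8, 3)                    # 8 component centres in [0, 1]^3, shape (n_components, n_features)
covariances = 0.05 * np.ones(8)           # spherical: one variance per component
precisions_chol = _compute_precision_cholesky(covariances, 'spherical')

X = rng.rand(100, 3)
log_prob = _estimate_log_gaussian_prob(X, means, precisions_chol, 'spherical')
print(log_prob.shape)                     # (100, 8): per-sample, per-component log densities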

Take a look at the source:

In particular, in the base class https://github.com/scikit-learn/scikit-learn/blob/ab93d657eb4268ac20c4db01c48065b5a1bfe80d/sklearn/mixture/base.py#L342

the mixture-specific method is called, which in turn calls this function: https://github.com/scikit-learn/scikit-learn/blob/ab93d657eb4268ac20c4db01c48065b5a1bfe80d/sklearn/mixture/gaussian_mixture.py#L671
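For context, this is my reading of what those linked lines do (a sketch, not code from the answer): predict_proba does not return the raw output of _estimate_log_prob; it first adds the log mixture weights and then normalizes across components, which is also why the two calls in UPDATE 2 disagree:

import numpy as np
from scipy.special import logsumexp   # scipy.misc.logsumexp in older SciPy releases

def predict_proba_by_hand(gmm, X):
    # per-component log densities, same quantity as gmm._estimate_log_prob(X)
    log_prob = gmm._estimate_log_prob(X)
    # _estimate_weighted_log_prob: add the log mixture weights
    weighted = log_prob + np.log(gmm.weights_)
    # _estimate_log_prob_resp: normalize across components to get log responsibilities
    log_resp = weighted - logsumexp(weighted, axis=1)[:, np.newaxis]
    return np.exp(log_resp)

On a fitted GaussianMixture this should match gmm.predict_proba(X) up to numerical precision.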