Implementation of the EM algorithm for Gaussian Mixture Models

Date: 2015-08-02 16:43:07

Tags: algorithm matlab mixture-model

Using the EM algorithm, I want to train a Gaussian Mixture Model with four components on a given dataset. The set is three-dimensional and contains 300 samples.

The problem is that after around 6 rounds of the EM algorithm, the covariance matrices sigma become close to singular according to matlab (rank(sigma) = 2 instead of 3). This in turn leads to undesired results, such as complex values when evaluating the Gaussian distribution gm(k,i).

Moreover, I used the log of the Gaussian to account for underflow troubles - see the E-step. I am not sure if this is correct, and whether I have to take the exp of the responsibilities p(w_k | x^(i), theta) somewhere else?

Can you tell me if my implementation of the EM algorithm so far is correct? And how can the problem with the close-to-singular covariance sigma be handled?

Here is my implementation of the EM algorithm:

First I initialized the means and covariances of the components using kmeans:

load('data1.mat');

X = Data'; % 300x3 data set
D = size(X,2); % dimension
N = size(X,1); % number of samples
K = 4; % number of Gaussian Mixture components

% Initialization
p = [0.2, 0.3, 0.2, 0.3]; % arbitrary pi
[idx,mu] = kmeans(X,K); % initial means of the components

% compute the covariance of the components
sigma = zeros(D,D,K);
for k = 1:K
    sigma(:,:,k) = cov(X(idx==k,:));
end
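
One fragile spot in this initialization: if kmeans assigns fewer than two samples to a component, cov does not return a valid D-by-D covariance matrix. A minimal guard, assuming a fallback to the global covariance (globalCov and members are names made up for this sketch), could look like this:

% Hypothetical guard sketch: with 0 or 1 cluster members, cov cannot
% produce a D-by-D matrix, so fall back to the covariance of all data.
globalCov = cov(X);
for k = 1:K
    members = X(idx==k,:);
    if size(members,1) < 2
        sigma(:,:,k) = globalCov;
    else
        sigma(:,:,k) = cov(members);
    end
end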

For the E-step, I calculate the responsibilities using the following formula:

p(w_k \mid x^{(i)}, \theta) = \frac{\pi_k \, \mathcal{N}(x^{(i)} \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j \, \mathcal{N}(x^{(i)} \mid \mu_j, \Sigma_j)}

where w_k is the k-th Gaussian component,

x^(i) is a single data point (sample),

and theta represents the parameters of the Gaussian Mixture Model: mu, sigma, pi.

Here is the corresponding code:

% variables for convergence 
converged = 0;
prevLoglikelihood = Inf;
prevMu = mu;
prevSigma = sigma;
prevPi = p;
round = 0;
while (converged ~= 1)
    round = round +1
    gm = zeros(K,N); % Gaussian component in the numerator
    sumGM = zeros(N,1); % denominator of the responsibilities
    % E-step:  Evaluate the responsibilities using the current parameters
    % compute the numerator and denominator of the responsibilities
    for k = 1:K
        for i = 1:N
             Xmu = X-mu;
             % I am using log to prevent underflow of the gaussian distribution (exp("small value"))
             logPdf = log(1/sqrt(det(sigma(:,:,k))*(2*pi)^D)) + (-0.5*Xmu*(sigma(:,:,k)\Xmu'));
             gm(k,i) = log(p(k)) * logPdf;
             sumGM(i) = sumGM(i) + gm(k,i);
         end
    end

    % calculate responsibilities
    res = zeros(K,N); % responsibilities
    Nk = zeros(4,1);
    for k = 1:K
        for i = 1:N
            % I tried to use the exp(gm(k,i)/sumGM(i)) to compute res but this leads to sum(pi) > 1.
            res(k,i) = gm(k,i)/sumGM(i);
        end
        Nk(k) = sum(res(k,:));
    end
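
For reference, here is a log-sum-exp sketch of this E-step that stays in log space until the very end (same variables as above; only a sketch, not the code I am currently running):

% Log-sum-exp sketch: build the log of the numerator, shift each column by
% its maximum before exponentiating, so the largest term never underflows.
logGm = zeros(K,N);
for k = 1:K
    for i = 1:N
        Xmu = X(i,:) - mu(k,:);                 % 1 x D row vector
        logGm(k,i) = log(p(k)) ...
            - 0.5*(D*log(2*pi) + log(det(sigma(:,:,k)))) ...
            - 0.5*(Xmu*(sigma(:,:,k)\Xmu'));
    end
end
m = max(logGm, [], 1);                          % 1 x N per-sample maximum
logSumGM = m + log(sum(exp(bsxfun(@minus, logGm, m)), 1));
res = exp(bsxfun(@minus, logGm, logSumGM));     % K x N, columns sum to 1
loglikelihood = sum(logSumGM);                  % usable for the convergence check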

Nk(k) is computed using the formula given in the M-step and is used there to calculate the new probabilities p(k).

M-step

Re-estimate the parameters using the current responsibilities:

    % M-step: Re-estimate the parameters using the current responsibilities
    for k = 1:K
        for i = 1:N
            mu(k,:) = mu(k,:) + res(k,i).*X(k,:);
            sigma(:,:,k) = sigma(:,:,k) + res(k,i).*(X(k,:)-mu(k,:))*(X(k,:)-mu(k,:))';
        end
        mu(k,:) = mu(k,:)./Nk(k);
        sigma(:,:,k) = sigma(:,:,k)./Nk(k);
        p(k) = Nk(k)/N;
    end
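
For comparison, a vectorized sketch of the same M-step (assuming res is the K-by-N responsibility matrix from the E-step; note it recomputes mu and sigma from scratch each round instead of accumulating into the old values):

% Vectorized M-step sketch
Nk = sum(res, 2);                                % K x 1 effective counts
for k = 1:K
    mu(k,:) = (res(k,:) * X) ./ Nk(k);           % weighted mean, 1 x D
    Xc = bsxfun(@minus, X, mu(k,:));             % centered data, N x D
    sigma(:,:,k) = (Xc' * bsxfun(@times, res(k,:)', Xc)) ./ Nk(k);
    p(k) = Nk(k) / N;
end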

Now, to check for convergence, the log-likelihood is computed using this formula:

\ln p(X \mid \pi, \mu, \Sigma) = \sum_{i=1}^{N} \ln \left( \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x^{(i)} \mid \mu_k, \Sigma_k) \right)

    % Evaluate the log-likelihood and check for convergence of either 
    % the parameters or the log-likelihood. If not converged, go to E-step.
    loglikelihood = 0;
    for i = 1:N
        loglikelihood = loglikelihood + log(sum(gm(:,i)));
    end


    % Check for convergence of parameters
    errorLoglikelihood = abs(loglikelihood-prevLoglikelihood);
    if (errorLoglikelihood <= eps)
        converged = 1; 
    end

    errorMu = abs(mu(:)-prevMu(:));
    errorSigma = abs(sigma(:)-prevSigma(:));
    errorPi = abs(p(:)-prevPi(:));

    if (all(errorMu <= eps) && all(errorSigma <= eps) && all(errorPi <= eps))
        converged = 1;
    end

    prevLoglikelihood = loglikelihood;
    prevMu = mu;
    prevSigma = sigma;
    prevPi = p;

end % while 

Is there anything wrong with my Matlab implementation of the EM algorithm for Gaussian Mixture Models?

Previous troubles:

The problem was that I could not check for convergence using the log-likelihood, because it was -Inf. This resulted from the Gaussian values in the responsibility formula being rounded off to zero (see the E-step).

Can you tell me if my implementation of the EM algorithm so far is correct? And how can the problem with the rounded zero values be handled?

Here is my implementation of the EM algorithm:

First I initialized the means and covariances of the components using kmeans:

load('data1.mat');

X = Data'; % 300x3 data set
D = size(X,2); % dimension
N = size(X,1); % number of samples
K = 4; % number of Gaussian Mixture components

% Initialization
p = [0.2, 0.3, 0.2, 0.3]; % arbitrary pi
[idx,mu] = kmeans(X,K); % initial means of the components

% compute the covariance of the components
sigma = zeros(D,D,K);
for k = 1:K
    sigma(:,:,k) = cov(X(idx==k,:));
end

For the E-step, I used the same responsibility formula as shown above.

Here is the corresponding code:

% variables for convergence 
converged = 0;
prevLoglikelihood = Inf;
prevMu = mu;
prevSigma = sigma;
prevPi = p;
round = 0;
while (converged ~= 1)
    round = round +1
    gm = zeros(K,N); % Gaussian component in the numerator -
                     % some values evaluate to zero
    sumGM = zeros(N,1); % denominator of the responsibilities
    % E-step:  Evaluate the responsibilities using the current parameters
    % compute the numerator and denominator of the responsibilities
    for k = 1:K
        for i = 1:N
             % HERE values evaluate to zero, e.g. exp(-746.6228) = 0 (underflow)
             gm(k,i) = p(k)/sqrt(det(sigma(:,:,k))*(2*pi)^D)*exp(-0.5*(X(i,:)-mu(k,:))*inv(sigma(:,:,k))*(X(i,:)-mu(k,:))');
             sumGM(i) = sumGM(i) + gm(k,i);
         end
    end

    % calculate responsibilities
    res = zeros(K,N); % responsibilities
    Nk = zeros(4,1);
    for k = 1:K
        for i = 1:N
            res(k,i) = gm(k,i)/sumGM(i);
        end
        Nk(k) = sum(res(k,:));
    end

Nk(k) is computed using the formula given in the M-step.

M-step

Re-estimate the parameters using the current responsibilities:

    % M-step: Re-estimate the parameters using the current responsibilities
    mu = zeros(K,3);
    for k = 1:K
        for i = 1:N
            mu(k,:) = mu(k,:) + res(k,i).*X(k,:);
            sigma(:,:,k) = sigma(:,:,k) + res(k,i).*(X(k,:)-mu(k,:))*(X(k,:)-mu(k,:))';
        end
        mu(k,:) = mu(k,:)./Nk(k);
        sigma(:,:,k) = sigma(:,:,k)./Nk(k);
        p(k) = Nk(k)/N;
    end

Now, to check for convergence, the log-likelihood is computed using the formula shown above.

    % Evaluate the log-likelihood and check for convergence of either 
    % the parameters or the log-likelihood. If not converged, go to E-step.
    loglikelihood = 0;
    for i = 1:N
        loglikelihood = loglikelihood + log(sum(gm(:,i)));
    end


    % Check for convergence of parameters
    errorLoglikelihood = abs(loglikelihood-prevLoglikelihood);
    if (errorLoglikelihood <= eps)
        converged = 1; 
    end

    errorMu = abs(mu(:)-prevMu(:));
    errorSigma = abs(sigma(:)-prevSigma(:));
    errorPi = abs(p(:)-prevPi(:));

    if (all(errorMu <= eps) && all(errorSigma <= eps) && all(errorPi <= eps))
        converged = 1;
    end

    prevLoglikelihood = loglikelihood;
    prevMu = mu;
    prevSigma = sigma;
    prevPi = p;

end % while 

After the first round, the loglikelihood is around 700. In the second round it is -Inf, because some gm(k,i) values in the E-step are zero. Thus the logarithm is obviously negative infinity.

The zero values also lead to sumGM being equal to zero, and therefore to all-NaN entries inside the mu and sigma matrices.

How can I solve this problem? Can you tell me if there is something wrong with my implementation? Can it be solved by somehow raising Matlab's precision?

EDIT:

I added a scaling of the exp() term in gm(k,i). Unfortunately, this did not help much. After a few more rounds I still run into the underflow problem.

scale = zeros(N,D);
for i = 1:N
    max = 0;
    for k = 1:K
        Xmu = X(i,:)-mu(k,:);
        if (norm(scale(i,:) - Xmu) > max)
            max = norm(scale(i,:) - Xmu);
            scale(i,:) = Xmu;
        end
    end
end


for k = 1:K
    for i = 1:N
        Xmu = X(i,:)-mu(k,:);
        % scale gm to prevent underflow
        Xmu = Xmu - scale(i,:);
        gm(k,i) = p(k)/sqrt(det(sigma(:,:,k))*(2*pi)^D)*exp(-0.5*Xmu*inv(sigma(:,:,k))*Xmu');
        sumGM(i) = sumGM(i) + gm(k,i);
    end
end

Furthermore, I noticed that the means from the kmeans initialization are completely different compared to the following rounds, where the means are computed in the M-step.

After kmeans:

mu =   13.500000000000000   0.026602138870044   0.062415945993735
       88.500000000000000  -0.009869960132085  -0.075177888210981
       39.000000000000000  -0.042569305020309   0.043402772876513
       64.000000000000000  -0.024519281362918  -0.012586980924762
After the M-step:

round = 2

mu = 1.000000000000000   0.077230046948357   0.024498886414254
     2.000000000000000   0.074260118474053   0.026484346404660
     3.000000000000002   0.070944016105476   0.029043085983168
     4.000000000000000   0.067613431480832   0.031641849205021

In the following rounds mu does not change at all. It stays the same as in round 2.

I guess this is caused by the underflow of gm(k,i)? Either my implementation of the scaling is incorrect, or the whole implementation of the algorithm is wrong somewhere :(

EDIT 2:

After four rounds I got NaN values and investigated gm in more detail. Looking at one sample only (and without the 0.5 factor), gm becomes zero in all components. Put into matlab: gm(:,1) = [0 0 0 0]. This in turn leads to sumGM being equal to zero -> NaN, because I divided by zero. I have given more details here:
round = 1

mu = 62.0000   -0.0298   -0.0078
     37.0000   -0.0396    0.0481
     87.5000   -0.0083   -0.0728
     12.5000    0.0303    0.0614

gm(:,1) = [11.7488, 0.0000, 0.0000, 0.0000]


round = 2

mu = 1.0000    0.0772    0.0245
     2.0000    0.0743    0.0265
     3.0000    0.0709    0.0290
     4.0000    0.0676    0.0316


gm(:,1) = [0.0000, 0.0000, 0.0000, 0.3128]

round = 3

mu = 1.0000    0.0772    0.0245
     2.0000    0.0743    0.0265
     3.0000    0.0709    0.0290
     4.0000    0.0676    0.0316


gm(:,1) = [0, 0, 0.0000, 0.2867]


round = 4


mu = 1.0000    0.0772    0.0245
        NaN       NaN       NaN
     3.0000    0.0709    0.0290
     4.0000    0.0676    0.0316

gm(:,1) = 1.0e-105 * [0, NaN, 0, 0.5375]

First, the means seem not to change at all and are completely different compared to the initialization by kmeans.

And according to the output of gm(:,1), every sample (not just the first one as here) corresponds to only one Gaussian component. Shouldn't each sample be "partially distributed" among every Gaussian component?

EDIT 3:

So I guess the problem of mu not changing was the first line of my M-step: mu = zeros(K,3);

To account for the underflow problem, I am currently trying to use the log of the Gaussian:

function logPdf = logmvnpdf(X, mu, sigma, D)
    Xmu = X-mu;
    logPdf = log(1/sqrt(det(sigma)*(2*pi)^D)) + (-0.5*Xmu*inv(sigma)*Xmu');
end
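
As a quick sanity check of this helper (assuming the Statistics Toolbox is available for mvnpdf), its output should agree with log(mvnpdf(...)) up to floating-point error:

% Sanity-check sketch for logmvnpdf; the test values are arbitrary.
x  = [0.1, 0.2, 0.3];
m0 = zeros(1,3);
s0 = eye(3);
assert(abs(logmvnpdf(x, m0, s0, 3) - log(mvnpdf(x, m0, s0))) < 1e-10);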

The new problem is the covariance matrix sigma. Matlab complains: Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.

After 6 rounds I get imaginary values for gm (the Gaussian distribution).
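
One common safeguard I have seen suggested (a sketch only, not yet in my code; the 1e-6 ridge value is an arbitrary choice) is to add a small multiple of the identity to each sigma after the M-step so it stays positive definite:

% Hypothetical regularization sketch: a small ridge on the diagonal keeps
% sigma positive definite, so det() stays positive and the solve stays stable.
ridge = 1e-6 * eye(D);
for k = 1:K
    sigma(:,:,k) = sigma(:,:,k) + ridge;
end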

The updated E-step now looks like this:

gm = zeros(K,N); % gaussian component in the nominator
sumGM = zeros(N,1); % denominator of responsibilities


for k = 1:K
    for i = 1:N
        %gm(k,i) = p(k)/sqrt(det(sigma(:,:,k))*(2*pi)^D)*exp(-0.5*Xmu*inv(sigma(:,:,k))*Xmu');
        %gm(k,i) = p(k)*mvnpdf(X(i,:),mu(k,:),sigma(:,:,k));
        gm(k,i) = log(p(k)) + logmvnpdf(X(i,:), mu(k,:), sigma(:,:,k), D);
        sumGM(i) = sumGM(i) + gm(k,i);
    end
end

1 answer:

Answer 0 (score: 3):

It looks like you should be able to use a scale factor scale(i) to bring gm(k,i) into representable range, because if you multiply gm(k,i) by scale(i) this will end up multiplying sumGM(i) as well, and be cancelled away when res(k,i) = gm(k,i)/sumGM(i) is computed.

In theory I would make scale(i) = 1/max_k(exp(-0.5*(X(i,:)-mu(k,:)))), and in practice calculate it without doing the exponentiation, so you end up dealing with its log, max_k(-0.5*(X(i,:)-mu(k,:))) - this gives you a common term you can add to each -0.5*(X(i,:)-mu(k,:)) before using exp(), and it will keep at least the maximum within representable range - anything that still underflows to zero after this correction you don't care about, because it is vanishingly small compared to the other contributions.
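
For concreteness, a minimal sketch of that suggestion (the full quadratic form with sigma is spelled out here as an assumption; the answer writes the exponent in shorthand):

% Sketch of the suggested scaling: compute the log of each exponent first,
% subtract the per-sample maximum m(i) before calling exp(), and note that
% the common factor exp(-m(i)) cancels in res(k,i) = gm(k,i)/sumGM(i).
logExponent = zeros(K,N);
for k = 1:K
    for i = 1:N
        Xmu = X(i,:) - mu(k,:);
        logExponent(k,i) = -0.5 * (Xmu * (sigma(:,:,k) \ Xmu'));
    end
end
m = max(logExponent, [], 1);           % common per-sample term, 1 x N

gm = zeros(K,N);
sumGM = zeros(N,1);
for k = 1:K
    coef = p(k) / sqrt(det(sigma(:,:,k)) * (2*pi)^D);
    for i = 1:N
        gm(k,i) = coef * exp(logExponent(k,i) - m(i));   % scaled numerator
        sumGM(i) = sumGM(i) + gm(k,i);
    end
end
% The true log-likelihood can be recovered as sum over i of log(sumGM(i)) + m(i).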