Multivariate Hidden Markov Model implementation problem

Posted: 2018-10-01 19:52:36

Tags: matlab machine-learning hidden-markov-models eye-tracking

I need to classify the signal coming from an eye tracker. I have a vector of eye velocities at given times. The idea is that when the velocity is low it is very likely a fixation, and when the velocity is high it is a saccade. On top of that, each point depends on the previous one. This leads to using a multivariate hidden Markov model (HMM) to classify whether a sample is a saccade. The model is a two-state system, for example this. In total I have 8 parameters to learn: the mean and variance of each Gaussian, plus two transition probabilities for each state. To estimate the parameters I use the PMTK3 toolbox in MATLAB; I have not found another MATLAB toolbox that supports Gaussian HMMs. My code is as follows:

% One sequence of eye-velocity samples (d = 1, T = 12)
exampleData = [25.2015 24.1496 33.0422 21.9321 15.5897 9.1592 19.9374 15.2868 9.6767 39.8610 22.2483 31.6508];
% Prior on the Gaussian emission parameters
prior.mu = [10 10];
prior.Sigma = [0.5; 0.5];
prior.k = 2;
prior.dof = prior.k + 1;
model = hmmFit(exampleData, 2, 'gauss', 'verbose', true, 'piPrior', [3 2], ...
    'emissionPrior', prior, 'nRandomRestarts', 2, 'maxIter', 10);

As far as I understand, prior.k is how many clusters it should find, which in my case should be two: saccades and fixations. When I run this, it produces the following error message:

Error using chol
Matrix must be positive definite.
Error in gaussSample (line 20)
A = chol(Sigma, 'lower');
Error in kmeansFit (line 42)
    noise = gaussSample(zeros(1, length(v)), 0.01*diag(v), K);
Error in kmeansInitMixGauss (line 7)
[mu, assign] = kmeansFit(data, K);
Error in mixGaussFit>initGauss (line 38)
        [mu, Sigma, model.mixWeight] = kmeansInitMixGauss(X, nmix);
Error in mixGaussFit>@(m,X,r)initGauss(m,X,r,initParams,prior) (line 24)
initFn = @(m, X, r)initGauss(m, X, r, initParams, prior);
Error in emAlgo (line 56)
model = init(model, data, restartNum);
Error in mixGaussFit (line 25)
[model, loglikHist] = emAlgo(model, data, initFn, @estep, @mstep , ...
Error in hmmFitEm>initWithMixModel (line 244)
    mixModel    = mixGaussFit(stackedData, nstates,  'verbose', false, 'maxIter', 10);
Error in hmmFitEm>initGauss (line 146)
        model = initWithMixModel(model, data);
Error in hmmFitEm>@(m,X,r)initFn(m,X,r,emissionPrior) (line 45)
initFn = @(m, X, r)initFn(m, X, r, emissionPrior);
Error in emAlgo (line 56)
model = init(model, data, restartNum);
Error in emAlgo (line 38)
        [models{i}, llhists{i}] = emAlgo(model, data, init, estep,...
Error in hmmFitEm (line 46)
[model, loglikHist] = emAlgo(model, data, initFn, @estep, @mstep, EMargs{:});
Error in hmmFit (line 69)
[model, loglikHist] = hmmFitEm(data, nstates, type, varargin{:}); 

When I run the toolbox's example code, it works, and I cannot figure out why:

data = [train4'; train5'];
data = data{2};
d = 13;

% test with a bogus prior
if 1
    prior.mu = ones(1, d);
    prior.Sigma = 0.1*eye(d);
    prior.k = d;
    prior.dof = prior.k + 2;
else 
    prior.mu = [1 3 5 2 9 7 0 0 0 0 0 0 1];
    prior.Sigma = randpd(d) + eye(d);
    prior.k = 12;
    prior.dof = 15;
end

model = hmmFit(data, 2, 'gauss', 'verbose', true, 'piPrior', [1 1], ...
    'emissionPrior', prior, 'nRandomRestarts', 2, 'maxIter', 10);

Please explain to me what I am misunderstanding about HMMs.

1 answer:

Answer 0 (score: 0):

I tried many things until I decided to shorten the data sequences so they were all the same length. That worked, and it let me discover that some blocks of data worked while others caused the error. On closer inspection, this was because there were some NaNs in the data, and the HMM does not know how to handle them.
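
For reference, a minimal MATLAB sketch of that cleanup step along the lines described above. The variable names velA, velB, velC are hypothetical raw velocity vectors (1-by-T row vectors), and the hmmFit call simply mirrors the one from the question; this is an illustration, not the exact code that was used.

    % Hypothetical raw velocity sequences, possibly containing NaNs
    rawSeqs = {velA, velB, velC};

    % Drop NaN samples from every sequence
    cleanSeqs = cellfun(@(v) v(~isnan(v)), rawSeqs, 'UniformOutput', false);

    % Trim all sequences to the length of the shortest one
    minLen    = min(cellfun(@numel, cleanSeqs));
    cleanSeqs = cellfun(@(v) v(1:minLen), cleanSeqs, 'UniformOutput', false);

    % Fit the two-state Gaussian HMM on the cleaned data (same options as in the question)
    model = hmmFit(cleanSeqs, 2, 'gauss', 'verbose', true, 'piPrior', [3 2], ...
        'emissionPrior', prior, 'nRandomRestarts', 2, 'maxIter', 10);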