Problems building a discrete HMM in PyMC3

Asked: 2017-02-09 20:27:16

Tags: python hidden-markov-models pymc3

I am playing with a toy discrete HMM model in PyMC3 and I have run into some problems.

My initial code is the following:

# Known transition and emission matrices
import numpy as np
import pymc3
import theano.tensor as tt

with pymc3.Model() as hmm1:
    T = tt.as_tensor_variable(Tr)  # true transition matrix
    E = tt.as_tensor_variable(Er)  # true emission matrix
    # State model: uniform prior over the initial state
    p0 = np.ones(num_states) / num_states
    # No shape, so each state is a scalar tensor
    states = [pymc3.Categorical('s0', p=p0)]
    emissions = [pymc3.Categorical('z0', p=E[:, states[0]], observed=Zr[:, 0])]
    for i in range(1, num_times):
        states.append(pymc3.Categorical('s{0}'.format(i), p=T[:, states[i-1]]))
        emissions.append(pymc3.Categorical('z{0}'.format(i), p=E[:, states[i]], observed=Zr[:, i]))

Here Tr and Er are the true transition and emission matrices.
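The question does not give concrete values for Tr, Er and Zr; for reference, a minimal toy setup (with purely illustrative sizes and random values) that makes the snippet above self-contained might look like this:

```python
import numpy as np

num_states = 3      # hypothetical number of hidden states
num_emissions = 2   # hypothetical number of emission symbols
num_times = 10      # hypothetical length of each observed sequence

rng = np.random.RandomState(0)

# True transition matrix: column j holds P(next state | current state j),
# matching the column indexing T[:, states[i-1]] used in the model
Tr = rng.dirichlet(np.ones(num_states), size=num_states).T

# True emission matrix: column j holds P(emission | state j)
Er = rng.dirichlet(np.ones(num_emissions), size=num_states).T

# Observed emissions: one row per sequence, one column per time step
Zr = rng.randint(num_emissions, size=(5, num_times))
```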

My questions are:

  1. This model does not seem to explore the full range of state values; it keeps a single value for each state (see the notebook).

  2. I have not found a way to define states and emissions in a more pymc-tonic way, for example using shape=...

  3. Also, when I extend the model to account for unknown transition or emission matrices, I run into indexing problems and am forced to use theano.tensor.clip, as in the following code:

    # Unknown transitions and emissions
    with pymc3.Model() as hmm3:
        # Transition "matrix": one Dirichlet-distributed column per current state
        a_t = np.ones((num_states, num_states))
        T = pymc3.Dirichlet('T', a=a_t, shape=(num_states, num_states))
        # Emission "matrix": one Dirichlet-distributed column per state
        a_e = np.ones((num_emissions, num_states))
        E = pymc3.Dirichlet('E', a=a_e, shape=(num_emissions, num_states))
        # State model: uniform prior over the initial state
        p0 = np.ones(num_states) / num_states
        # No shape, so each state is a scalar tensor
        states = [pymc3.Categorical('s0', p=p0)]
        # Clip keeps the integer state inside [0, num_states-1] before indexing
        clp_state = tt.clip(states[0], 0, num_states-1)
        emissions = [pymc3.Categorical('z0', p=E[:, clp_state], observed=Zr[0])]
        for i in range(1, num_times):
            clp_prev_state = tt.clip(states[i-1], 0, num_states-1)
            states.append(pymc3.Categorical('s{0}'.format(i), p=T[:, clp_prev_state]))
            clp_state = tt.clip(states[i], 0, num_states-1)
            emissions.append(pymc3.Categorical('z{0}'.format(i), p=E[:, clp_state], observed=Zr[i]))
    
  4. See the following notebook with a complete code.
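For context on item 3: a proposal for a scalar Categorical state can momentarily fall outside [0, num_states-1] during sampling, and advanced indexing into the Dirichlet matrix with such a value fails; the clip is a guard against that. A plain NumPy sketch of the same guard (illustrative values only):

```python
import numpy as np

num_states = 3
# Columns of T sum to 1: column j is P(next state | current state j)
T = np.random.dirichlet(np.ones(num_states), size=num_states).T

# A sampled state is an integer index; clipping guarantees it stays
# inside [0, num_states-1] so the column lookup T[:, s] is always valid.
s_raw = 5                                   # hypothetical out-of-range proposal
s = int(np.clip(s_raw, 0, num_states - 1))  # clipped back to 2
p_next = T[:, s]                            # a valid probability vector
```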

1 answer:

Answer 0: (score: 0)

A more pymc3-ic (or rather more theano-ic) way to define the hidden Markov state model is the following:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

class HMMStatesN(pm.Categorical):
    """
    Hidden Markov Model States
    Parameters
    ----------
    P : tensor
        transition probabilities
        shape = (N_states, N_states)

    PA : tensor
         equilibrium probabilities
         shape = (N_states)

    """

    def __init__(self, PA=None, P=None,
                 *args, **kwargs):
        # Deliberately skip pm.Categorical.__init__ (which requires p)
        # and call pm.Discrete.__init__ instead
        super(pm.Categorical, self).__init__(*args, **kwargs)
        self.P = P
        self.PA = PA
        self.k = N_states  # N_states is taken from the enclosing scope
        self.mode = tt.cast(0, dtype='int64')

    def logp(self, x):
        P = self.P
        PA = self.PA

        # For each time step, select the row of transition
        # probabilities that corresponds to the previous state
        PS = P[x[:-1]]

        x_i = x[1:]
        # Log-probability of each transition, plus that of the initial state
        ou_like = pm.Categorical.dist(PS).logp(x_i)
        return pm.Categorical.dist(PA).logp(x[0]) + tt.sum(ou_like)
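The logp above is the standard Markov-chain log-likelihood: the log-probability of the initial state under PA plus the sum of log transition probabilities along the sequence. A plain NumPy version of the same computation (assuming, as in the indexing P[x[:-1]] above, that row i of P holds the outgoing probabilities of state i):

```python
import numpy as np

def hmm_states_logp(x, PA, P):
    """Log-likelihood of a discrete state sequence x under
    initial distribution PA and transition matrix P (rows sum to 1)."""
    x = np.asarray(x)
    logp = np.log(PA[x[0]])                 # initial state
    logp += np.log(P[x[:-1], x[1:]]).sum()  # transitions x[t-1] -> x[t]
    return logp

PA = np.array([0.5, 0.5])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# For x = [0, 0, 1, 1]: log(0.5) + log(0.9) + log(0.1) + log(0.8)
```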

I also created a similar class for Gaussian emissions:

class HMMGaussianEmmisionsN(pm.Continuous):
    """
    Hidden Markov Model Gaussian Emissions
    Parameters
    ----------
    A : tensor
        prior for Gaussian emission mu
        shape = (2, N_states)

    S : tensor
        prior for Gaussian emission width
        shape = (2, N_states)

    states : tensor
         hidden state sequence (one integer per observation)

    """

    def __init__(self, A=None, S=None, states=None,
                 *args, **kwargs):
        super(HMMGaussianEmmisionsN, self).__init__(*args, **kwargs)
        self.A = A
        self.S = S
        self.states = states
        self.mean = 0.

    def logp(self, x):
        A = self.A
        S = self.S
        states = self.states

        # Pick the mean and width corresponding to each hidden state
        AS = A[states]
        SS = S[states]

        # Sum of independent Gaussian log-densities
        ou_like = pm.Normal.dist(mu=AS, sd=SS).logp(x)
        return tt.sum(ou_like)
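The emission logp is simply a sum of independent Gaussian log-densities whose mean and width are selected by the hidden state at each time step. A NumPy/SciPy sketch of the same computation (values are illustrative):

```python
import numpy as np
from scipy.stats import norm

def gaussian_emission_logp(x, states, A, S):
    """Sum of Normal log-densities; A[k] and S[k] are the mean and
    standard deviation of emissions from hidden state k."""
    states = np.asarray(states)
    return norm.logpdf(x, loc=A[states], scale=S[states]).sum()

A = np.array([0.0, 5.0])   # per-state means (illustrative)
S = np.array([1.0, 2.0])   # per-state standard deviations
states = np.array([0, 1, 0])
x = np.array([0.1, 4.8, -0.3])
```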

These can be used in a model as follows:

from scipy import optimize
import theano.tensor.slinalg as sla  # for the linear solve below

with pm.Model() as model:
    # N_states state model

    P = pm.Dirichlet('P', a=np.ones((N_states, N_states)), shape=(N_states, N_states))
    A = pm.InverseGamma('A', alpha=alphaA, beta=betaA, shape=(N_states))
    S = pm.InverseGamma('S', alpha=alphaS, beta=betaS, shape=(N_states))

    # Equilibrium distribution: solve (I - P + 1)^T PA = 1
    AA = tt.eye(N_states) - P + tt.ones(shape=(N_states, N_states))
    PA = pm.Deterministic('PA', sla.solve(AA.T, tt.ones(shape=(N_states))))

    states = HMMStatesN('states', PA=PA, P=P, shape=(len(measurement)))

    emmision = HMMGaussianEmmisionsN('emmision', A=A, S=S, states=states, observed=measurement)

    start = pm.find_MAP(fmin=optimize.fmin_powell)
    step1 = pm.Metropolis(vars=[P, A, S, PA, emmision])
    step2 = pm.CategoricalGibbsMetropolis(vars=[states])
    trace = pm.sample(10000, start=start, step=[step1, step2])
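The lines computing PA use a standard trick for the stationary distribution of a Markov chain: solving (I - P + J)^T pi = 1, where J is the all-ones matrix, is equivalent to imposing pi P = pi together with the normalization sum(pi) = 1. A NumPy check of the same linear solve (assuming, as in the model, that rows of P are outgoing transition probabilities):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
N = P.shape[0]

# Solve (I - P + 1)^T pi = 1 for the equilibrium distribution pi
AA = np.eye(N) - P + np.ones((N, N))
pi = np.linalg.solve(AA.T, np.ones(N))
```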

I think your model could be coded in a similar way. It is important to use the CategoricalGibbsMetropolis stepper for the HMMstates (see this question for more details). See a more detailed code example here.