Decoded hidden states change in hmmlearn

Asked: 2018-04-05 03:31:30

Tags: python machine-learning hidden-markov-models

I am using this Python library for hidden Markov models: https://github.com/hmmlearn/hmmlearn

    import collections
    from hmmlearn.hmm import GaussianHMM

    # use the whole sequence for HMM training
    rescaled_model = GaussianHMM(n_components=3, covariance_type="full", n_iter=2000).fit(rescaled_A)
    rescaled_hidden_states = rescaled_model.predict(rescaled_A)

    # use a partial sequence for HMM training
    train_len = int(len(rescaled_A) * 0.999)
    model_test = GaussianHMM(n_components=3, covariance_type="full", n_iter=2000).fit(rescaled_A[:train_len])
    hidden_states_test = model_test.predict(rescaled_A)

    print('rescaled_hidden_states: ', collections.Counter(rescaled_hidden_states))
    print('hidden_states_test: ', collections.Counter(hidden_states_test))

As we can see, the decoded hidden states change because of the train/test data split. The first case has 2305 occurrences of hidden state 1, while the second has 2305 occurrences of hidden state 0. The labels change, which makes it impossible to consistently predict the next hidden state and interpret its meaning.

    rescaled_hidden_states:  Counter({1: 2305, 2: 1537, 0: 744})
    hidden_states_test:  Counter({0: 2305, 2: 1534, 1: 747})
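The near-identical counts (2305/1537/744 vs 2305/1534/747) suggest the two fits learned essentially the same three states under permuted labels; hidden-state indices in an HMM are arbitrary, so each fit can assign them differently. One common workaround is to align the labels of two fitted models by matching their state means (`model.means_` in hmmlearn). A minimal sketch with hypothetical one-dimensional means standing in for the two fitted models:

```python
import itertools
import numpy as np

def align_state_labels(means_a, means_b):
    # Brute-force the permutation p with means_b[p[i]] closest to means_a[i];
    # fine for the handful of states an HMM typically has.
    n = len(means_a)
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n)):
        cost = sum(np.linalg.norm(np.asarray(means_a[i]) - np.asarray(means_b[p]))
                   for i, p in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    # Map each of model B's labels to the matching label of model A.
    return {p: i for i, p in enumerate(best_perm)}

# Hypothetical state means: model B swapped labels 0 and 1 relative to model A
# (stand-ins for the means_ attributes of two GaussianHMM fits).
means_a = [[0.0], [5.0], [10.0]]
means_b = [[5.0], [0.0], [10.0]]
relabel = align_state_labels(means_a, means_b)  # {1: 0, 0: 1, 2: 2}

# Remap model B's decoded state sequence onto model A's labelling.
states_b = np.array([0, 0, 1, 2])
aligned_b = np.array([relabel[s] for s in states_b])
```

After remapping, counts and transition statistics from the two fits refer to the same physical states, so they can be compared directly.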

0 Answers:

No answers yet.