HMM_Forward_Backward

Algorithm overview:

This post covers, for a hidden Markov model with given parameters A (state transition matrix), B (observation matrix), and pi (initial state distribution), the two algorithms for computing the probability of an observation sequence: the forward algorithm and the backward algorithm.
The forward algorithm defines alpha(t, i) as the probability of being in state i at time t while having observed the partial sequence (o_1, o_2, ..., o_t); a recurrence over t then lets it be computed by dynamic programming.
The backward algorithm defines beta(t, i) as the probability of observing the remaining sequence (o_{t+1}, o_{t+2}, ..., o_T) given that the model is in state i at time t; it is likewise computed by a recurrence and dynamic programming, running backward from t = T.
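The forward recurrence described above can be written very compactly with NumPy. A minimal vectorized sketch (function and variable names are illustrative, not from the post):

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward recursion: alpha[i] = P(o_1..o_t, state_t = i)."""
    alpha = pi * B[:, obs[0]]  # initialization: alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        # recursion: alpha_{t+1}(j) = (sum_i alpha_t(i) * a_ij) * b_j(o_{t+1})
        alpha = (alpha @ A) * B[:, o]
    return alpha  # P(O | lambda) is alpha.sum()
```

Summing the returned vector gives the observation-sequence probability P(O | lambda).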

The code is as follows:

# @File  : HMM_forward_backward.py
# @Author: mfzhu
# @Date  : 2017/5/4
# @Desc  : forward and backward algorithms for the HMM observation probability
import copy

import numpy as np


class HMM_forward(object):
    def __init__(self, state2state, state2observation, pi):
        self.state2state = state2state              # state transition matrix A
        self.state2observation = state2observation  # observation (emission) matrix B
        self.pi = pi                                # initial state probability vector
        self.num_state = len(state2state)           # number of hidden states

    def alpha_func(self, time, sequence):
        # initialize the alpha vector at t = 1: alpha_1(i) = pi_i * b_i(o_1)
        alpha = self.pi * self.state2observation[:, sequence[0]]
        for i in range(1, time):
            next_alpha = []  # alpha vector for the next time step
            for j in range(self.num_state):
                # transition contributions into state j: alpha_t(i) * a_ij
                per_state = alpha * self.state2state[:, j]
                next_alpha.append(np.sum(per_state) * self.state2observation[j, sequence[i]])
            alpha = copy.deepcopy(next_alpha)  # advance to the next time step
        return np.array(alpha)


class HMM_backward(object):
    def __init__(self, state2state, state2observation, pi):
        self.state2state = state2state              # state transition matrix A
        self.state2observation = state2observation  # observation (emission) matrix B
        self.pi = pi                                # initial state probability vector
        self.num_state = len(state2state)           # number of hidden states

    def beta_func(self, time, sequence):
        # beta vector at t = T is all ones; stepping backward yields beta at `time`
        beta = np.ones(self.num_state)
        for i in range(len(sequence), time, -1):
            prebeta = []  # beta vector for the previous time step
            if i == 1:
                # only reached when time = 0: weighting by pi_i * b_i(o_1)
                # makes the sum of the result equal P(O | lambda)
                beta = beta * self.pi * self.state2observation[:, sequence[0]]
                break
            for j in range(self.num_state):
                # beta_{t-1}(j) = sum_i a_ji * b_i(o_t) * beta_t(i)
                prebeta.append(np.sum(self.state2state[j, :] * self.state2observation[:, sequence[i - 1]] * beta))
            beta = copy.deepcopy(prebeta)  # step back one time step
        return np.array(beta)


if __name__ == '__main__':
    state2state = np.array([[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
    state2observation = np.array([[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]])
    pi = np.array([0.2, 0.4, 0.4])
    sequence = [0, 1, 0]
    forward = HMM_forward(state2state, state2observation, pi)
    backward = HMM_backward(state2state, state2observation, pi)
    print("the alpha matrix:")
    print(forward.alpha_func(1, sequence))
    print(forward.alpha_func(2, sequence))
    print(forward.alpha_func(3, sequence))
    print("the beta matrix: ")
    print(backward.beta_func(1, sequence))
    print(backward.beta_func(2, sequence))
    print(backward.beta_func(3, sequence))

Output:

Comparing the printed alpha and beta vectors with the results in Li Hang's book confirms that the implementation is correct.
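Beyond eyeballing the vectors, the two directions can be checked against each other: the sequence probability P(O | lambda) must come out the same whether computed from the final alpha vector or from the initial beta vector. A standalone check on the same example model (recomputing both recursions directly, independent of the classes above):

```python
import numpy as np

# the same example model as in the script above
A = np.array([[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
obs = [0, 1, 0]

# forward direction: P(O|lambda) = sum_i alpha_T(i)
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]

# backward direction: P(O|lambda) = sum_i pi_i * b_i(o_1) * beta_1(i)
beta = np.ones(len(pi))
for o in reversed(obs[1:]):
    beta = A @ (B[:, o] * beta)

print(alpha.sum())                       # forward probability
print((pi * B[:, obs[0]] * beta).sum())  # backward probability, same value
```

Both directions give approximately 0.130218 for this model and observation sequence, in agreement with each other.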
