Next: A Change-point Detection Algorithm Up: Segmental Semi-Markov Models for Previous: Introduction


A Viterbi-like Algorithm to Compute the Most Likely State Sequence (MLSS)

To find the most likely state (i.e., segment label) sequence $\hat{\mathbf{s}}=s_1 s_2 \ldots s_t \ldots$ for a data sequence $\mathbf{y}=y_1 y_2 \ldots y_t \ldots$, we have the following recursive Viterbi-like algorithm based on dynamic programming. At each time $t$, this algorithm calculates the quantity $\hat{p}_i^{(t)}$ for each state $i$, $1 \leq i \leq K$, where $\hat{p}_i^{(t)}$ is defined as

\begin{displaymath}
\hat{p}_i^{(t)}=\max_{\mathbf{s}} \{p(\mathbf{s}\vert y_1y_2\ldots
y_t) \vert \mathbf{s}=s_1s_2\ldots s_t,~s_t=i\}.
\end{displaymath} (2)

In other words, $\hat{p}_i^{(t)}$ is the likelihood of the most likely state sequence that ends with state $i$ at time $t$ (i.e., $y_t$ is the last point of a segment whose label is $i$).

The recursive function for calculating $\hat{p}_i^{(t)}$ is

\begin{displaymath}
\hat{p}_i^{(t)}=\max_{d_i} \left( \max_j \hat{p}_j^{(t-d_i)}
A_{ji} \right)
p(d_i) p(y_{t-d_i+1} \ldots y_t\vert\theta_i)
\end{displaymath} (3)

In the above equation, the outer maximization ($\max_{d_i}$) is over all possible values of the duration $d_i$ of state $i$. Since $y_t$ is fixed to be the last point of the segment in state $i$, the last point of the previous segment is $y_{t-d_i}$. For a given $d_i$, the inner maximization ($\max_j$) is over all possible previous states $j$ that transition to state $i$ at time $t-d_i$. The state $j$ and the time $t-d_i$ that achieve the maximum value $\hat{p}_i^{(t)}$ are recorded in a table denoted $PREV(i,t)$.

The overall most likely state sequence for the data sequence $y_1 y_2 \ldots y_t$ is therefore the one with likelihood $\max_i \hat{p}_i^{(t)}$, and it can be recovered by tracing back through the table $PREV(i,t)$.
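The recursion in Equation (3) and the traceback through $PREV(i,t)$ can be sketched as follows. This is a minimal illustrative implementation, not the paper's own code: the function names, the initial-state probabilities `pi` (the text leaves initialization of the first segment implicit), the duration cap `max_dur`, and the toy two-state model at the bottom are all assumptions. Plain probabilities are used for readability; a practical implementation would work in log space to avoid underflow.

```python
import numpy as np

def mlss_viterbi(T, K, A, pi, dur_prob, seg_lik, max_dur):
    """Most likely state sequence (MLSS) via the Viterbi-like recursion.

    A[j, i]        -- probability of a transition from state j to state i
    pi[i]          -- initial-state probability (an assumption: the recursion
                      in the text leaves initialization implicit)
    dur_prob(i, d) -- duration probability p(d_i) for state i
    seg_lik(i, a, b) -- segment likelihood p(y_a ... y_b | theta_i),
                      0-indexed and inclusive
    """
    p_hat = np.zeros((T, K))               # p_hat[t, i] plays the role of \hat{p}_i^{(t)}
    prev = [[None] * K for _ in range(T)]  # PREV(i, t): (previous state j, time t - d_i)

    for t in range(T):
        for i in range(K):
            best, arg = 0.0, None
            for d in range(1, min(max_dur, t + 1) + 1):   # outer max over durations d_i
                start = t - d + 1
                seg = dur_prob(i, d) * seg_lik(i, start, t)
                if start == 0:                            # segment starts the sequence
                    cand = pi[i] * seg
                    if cand > best:
                        best, arg = cand, (None, -1)
                else:                                     # inner max over previous states j
                    scores = p_hat[start - 1] * A[:, i]
                    j = int(np.argmax(scores))
                    cand = scores[j] * seg
                    if cand > best:
                        best, arg = cand, (j, start - 1)
            p_hat[t, i], prev[t][i] = best, arg

    # Trace back from the best final state using the PREV table.
    i, t = int(np.argmax(p_hat[T - 1])), T - 1
    segments = []
    while t >= 0:
        j, t_prev = prev[t][i]
        segments.append((i, t_prev + 1, t))   # (state label, first index, last index)
        i, t = j, t_prev
    return p_hat, segments[::-1]

# Toy model (illustrative, not from the paper): two states, each of which
# "emits" its own label with probability 0.9, with geometric-like durations.
y = [0, 0, 0, 1, 1]
A = np.array([[0.1, 0.9], [0.9, 0.1]])
pi = np.array([0.5, 0.5])
dur_prob = lambda i, d: 0.5 ** d
seg_lik = lambda i, a, b: float(np.prod([0.9 if y[t] == i else 0.1
                                         for t in range(a, b + 1)]))
p_hat, segments = mlss_viterbi(len(y), 2, A, pi, dur_prob, seg_lik, max_dur=len(y))
```

On this toy input the traceback recovers one segment per run of identical observations, splitting the sequence after $y_3$. The cost is $O(T \cdot K \cdot D \cdot (K + C))$, where $D$ is the maximum duration considered and $C$ is the cost of one segment-likelihood evaluation; caching `seg_lik` over segment endpoints is the usual way to keep $C$ small.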


Xianping Ge
2000-05-16