Introduction
In this part, we continue our study of continuous-time Markov chains, this time examining their long-term behavior and how to compute the quantities that describe it.
Long-Term Behavior
Definition: Limiting Distribution
A probability vector $v$ represents a limiting distribution if, for all states $i$ and $j$, $$ \lim_{t \to \infty} P_{ij}(t) = v_j. $$
Definition: Stationary Distribution
A probability vector $v$ represents a stationary distribution if, for all $t \geq 0$, $$ v = v P(t). $$ In the context of continuous-time Markov chains, $v$ is a stationary distribution if and only if $0 = v Q$, where $Q$ is the generator matrix of the chain.
Proof: Stationary Distribution and Generator Matrix
First, assume $vQ = 0$. Then, $$ \begin{align*} \frac{d}{dt} (v P(t)) & = v \frac{d}{dt} P(t) \newline & = v Q P(t) \newline & = 0, \newline \end{align*} $$ where the second equality is the Kolmogorov backward equation $P^{\prime}(t) = Q P(t)$ and the third holds as $vQ = 0$ by assumption. Thus, $v P(t)$ is constant in $t$, i.e., $v P(t) = v P(0) = v I = v$ for all $t \geq 0$.
Conversely, let $v$ be a stationary distribution, i.e., $v = v P(t)$ for all $t \geq 0$. Then $v P(t)$ is constant in $t$, so $$ \begin{align*} 0 & = \frac{d}{dt} (v P(t)) \newline & = v \frac{d}{dt} P(t) \newline & = v Q P(t). \newline \end{align*} $$ In particular, for $t = 0$ we get $v Q P(0) = v Q I = v Q = 0$. Thus, the proof is complete $_\blacksquare$.
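As a quick numerical check of this equivalence, here is a minimal sketch (the generator matrix is made up for illustration; NumPy and SciPy are assumed available): we solve $vQ = 0$ and verify that $v P(t) = v$ for several values of $t$, with $P(t) = e^{tQ}$.

```python
# Sketch: a vector v with vQ = 0 satisfies v P(t) = v for all t,
# where P(t) = exp(tQ). The generator Q is made up for illustration.
import numpy as np
from scipy.linalg import expm, null_space

Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -1.0, 0.0],
              [1.0, 1.0, -2.0]])

v = null_space(Q.T).ravel()  # solves Q^T v^T = 0, i.e., vQ = 0
v /= v.sum()                 # normalize to a probability vector

for t in [0.5, 2.0, 25.0]:
    print(np.allclose(v @ expm(t * Q), v))  # True for each t
```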
Discrete-Time vs. Continuous-Time Markov Chains in Long-Term Behavior
Recall: Long-Term Behavior of Discrete-Time Markov Chains
Recall that an ergodic (discrete-time) Markov chain has a unique positive stationary distribution that is the limiting distribution.
Further, for discrete-time chains, $v$ is stationary if and only if $v = v P$, where $P$ is the transition matrix of the chain. Lastly, a discrete-time chain is ergodic if it is irreducible (i.e., it is possible to get to any state from any state), aperiodic (i.e., the chain does not get “stuck” in cycles), and all states have finite expected return time.
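To make the recalled facts concrete, here is a minimal sketch (the transition matrix is made up for illustration; NumPy assumed) computing the stationary distribution of a small ergodic discrete-time chain as a left eigenvector of $P$ for eigenvalue 1.

```python
# Sketch: stationary distribution v = vP of an ergodic discrete-time
# chain, computed as the left eigenvector of P for eigenvalue 1.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# Left eigenvectors of P are right eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
v /= v.sum()  # normalize so the entries sum to 1

print(v)                      # [0.25, 0.5, 0.25]
print(np.allclose(v @ P, v))  # True: v = vP
```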
Intuition: Long-Term Behavior of Continuous-Time Markov Chains
A continuous-time Markov chain is irreducible if for all $i$ and $j$ there exists a $t > 0$ such that $P_{ij}(t) > 0$, i.e., it is possible to get to any state from any state.
However, periodic continuous-time Markov chains do not exist. If $P_{ij}(t) > 0$ for some $t > 0$, then $P_{ij}(t) > 0$ for all $t > 0$.
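This positivity can be observed numerically; here is a minimal sketch (the generator, a one-way cycle, is made up for illustration) showing that every entry of $P(t) = e^{tQ}$ is strictly positive even for very small $t > 0$.

```python
# Sketch: for an irreducible chain, every entry of P(t) = exp(tQ) is
# strictly positive for every t > 0 (no periodicity in continuous
# time). The generator is a one-way cycle 1 -> 2 -> 3 -> 1.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0]])

for t in [1e-3, 1.0, 10.0]:
    print((expm(t * Q) > 0).all())  # True for each t
```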
Theorem: Fundamental Limit Theorem for Continuous-Time Markov Chains
Let $\{X_t\}_{t \geq 0}$ be a finite, irreducible, continuous-time Markov chain with transition function $P(t)$. Then there exists a unique positive stationary distribution vector $v$ which is also the limiting distribution.
The limiting distribution of such a chain can be found as the unique probability vector $v$ satisfying $v Q = 0$.
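As a numerical illustration of the theorem (a sketch with a made-up generator; NumPy and SciPy assumed), the unique solution of $vQ = 0$ also appears as the limit of the rows of $P(t)$:

```python
# Sketch: the solution of vQ = 0 (with entries summing to 1) matches
# the limit of every row of P(t) = exp(tQ) as t grows.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [2.0, 0.0, -2.0]])

A = np.vstack([Q.T, np.ones(3)])  # vQ = 0 together with sum(v) = 1
b = np.array([0.0, 0.0, 0.0, 1.0])
v, *_ = np.linalg.lstsq(A, b, rcond=None)

P_large = expm(100.0 * Q)  # P(t) for large t
print(np.allclose(P_large, np.tile(v, (3, 1))))  # True: every row ≈ v
```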
Absorbing States
Definition: Absorbing State
An absorbing state is one where the rate of leaving it is zero.
Assume $\{X_t\}_{t \geq 0}$ is a continuous-time Markov chain with $k$ states. Assume the last state is absorbing and the rest are not (if the absorbing state is reachable from every other state, the remaining states are transient).
We have that $q_k = 0$, so the entire last row of $Q$ consists of zeros. We thus get, $$ Q = \begin{bmatrix} V & \star \newline \mathbf{0} & 0 \newline \end{bmatrix}. $$ Let $F$ be the $(k - 1) \times (k - 1)$ matrix such that $F_{ij}$ (with $i < k, j < k$) is the expected total time spent in state $j$ before absorption when the chain starts in state $i$. We can show that $F = -V^{-1}$.
Proof: Expected Time in Transient States Before Absorption
Generally, we can define $D$ as the diagonal matrix with $(\frac{1}{q_1}, \ldots, \frac{1}{q_k})$ along its diagonal and all other entries zero. If there are no absorbing states, the transition matrix $\tilde{P}$ of the embedded chain satisfies, $$ \tilde{P} = DQ + I_k. $$ Write $A\_$ for a square matrix without its last row and column, so, e.g., $Q\_ = V$.
If the last state is absorbing, i.e., $q_k = 0$, we get, $$ \tilde{P}\_ = D\_ Q\_ + I_{k - 1}. $$ Further, let $F^{\prime}$ be the matrix where $F^{\prime}_{ij}$ is the expected number of visits to state $j$ before absorption when starting in state $i$. As the lengths of stays and the changes in state are independent, each visit to state $j$ lasts $\frac{1}{q_j}$ in expectation, so $F_{ij} = F^{\prime}_{ij} \frac{1}{q_j}$ and thus, $$ F = F^{\prime} D\_. $$ By the theory for discrete-time Markov chains with absorbing states, we have, $$ F^{\prime} = (I_{k - 1} - \tilde{P}\_)^{-1}. $$ Thus, $$ \begin{align*} F & = F^{\prime} D\_ \newline & = (I_{k - 1} - \tilde{P}\_)^{-1} D\_ \newline & = (I_{k - 1} - (D\_ Q\_ + I_{k - 1}))^{-1} D\_ \newline & = (-D\_ Q\_)^{-1} D\_ \newline & = -Q\_^{-1} D\_^{-1} D\_ \newline & = -Q\_^{-1} \newline & = -V^{-1} \ _\blacksquare \end{align*} $$
$F$ is called the fundamental matrix (similar to the discrete-time case).
Note
If the chain starts in state $i$, the expected time until absorption is the sum of the $i$-th row of $F$. Thus, the expected times until absorption are given by the matrix product $F \mathbf{1}$ of $F$ with a column vector of ones.
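Here is a minimal sketch of these computations (the rates are made up for illustration; NumPy assumed): with the last state absorbing, $F = -V^{-1}$ gives the expected times per transient state and $F \mathbf{1}$ the expected times until absorption.

```python
# Sketch: fundamental matrix F = -V^{-1} for a chain whose last
# state is absorbing (last row of Q is zero).
import numpy as np

Q = np.array([[-2.0, 1.0, 1.0],
              [1.0, -3.0, 2.0],
              [0.0, 0.0, 0.0]])  # state 3 is absorbing

V = Q[:2, :2]          # upper-left block of Q
F = -np.linalg.inv(V)

print(F)               # F[i, j]: expected time in state j starting from i
print(F @ np.ones(2))  # expected times until absorption: [0.8, 0.6]
```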
Stationary Distribution of the Embedded Markov Chain
Intuition: Stationary Distribution of the Embedded Markov Chain
The embedded chain of a continuous-time Markov chain is the discrete-time Markov chain that records the sequence of states visited while ignoring the holding times.
The stationary distribution for the embedded chain and for the continuous-time chain are generally not the same!
However, there is a simple relationship: a probability vector $\pi$ is a stationary distribution for a continuous-time Markov chain if and only if $\psi$ is a stationary distribution for the embedded chain, where $\psi_j = C \pi_j q_j$ for a constant $C$ making the entries sum to 1.
Proof: Stationary Distribution of the Embedded Markov Chain
Using the notation from above (assuming no absorbing states, so all $q_i > 0$), we have $\tilde{P} = DQ + I$. For any vector $v$, we get, $$ v \tilde{P} = v (DQ + I) = v DQ + v, $$ so $v \tilde{P} = v$ if and only if $v DQ = 0$. Now take $v = \psi$ with $\psi_j = C \pi_j q_j$, i.e., $\psi = C \pi D^{-1}$. Then $\psi D Q = C \pi D^{-1} D Q = C \pi Q$, so $\psi \tilde{P} = \psi$ if and only if $\pi Q = 0$, i.e., if and only if $\pi$ is a stationary distribution for the continuous-time chain $_\blacksquare$.
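A minimal numerical sketch of this relationship (the generator is made up for illustration; NumPy and SciPy assumed): starting from $\pi$ with $\pi Q = 0$, the reweighted vector $\psi_j = C \pi_j q_j$ is stationary for the embedded chain $\tilde{P} = DQ + I$.

```python
# Sketch: if pi Q = 0, then psi with psi_j proportional to pi_j * q_j
# is stationary for the embedded chain P_tilde = DQ + I.
import numpy as np
from scipy.linalg import null_space

Q = np.array([[-2.0, 1.0, 1.0],
              [3.0, -4.0, 1.0],
              [1.0, 2.0, -3.0]])

q = -np.diag(Q)              # holding rates q_i
D = np.diag(1.0 / q)
P_tilde = D @ Q + np.eye(3)  # transition matrix of the embedded chain

pi = null_space(Q.T).ravel()
pi /= pi.sum()               # stationary for the continuous-time chain

psi = pi * q
psi /= psi.sum()             # the constant C is absorbed by normalizing

print(np.allclose(psi @ P_tilde, psi))  # True: psi is stationary
```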