Kolmogorov's criterion

In probability theory, Kolmogorov's criterion, named after Andrey Kolmogorov, is a theorem giving a necessary and sufficient condition for a discrete-time or continuous-time Markov chain to be stochastically identical to its time-reversed version.

Discrete-time Markov chains

The theorem states that an irreducible, positive recurrent, aperiodic Markov chain with transition matrix $P$ is reversible if and only if its transition probabilities satisfy[1]

$$p_{j_1 j_2}\, p_{j_2 j_3} \cdots p_{j_{n-1} j_n}\, p_{j_n j_1} = p_{j_1 j_n}\, p_{j_n j_{n-1}} \cdots p_{j_3 j_2}\, p_{j_2 j_1}$$

for all finite sequences of states

$$j_1, j_2, \ldots, j_n \in S.$$

Here $p_{ij}$ are the components of the transition matrix $P$, and $S$ is the state space of the chain.
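To make the condition concrete, the following Python sketch (an illustration only, not part of the source; the helper name kolmogorov_criterion_holds, the tolerance and the example matrix are assumptions) compares the product of transition probabilities around each cycle of distinct states with the product taken in the reverse direction. Checking these simple cycles is enough in practice, since a general closed loop decomposes into them.

```python
# Illustrative sketch (assumed example): brute-force check of Kolmogorov's
# criterion for a small transition matrix P by comparing cycle products in
# both directions of traversal.
import itertools
import numpy as np

def kolmogorov_criterion_holds(P, tol=1e-12):
    """Return True if the product of transition probabilities around every
    cycle of distinct states equals the product for the reversed cycle."""
    n = P.shape[0]
    for length in range(3, n + 1):          # 2-cycles are symmetric by construction
        for cycle in itertools.permutations(range(n), length):
            forward = np.prod([P[cycle[k], cycle[(k + 1) % length]]
                               for k in range(length)])
            backward = np.prod([P[cycle[(k + 1) % length], cycle[k]]
                                for k in range(length)])
            if abs(forward - backward) > tol:
                return False
    return True

# Hypothetical example: a random walk on a graph with symmetric edge weights,
# p_ij = w_ij / sum_k w_ik, which is a standard example of a reversible chain.
W = np.array([[0.0, 2.0, 1.0, 0.5],
              [2.0, 0.0, 1.5, 1.0],
              [1.0, 1.5, 0.0, 2.5],
              [0.5, 1.0, 2.5, 0.0]])
P = W / W.sum(axis=1, keepdims=True)
print(kolmogorov_criterion_holds(P))        # True for this reversible walk
```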

Example

Consider a section of a Markov chain with states i, j, k and l and the corresponding transition probabilities, as shown in the accompanying figure. Here Kolmogorov's criterion implies that the products of transition probabilities around any closed loop must be equal for the two directions of traversal, so the product around the loop i → j → l → k → i must equal the product around the same loop traversed in the opposite direction:

$$p_{ij}\, p_{jl}\, p_{lk}\, p_{ki} = p_{ik}\, p_{kl}\, p_{lj}\, p_{ji}.$$
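Since the figure's numerical values are not reproduced here, the short self-contained snippet below checks this particular loop identity on the same hypothetical symmetric-weight walk as above, with the states i, j, k, l taken as indices 0, 1, 2, 3.

```python
# Check of the four-state loop identity (hypothetical numbers).
import numpy as np

W = np.array([[0.0, 2.0, 1.0, 0.5],
              [2.0, 0.0, 1.5, 1.0],
              [1.0, 1.5, 0.0, 2.5],
              [0.5, 1.0, 2.5, 0.0]])
P = W / W.sum(axis=1, keepdims=True)        # reversible random walk (symmetric weights)

i, j, k, l = 0, 1, 2, 3
forward  = P[i, j] * P[j, l] * P[l, k] * P[k, i]   # loop i -> j -> l -> k -> i
backward = P[i, k] * P[k, l] * P[l, j] * P[j, i]   # loop i -> k -> l -> j -> i
print(np.isclose(forward, backward))               # True
```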

Proof

Let $X$ be the Markov chain and denote by $\pi$ its stationary distribution (which exists since the chain is positive recurrent).

If the chain is reversible, the equality follows from the detailed balance relation $p_{ji} = \frac{\pi_i p_{ij}}{\pi_j}$: substituting this for every factor on the left-hand side, the stationary probabilities cancel around the closed loop and the right-hand side remains.

Now assume that the equality is fulfilled. Fix states $s$ and $t$. Then

$$\begin{aligned}
\text{P}(X_{n+1}=t, X_n=i_n, \ldots, X_0=s \mid X_0=s) &= p_{s i_1} p_{i_1 i_2} \cdots p_{i_n t} \\
&= \frac{p_{st}}{p_{ts}}\, p_{t i_n} p_{i_n i_{n-1}} \cdots p_{i_1 s} \\
&= \frac{p_{st}}{p_{ts}}\, \text{P}(X_{n+1}=s, X_n=i_1, \ldots, X_0=t \mid X_0=t),
\end{aligned}$$

where the second equality follows by applying the assumed cycle identity to the loop $s, i_1, \ldots, i_n, t, s$.

Now sum both sides of the last equality over all possible ordered choices of the $n$ states $i_1, i_2, \ldots, i_n$. This yields $p_{st}^{(n+1)} = \frac{p_{st}}{p_{ts}}\, p_{ts}^{(n+1)}$, so that $\frac{p_{st}^{(n+1)}}{p_{ts}^{(n+1)}} = \frac{p_{st}}{p_{ts}}$. Letting $n \to \infty$ and using the fact that for an irreducible, aperiodic, positive recurrent chain $\lim_{n\to\infty} p_{ij}^{(n)} = \pi_j$, we obtain $\frac{\pi_t}{\pi_s} = \frac{p_{st}}{p_{ts}}$, that is, $\pi_s p_{st} = \pi_t p_{ts}$, which is the detailed balance condition, so the chain is reversible.
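The limiting step can be illustrated numerically. In the sketch below (same hypothetical chain as above), the ratio of $n$-step transition probabilities $p_{st}^{(n)}/p_{ts}^{(n)}$ coincides with $p_{st}/p_{ts}$ and with the limit $\pi_t/\pi_s$ predicted by the proof.

```python
# Numerical illustration of the limiting step in the proof (assumed example).
import numpy as np

W = np.array([[0.0, 2.0, 1.0, 0.5],
              [2.0, 0.0, 1.5, 1.0],
              [1.0, 1.5, 0.0, 2.5],
              [0.5, 1.0, 2.5, 0.0]])
P = W / W.sum(axis=1, keepdims=True)
pi = W.sum(axis=1) / W.sum()           # stationary distribution of this walk

s, t = 0, 2
Pn = np.linalg.matrix_power(P, 50)     # n-step transition probabilities, n = 50
print(P[s, t] / P[t, s])               # one-step ratio p_st / p_ts
print(Pn[s, t] / Pn[t, s])             # n-step ratio, equal to the same value
print(pi[t] / pi[s])                   # limit pi_t / pi_s from the proof
```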

Continuous-time Markov chains

The theorem states that a continuous-time Markov chain with transition rate matrix $Q$ is reversible (with respect to any of its invariant probability vectors) if and only if its transition rates satisfy[1]

$$q_{j_1 j_2}\, q_{j_2 j_3} \cdots q_{j_{n-1} j_n}\, q_{j_n j_1} = q_{j_1 j_n}\, q_{j_n j_{n-1}} \cdots q_{j_3 j_2}\, q_{j_2 j_1}$$

for all finite sequences of states

$$j_1, j_2, \ldots, j_n \in S.$$

The proof for continuous-time Markov chains follows in the same way as the proof for discrete-time Markov chains.
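As with the discrete-time case, the cycle check can be carried out numerically on the off-diagonal entries of the rate matrix. The sketch below (an assumed example; the generator and the helper name kolmogorov_criterion_rates are not from the source) uses symmetric rates, which give a chain reversible with respect to the uniform distribution, so the rate products around each loop agree.

```python
# Illustrative sketch (assumed example): cycle check on transition rates q_ij.
import itertools
import numpy as np

def kolmogorov_criterion_rates(Q, tol=1e-12):
    """Return True if rate products around every cycle of distinct states
    agree with the products for the reversed cycles."""
    n = Q.shape[0]
    for length in range(3, n + 1):
        for cycle in itertools.permutations(range(n), length):
            forward = np.prod([Q[cycle[k], cycle[(k + 1) % length]]
                               for k in range(length)])
            backward = np.prod([Q[cycle[(k + 1) % length], cycle[k]]
                                for k in range(length)])
            if abs(forward - backward) > tol:
                return False
    return True

# Hypothetical generator: symmetric off-diagonal rates, diagonal chosen so
# that every row sums to zero (reversible w.r.t. the uniform distribution).
R = np.array([[0.0, 1.0, 0.4],
              [1.0, 0.0, 2.0],
              [0.4, 2.0, 0.0]])
Q = R - np.diag(R.sum(axis=1))
print(kolmogorov_criterion_rates(Q))   # True
```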

References

  1. Kelly, Frank P. (1979). Reversibility and Stochastic Networks. Chichester: Wiley. pp. 21–25.