---
title: Discrete Markov Chains
date: 2023-07-12 18:00:00
tags: Markov Chains
categories: Reading Group
mathjax: true
---
# Markov Chains: Definitions and Representations

A stochastic process $X = \{ X(t): t\in T\}$ is a collection of random variables.

There are two elements:

- Time $t$:
  * discrete time ($T$ is a countably infinite set; in this case we speak of a 'Markov chain')
  * continuous time (in this case we speak of a 'Markov process')
- Space $\Omega$:
  * discrete space ($X_{t}$ takes values in a countable set)
  * continuous space.

A Markov chain is a **discrete-time** process for which the future behaviour, given the past and the present, depends only on the present and not on the past.

A Markov process is the **continuous-time** version of a Markov chain.
>Definition 1. [Markov chain] A discrete-time stochastic process $X_0, X_1, X_2, \dots$ is a Markov chain if
$$
P(X_{t} = a_t | X_{t-1} = a_{t-1}, X_{t-2} = a_{t-2}, ..., X_0 = a_0) = P(X_{t} = a_t | X_{t-1} = a_{t-1}) = P_{a_{t-1}, a_{t}}
$$

Remark 1: This is a time-homogeneous Markov chain: for all $t$ and all $a_{t-1}, a_{t} \in \Omega$, the transition probability $P_{a_{t-1}, a_{t}}$ is the same.

Remark 2: The chain in DDPM is not time-homogeneous, as the transition probability at time $t$ depends on $t$ (it is produced by a network that takes $t$ as input).

The state $X_{t}$ depends on the previous state $X_{t-1}$ but is independent of the particular history $X_{t-2}, X_{t-3}, \dots$. This is called the **Markov property** or **memoryless property**.

The Markov property does not imply that $X_{t}$ is independent of the random variables $X_{0}$, $X_{1}$, ..., $X_{t-2}$; it just implies that **any dependency of $X_{t}$ on the past is captured in the value of $X_{t-1}$**.
The Markov chain is **uniquely** defined by the one-step transition probability matrix $P$:
$$
P =
\begin{pmatrix}
P_{0,0} & P_{0, 1} & \cdots & P_{0, j} & \cdots\\
\vdots & \vdots & \ddots & \vdots& \vdots\\
P_{i,0} & P_{i, 1} & \cdots & P_{i, j} & \cdots\\
\vdots & \vdots & \ddots & \vdots& \vdots\\
\end{pmatrix}
$$
where $P_{i,j}$ is the probability of a transition from state $i$ to state $j$: $P_{i,j} = P(X_{t} = j| X_{t-1} = i)$, $i,j \in \Omega$. For all $i$, $\sum_{j \geq 0} P_{i,j} = 1$.
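As an illustrative sketch (the 3-state matrix and the helper below are example choices, not part of the definitions above), a finite transition matrix can be stored as an array whose rows sum to 1, and the chain can be simulated by repeatedly sampling the next state from the row of the current state:

```python
import numpy as np

# Example 3-state transition matrix; row i holds P(X_t = j | X_{t-1} = i).
P = np.array([
    [0.25, 0.75, 0.00],
    [0.25, 0.00, 0.75],
    [0.00, 0.25, 0.75],
])
assert np.allclose(P.sum(axis=1), 1.0)  # every row must sum to 1

rng = np.random.default_rng(0)

def simulate(P, x0, steps):
    """Simulate `steps` transitions of the chain with transition matrix P, starting from x0."""
    path = [x0]
    for _ in range(steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

print(simulate(P, x0=0, steps=10))
```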
# Classification of States
For simplicity, we assume that the state space $\Omega$ is finite.
## Communicating class

>Definition 2. [Communicating class] A state $j$ is reachable from state $i$ if there exists a positive integer $n$ such that $P_{i,j}^{(n)} > 0$. We write $i \rightarrow j$. If $j$ is reachable from $i$, and $i$ is reachable from $j$, then the states $i$ and $j$ are said to **communicate**, denoted by $i \leftrightarrow j$. A communicating class $C$ is a **maximal** set of states that communicate with each other. **No state in $C$ communicates with any state not in $C$.**

## Irreducible

>Definition 3. A Markov chain is **irreducible** if all states belong to **one** communicating class.

This means that **any state can be reached from any other state**: for all $i, j \in \Omega$, there exists an $n$ such that $P_{i,j}^{(n)} > 0$.

>Lemma 1. A finite Markov chain is irreducible if and only if its graph representation is a strongly connected graph.

### Transient vs Recurrent states
Let $r_{i,j}^{t}$ denote the probability that, starting from state $i$, the **first** transition to state $j$ occurs at time $t$. That is,
$$
r_{i,j}^{t} = P(X_{t} = j, X_{s} \neq j \ \forall\, 1 \leq s \leq t-1 \mid X_{0} = i)
$$

> Definition 4. A state $i$ is **recurrent** if $\sum_{t \geq 1} r_{i,i}^{t} = 1$ and **transient** if $\sum_{t \geq 1} r_{i,i}^{t} < 1$. A Markov chain is recurrent if every state in the chain is recurrent.

- If state $i$ is recurrent then, once the chain visits that state, it will (with probability 1) eventually return to that state. Hence the chain will visit state $i$ over and over again, **infinitely** often.

- A transient state has the property that a Markov chain starting at this state returns to this state only **finitely often**, with probability 1.

- If one state in a communicating class is transient (respectively, recurrent), then all states in that class are transient (respectively, recurrent).

>Definition 5. An irreducible Markov chain is called recurrent if at least one (equivalently, every) state in this chain is recurrent. An irreducible Markov chain is called transient if at least one (equivalently, every) state in this chain is transient.
Let $\mu_{i} = \sum_{t \geq 1} t \cdot r_{i,i}^{t}$ denote the expected time to return to state $i$ when starting at state $i$.

>Definition 6. A recurrent state $i$ is **positive recurrent** if $\mu_{i} < \infty$ and **null recurrent** if $\mu_{i} = \infty$.
Here we give an example of a Markov chain that has null recurrent states. Consider the following Markov chain whose states are the positive integers: from state $j$, the chain moves to state $j+1$ with probability $\frac{j}{j+1}$ and returns to state 1 with probability $\frac{1}{j+1}$.

![Fig. 1. An example of a Markov chain that has null recurrent states](./Markov-Chains/image.png)

Starting at state 1, the probability of not having returned to state 1 within the first $t$ steps is
$$
\prod_{j=1}^{t} \frac{j}{j+1} = \frac{1}{t+1}.
$$
Hence the probability of never returning to state 1 from state 1 is 0, and state 1 is recurrent. The probability that the first return to state 1 occurs at time $t$ is
$$
r_{1,1}^{t} = \frac{1}{t} \cdot \frac{1}{t+1} = \frac{1}{t(t+1)}.
$$
The expected number of steps until the first return to state 1 when starting at state 1 is
$$
\mu_{1} = \sum_{t = 1}^{\infty} t \cdot r_{1,1}^{t} = \sum_{t = 1}^{\infty} \frac{1}{t+1} = \infty.
$$
Thus state 1 is recurrent, but only null recurrent.
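A small simulation sketch of this chain (the sample size and helper name are arbitrary choices) shows the empirical distribution of the first return time matching $r_{1,1}^{t} = \frac{1}{t(t+1)}$, while the sample mean of the return time does not stabilise, reflecting $\mu_1 = \infty$:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_return_time_to_1():
    """Run the chain on the positive integers (j -> j+1 w.p. j/(j+1), else back to 1)
    started at state 1, and return the time of the first return to state 1."""
    state, t = 1, 0
    while True:
        t += 1
        if rng.random() < state / (state + 1):
            state += 1          # move one state forward
        else:
            return t            # the chain jumped back to state 1 at time t

samples = np.array([first_return_time_to_1() for _ in range(100_000)])
for t in (1, 2, 3, 5, 10):
    print(t, np.mean(samples == t), 1 / (t * (t + 1)))  # empirical vs. r_{1,1}^t
```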
>Lemma 2. In a finite Markov chain:
>1. at least one state is recurrent; and
>2. all recurrent states are positive recurrent.

Thus, all states of a finite, irreducible Markov chain are positive recurrent.
### Periodic vs Aperiodic states
>Definition 7. A state $j$ in a discrete time Markov chain is **periodic** if there exists an integer $k>1$ such that $P(X_{t+s}= j | X_t = j) = 0$ unless $s$ is divisible by $k$. A discrete time Markov chain is periodic if any state in the chain is periodic. A state or chain that is not periodic is **aperiodic**.

In other words, if state $j$ is periodic with period $k$, then a return to $j$ is possible only at times $s = k, 2k, 3k, \dots$; the return probability $P(X_{t+s}= j | X_t = j)$ is 0 for every $s$ that is not a multiple of $k$.

**NB: $k > 1$.**

### Ergodic
>Definition 8. An **aperiodic**, **positive recurrent** state is an **ergodic** state. A Markov chain is ergodic if all its states are ergodic.

>Corollary 1. Any finite, irreducible, and aperiodic Markov chain is an ergodic chain.
### Stationary distribution

Consider the two-state "broken printer" Markov chain:

![Transition diagram for the two-state broken printer chain](./Markov-Chains/2023-07-22-11-00-52.png)

There are two states (0 and 1) in this Markov chain: the chain moves from state 0 to state 1 with probability $\alpha$ and from state 1 to state 0 with probability $\beta$. Assume that the initial distribution is
$$
P(X_0 = 0) = \frac{\beta}{\alpha+\beta}, \qquad P(X_0 = 1) = \frac{\alpha}{\alpha+\beta}.
$$
Then, according to the transition probability matrix $P$, after one step the distribution is
$$
\begin{align*}
P(X_1 = 0) &= P(X_0 = 0)P(X_1 = 0 | X_0 = 0) + P(X_0 = 1)P(X_1 = 0 | X_0 = 1) \\
&= \frac{\beta}{\alpha+\beta} \cdot (1-\alpha) + \frac{\alpha}{\alpha+\beta} \cdot \beta = \frac{\beta}{\alpha+\beta}, \\
P(X_1 = 1) &= P(X_0 = 0)P(X_1 = 1 | X_0 = 0) + P(X_0 = 1)P(X_1 = 1 | X_0 = 1) \\
&= \frac{\beta}{\alpha+\beta} \cdot \alpha + \frac{\alpha}{\alpha+\beta} \cdot (1-\beta) = \frac{\alpha}{\alpha+\beta}.
\end{align*}
$$
The distribution of $X_1$ is thus the same as the initial distribution. Similarly, one can show that the distribution of $X_t$ is the same as the initial distribution for any $t$. Here, $\pi = (\frac{\beta}{\alpha+\beta}, \frac{\alpha}{\alpha+\beta})$ is called a **stationary distribution**.
>Definition 9. A probability distribution $\pi = (\pi_i)$ with $\sum_{i \in \Omega} \pi_i = 1$ (a **row vector**) on the state space $\Omega$ is called a **stationary distribution** (or an equilibrium distribution) for the Markov chain with transition probability matrix $P$ if $\pi = \pi P$, or equivalently, $\pi_j = \sum_{i \in \Omega}\pi_i P_{i,j}$ for all $j \in \Omega$.

- One interpretation of the stationary distribution: if we started off a **thousand** Markov chains, choosing each starting position to be state $i$ with probability $\pi_i$, then (roughly) **$1000 \pi_j$** of them would be in state $j$ at any time in the future – but not necessarily the same ones each time.

- If a chain ever reaches a stationary distribution then it maintains that distribution for all future time, and thus a stationary distribution represents a steady state or an equilibrium in the chain's behavior.
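For a quick numerical check of Definition 9 on the broken-printer chain, one can plug in concrete (arbitrarily chosen) values of $\alpha$ and $\beta$ and verify $\pi = \pi P$:

```python
import numpy as np

alpha, beta = 0.3, 0.6          # arbitrary example values in (0, 1)
P = np.array([
    [1 - alpha, alpha],         # transitions out of state 0
    [beta, 1 - beta],           # transitions out of state 1
])
pi = np.array([beta, alpha]) / (alpha + beta)   # candidate stationary distribution

print(pi)                       # [0.6667, 0.3333]
print(pi @ P)                   # the same vector: pi P = pi
assert np.allclose(pi @ P, pi)
```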
#### Finding a stationary distribution
Consider the following no-claims discount Markov chain with state space $\Omega = \{1,2,3\}$ and transition matrix
$$
P =
\begin{pmatrix}
\frac{1}{4} & \frac{3}{4} & 0\\
\frac{1}{4} & 0 & \frac{3}{4}\\
0 & \frac{1}{4} & \frac{3}{4}
\end{pmatrix}
$$
- Step 1:
Assume $\pi = (\pi_1, \pi_2, \pi_3)$ is a stationary distribution. According to Definition 9, we need to solve the following equations:
$$
\begin{align*}
\pi_1 &= \frac{1}{4}\pi_1 + \frac{1}{4}\pi_2, \\
\pi_2 &= \frac{3}{4}\pi_1 + \frac{1}{4}\pi_3, \\
\pi_3 &= \frac{3}{4}\pi_2 + \frac{3}{4}\pi_3.
\end{align*}
$$
Adding the normalising condition $\pi_1 + \pi_2 + \pi_3 = 1$, we get four equations in three unknowns.
- Step 2:
Choose one of the parameters, say $\pi_1$, and solve for the other two parameters in terms of $\pi_1$. We get
$$
\pi_1 = \frac{1}{4}\pi_1 + \frac{1}{4}\pi_2 \Rightarrow \pi_2 = 3\pi_1, \qquad \pi_3 = 3\pi_2 = 9\pi_1.
$$
- Step 3:
Combining with the normalising condition, we get
$$
\pi_1 + 3\pi_1 + 9\pi_1 = 1 \Rightarrow \pi_1 = \frac{1}{13}, \qquad \pi_2 = \frac{3}{13}, \qquad \pi_3 = \frac{9}{13}.
$$
Finally, we get the stationary distribution $\pi = (\frac{1}{13}, \frac{3}{13}, \frac{9}{13})$.
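The same answer can be obtained numerically; as a sketch, one can solve the stationarity equations $\pi(P - I) = 0$ together with the normalising condition as a least-squares problem:

```python
import numpy as np

P = np.array([
    [0.25, 0.75, 0.00],
    [0.25, 0.00, 0.75],
    [0.00, 0.25, 0.75],
])
n = P.shape[0]

# Stack the stationarity equations (P^T - I) pi = 0 with the normalisation sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                        # [0.0769, 0.2308, 0.6923] = (1/13, 3/13, 9/13)
print(np.allclose(pi @ P, pi))   # True
```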
#### Existence and uniqueness
Given a Markov chain, how can we know whether it has a stationary distribution? If it does, is it unique? In this part, we answer these questions.
Some notations:
- Hitting time of state $j$: $H_{j} = \min \{ t \in \{0, 1, 2,...\}: X_t = j\}$. Note that here we include time $t = 0$.

- Hitting probability of state $j$ starting from state $i$: $h_{i,j} = P(X_t = j \ \text{for some} \ t \geq 0 | X_0 = i) = P(H_{j} < \infty | X_0 = i)$; for $i \ne j$ this equals $\sum_{t \geq 1} r_{i,j}^{t}$, while $h_{i,i} = 1$.

Note that this is different from $r_{i,j}^{t}$, which denotes the probability that, starting from state $i$, the **first** transition to state $j$ occurs **at time $t$**.

We also have
$$
h_{i,j} =
\begin{cases}
\sum_{k \in \Omega}P_{i,k}h_{k,j} & \text{if} \quad j \ne i, \\
1 & \text{if} \quad j = i.
\end{cases}
$$
- Expected hitting time: $\eta_{i,j} = E(H_{j} | X_0 = i)$, the expected time until we hit state $j$ starting from state $i$; for $i \ne j$ this equals $\sum_{t \geq 1} t \cdot r_{i,j}^{t}$.
We also have
$$
\eta_{i,j} =
\begin{cases}
1 + \sum_{k \in \Omega}P_{i,k}\eta_{k,j} & \text{if} \quad j \ne i, \\
0 & \text{if} \quad j = i.
\end{cases}
$$
(In the first case we add 1 because we need to account for the first step from state $i$ to some state $k$.)

- Return time: $M_i = \min \{ t \in \{1, 2,...\}: X_t = i\}$. It is different from $H_{i}$, as we exclude time $t = 0$: it is the first time that the chain returns to state $i$ after $t = 0$.
- Return probability: $m_{i} = P(X_t = i \ \text{for some} \ t \geq 1 | X_0 = i) = P(M_i < \infty | X_0 = i) = \sum_{t \geq 1}r_{i,i}^{t}$.
- Expected return time: $\mu_{i} = E(M_i | X_0 = i) = \sum_{t \geq 1} t \cdot r_{i,i}^{t}$, the expected time until we return to state $i$ starting from state $i$.
$$
m_{i} = \sum_{j \in \Omega} P_{i,j}h_{j,i}, \qquad \mu_{i} = 1 + \sum_{j \in \Omega} P_{i,j}\eta_{j,i}.
$$
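These recursions translate directly into linear systems. As a sketch (reusing the no-claims matrix from above; the helper name is arbitrary), the expected hitting times of a target state $j$ are obtained by solving $(I - Q)\eta = \mathbf{1}$, where $Q$ is $P$ with the row and column of $j$ removed, and the expected return times then follow from $\mu_{j} = 1 + \sum_{i} P_{j,i}\eta_{i,j}$:

```python
import numpy as np

P = np.array([
    [0.25, 0.75, 0.00],
    [0.25, 0.00, 0.75],
    [0.00, 0.25, 0.75],
])

def expected_hitting_times(P, j):
    """Solve eta_{i,j} = 1 + sum_{k != j} P_{i,k} eta_{k,j} for all i != j, with eta_{j,j} = 0."""
    others = [i for i in range(len(P)) if i != j]
    Q = P[np.ix_(others, others)]                     # transitions among the non-target states
    eta_others = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    eta = np.zeros(len(P))
    eta[others] = eta_others
    return eta

for j in range(len(P)):
    eta = expected_hitting_times(P, j)
    mu = 1 + P[j] @ eta                               # expected return time to state j
    print(f"state {j}: eta = {np.round(eta, 4)}, mu = {mu:.4f}")
# mu comes out as 13, 13/3 and 13/9, i.e. 1/pi_j for the stationary distribution found above.
```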
----------------
> Theorem 1. Consider an irreducible Markov chain (**finite or infinite**).
> (1) If it is **positive recurrent**, there exists a unique stationary distribution $\pi$, given by $\pi_i = \frac{1}{\mu_{i}}$.
> (2) If it is **null recurrent** or **transient**, no stationary distribution exists.

Remark: If the chain is **finite** and irreducible, it must be positive recurrent, so it has a unique stationary distribution.

Remark: If the Markov chain is not irreducible, we can decompose the state space into several communicating classes and consider each class separately.
- If none of the classes is positive recurrent, then no stationary distribution exists.
- If exactly one of the classes is positive recurrent (and therefore closed), then there exists a unique stationary distribution, supported only on that closed class.
- If more than one of the classes is positive recurrent, then many stationary distributions exist.
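To illustrate the last case, here is a small made-up example: a chain with two closed communicating classes, for which every convex combination of the two class-wise stationary distributions is again stationary.

```python
import numpy as np

# Two closed classes: states {0, 1} and states {2, 3}; no transitions between them.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 0.3, 0.7],
])

pi_A = np.array([2/7, 5/7, 0.0, 0.0])    # stationary distribution supported on the first class
pi_B = np.array([0.0, 0.0, 3/4, 1/4])    # stationary distribution supported on the second class

for lam in (0.0, 0.3, 1.0):
    pi = lam * pi_A + (1 - lam) * pi_B   # any mixture of the two is again stationary
    print(lam, np.allclose(pi @ P, pi))  # True for every lam in [0, 1]
```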
Now we give the proof of Theorem 1. We first prove that if a Markov chain is irreducible and positive recurrent, then a stationary distribution **exists**. Next, we prove that the stationary distribution is **unique**. Since the second part, concerning null recurrent or transient Markov chains, is less important and more involved, we omit it; if you are interested, you can refer to the book [Markov Chains](https://www.statslab.cam.ac.uk/~james/Markov/) by James Norris.
Proof.
(1) Suppose that $(X_0, X_1, \dots)$ is a recurrent Markov chain (positive recurrent or null recurrent). We construct a candidate stationary distribution explicitly; if the construction succeeds, a stationary distribution exists.

Fix a state $k$ and let $\nu_i$ be the expected number of visits to $i$ before the chain returns to $k$:
$$
\begin{align*}
\nu_i &= \mathbb{E}\left[\#\{\text{visits to } i \text{ before returning to } k\} \mid X_0 = k\right] \\
&= \mathbb{E}\left[\sum_{t=1}^{M_k} \mathbf{1}\{X_t = i\} \,\Big|\, X_0 = k\right] \\
&= \mathbb{E}\left[\sum_{t = 0}^{M_k - 1} \mathbf{1}\{X_t = i\} \,\Big|\, X_0 = k\right].
\end{align*}
$$
The last equality holds because $X_0 = X_{M_k} = k$: for $i \ne k$ both boundary terms are 0, and for $i = k$ both sums equal 1.
If we want to construct a stationary distribution, it must satisfy $\pi P = \pi$ and $\sum_{i \in \Omega}\pi_i = 1$.

(a) We first prove that $\nu P = \nu$.
$$
\begin{align*}
\sum_{i \in \Omega} \nu_i P_{i,j} &= \mathbb{E}\left[\sum_{i \in \Omega} \sum_{t = 0}^{M_k - 1} \mathbf{1}\{X_t = i, X_{t+1} = j\} \,\Big|\, X_0 = k\right] \\
&= \mathbb{E}\left[\sum_{t = 0}^{M_k - 1} \sum_{i \in \Omega} \mathbf{1}\{X_t = i, X_{t+1} = j\} \,\Big|\, X_0 = k\right] \\
&= \mathbb{E}\left[\sum_{t = 0}^{M_k - 1} \mathbf{1}\{X_{t+1} = j\} \,\Big|\, X_0 = k\right] \\
&= \mathbb{E}\left[\sum_{t = 1}^{M_k} \mathbf{1}\{X_{t} = j\} \,\Big|\, X_0 = k\right] \\
&= \mathbb{E}\left[\sum_{t = 0}^{M_k - 1} \mathbf{1}\{X_{t} = j\} \,\Big|\, X_0 = k\right] \\
&= \nu_j.
\end{align*}
$$
(The first equality uses the Markov property together with the fact that the event $\{t \leq M_k - 1\}$ is determined by $X_0, \dots, X_t$; the second-to-last step reuses the boundary argument above, since $X_0 = X_{M_k} = k$.)

(b) Next, we normalize $\nu$ to get a stationary distribution. We have
$$
\sum_{i \in \Omega} \nu_i = \mathbb{E}\left[\sum_{t = 0}^{M_k - 1} \sum_{i \in \Omega} \mathbf{1}\{X_t = i\} \,\Big|\, X_0 = k\right] = \mathbb{E}\left[M_k \mid X_0 = k\right] = \mu_k.
$$
Thus, when the chain is positive recurrent ($\mu_k < \infty$), we can define $\pi_i = \nu_i/\mu_k$, and $\pi = \{\pi_i, i \in \Omega\}$ is a stationary distribution.
(2) Next, we prove that if a Markov chain is irreducible and positive recurrent, then the stationary distribution is **unique** and is given by $\pi_j = \frac{1}{\mu_j}$.

Given any stationary distribution $\pi$, if we can show that $\pi_j = \frac{1}{\mu_j}$ for all $j$, then the stationary distribution is unique.

Recall the expected hitting time relation:
$$
\eta_{i,j} = 1 + \sum_{k \in \Omega}P_{i,k}\eta_{k,j}, \quad j \ne i. \qquad (eq:1)
$$
We multiply both sides of (eq:1) by $\pi_i$ and sum over $i$ ($i \ne j$) to get
$$
\sum_{i \ne j} \pi_i \eta_{i,j} = \sum_{i \ne j} \pi_i + \sum_{i \ne j} \sum_{k \in \Omega} \pi_i P_{i,k}\eta_{k,j}.
$$
Since $\eta_{j,j} = 0$, we can rewrite the above equation as
$$
\sum_{i \in \Omega} \pi_i \eta_{i,j} = \sum_{i \ne j} \pi_i + \sum_{i \ne j} \sum_{k \in \Omega} \pi_i P_{i,k}\eta_{k,j}. \qquad (eq:2)
$$
(The right-hand side is still missing the $i = j$ term; we supply it next using the expected return time, since we are aiming for $\pi_j = 1/\mu_j$.)
Recall the expected return time relation:
$$ \mu_{j} = 1 + \sum_{i \in \Omega} P_{j,i}\eta_{i,j}. \qquad (eq:3) $$
We multiply both sides of (eq:3) by $\pi_j$ to get
$$
\pi_j \mu_{j} = \pi_j + \sum_{k \in \Omega} \pi_j P_{j,k}\eta_{k,j}. \qquad (eq:4)
$$
Adding (eq:2) and (eq:4), we get
$$
\begin{align*}
\sum_{i \in \Omega} \pi_i \eta_{i,j} + \pi_j \mu_{j} &= \sum_{i \in \Omega} \pi_i + \sum_{i \in \Omega} \sum_{k \in \Omega} \pi_i P_{i,k}\eta_{k,j} \\
&= 1 + \sum_{k \in \Omega} \sum_{i \in \Omega} \pi_i P_{i,k}\eta_{k,j} \\
&= 1 + \sum_{k \in \Omega} \pi_k \eta_{k,j} \qquad (\text{since } \sum_{i \in \Omega} \pi_i P_{i,k} = \pi_k).
\end{align*}
$$
Since the Markov chain is irreducible and positive recurrent, the sum $\sum_{k \in \Omega} \pi_k \eta_{k,j}$ is finite (immediate for a finite state space; for the infinite case see Norris). We can therefore subtract $\sum_{k \in \Omega} \pi_k \eta_{k,j} = \sum_{i \in \Omega} \pi_i \eta_{i,j}$ from both sides to get
$$
\pi_j \mu_{j} = 1,
$$
which means $\pi_j = 1/\mu_j$. The same argument applies to every $j \in \Omega$.
-------------

> Theorem 2 (Limit theorem). Consider an irreducible, aperiodic Markov chain (finite or infinite). Then $\lim\limits_{t \to \infty} P_{i,j}^{t} = \frac{1}{\mu_{j}}$. Specifically,
> (1) if the Markov chain is positive recurrent, then $\lim\limits_{t \to \infty} P_{i,j}^{t} = \pi_j = \frac{1}{\mu_{j}}$;
> (2) if the Markov chain is null recurrent or transient, then $\lim\limits_{t \to \infty} P_{i,j}^{t} = 0$ for all $j$, so there is no limiting probability distribution.

- Three conditions are needed for convergence to an equilibrium probability distribution: irreducibility, aperiodicity, and positive recurrence. In the finite case, the limiting matrix is
$$
\lim_{t \to \infty} P^{t} =
\begin{pmatrix}
\pi_1 & \pi_2 & \cdots & \pi_N\\
\pi_1 & \pi_2 & \cdots & \pi_N\\
\vdots & \vdots & \ddots & \vdots\\
\pi_1 & \pi_2 & \cdots & \pi_N\\
\end{pmatrix},
$$
where each row is identical.
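As an illustrative check (reusing the no-claims matrix), raising $P$ to a moderately large power already gives a matrix whose rows are all approximately $(\frac{1}{13}, \frac{3}{13}, \frac{9}{13})$:

```python
import numpy as np

P = np.array([
    [0.25, 0.75, 0.00],
    [0.25, 0.00, 0.75],
    [0.00, 0.25, 0.75],
])

Pt = np.linalg.matrix_power(P, 50)           # 50-step transition probabilities
print(Pt)                                    # every row is approximately [1/13, 3/13, 9/13]

pi = np.array([1/13, 3/13, 9/13])
print(np.allclose(Pt, np.tile(pi, (3, 1))))  # True: the chain forgets its starting state
```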
----
Define $V_{i,j}^{t} = |\{ n < t : X_n = j\}|$ given $X_0 = i$; that is, $V_{i,j}^{t}$ is the number of visits to state $j$ before time $t$ starting from state $i$. Then we can interpret $V_{i,j}^{t}/t$ as the proportion of time up to time $t$ spent in state $j$.

> Theorem 3 [Ergodic theorem]. Consider an irreducible Markov chain. Then $\lim\limits_{t \to \infty} V_{i,j}^{t}/t = \frac{1}{\mu_{j}}$ **almost surely**. Specifically,
> (1) if the Markov chain is positive recurrent, then $\lim\limits_{t \to \infty} V_{i,j}^{t}/t = \pi_j = \frac{1}{\mu_{j}}$ **almost surely**;
> (2) if the Markov chain is null recurrent or transient, then $V_{i,j}^{t}/t \to 0$ **almost surely** for all $j$.

Here **almost surely** means that the event of convergence has probability 1.
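A small simulation sketch of Theorem 3 (again using the no-claims matrix; the trajectory length is arbitrary): the fraction of time a single long trajectory spends in each state approaches the stationary distribution.

```python
import numpy as np

P = np.array([
    [0.25, 0.75, 0.00],
    [0.25, 0.00, 0.75],
    [0.00, 0.25, 0.75],
])
rng = np.random.default_rng(0)

T = 200_000
visits = np.zeros(3)
state = 0
for _ in range(T):
    visits[state] += 1                    # accumulate V_{0,j}^t
    state = rng.choice(3, p=P[state])     # take one step of the chain

print(visits / T)   # close to [1/13, 3/13, 9/13] ~ [0.0769, 0.2308, 0.6923]
```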
----------------

> Theorem 4 [Detailed balance condition]. Consider a finite, irreducible, and ergodic Markov chain with transition matrix $P$. If there are nonnegative numbers $\bar{\pi} = (\pi_0, \pi_1, ..., \pi_n)$ such that $\sum_{i=0}^{n} \pi_i = 1$ and if, for any pair of states $i, j$,
> $$
\pi_i P_{i,j} = \pi_{j} P_{j,i},
$$
>then $\bar{\pi}$ is the stationary distribution corresponding to $P$.

Proof.
$$
\sum_{i} \pi_i P_{i,j} = \sum_{i}\pi_{j} P_{j,i} = \pi_{j}.
$$
Thus, $\bar{\pi} = \bar{\pi}P$. Since this is a finite, irreducible, and ergodic Markov chain, $\bar{\pi}$ must be the unique stationary distribution of the Markov chain.

Remark: The detailed balance condition in Theorem 4 is sufficient but not necessary.
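Detailed balance is easy to check numerically; as a sketch, the no-claims chain from above happens to satisfy it (it is a birth-death chain). A chain can also have a stationary distribution without satisfying detailed balance, for example a 3-state cycle that moves clockwise with probability $p \ne \frac{1}{2}$, which is why the condition is sufficient but not necessary.

```python
import numpy as np

P = np.array([
    [0.25, 0.75, 0.00],
    [0.25, 0.00, 0.75],
    [0.00, 0.25, 0.75],
])
pi = np.array([1/13, 3/13, 9/13])

# Detailed balance: pi_i P_{i,j} == pi_j P_{j,i} for every pair (i, j).
flows = pi[:, None] * P                  # flows[i, j] = pi_i * P_{i,j}
print(np.allclose(flows, flows.T))       # True: this chain is reversible
print(np.allclose(pi @ P, pi))           # and pi is stationary, as Theorem 4 guarantees

# Counterexample: a biased 3-state cycle has the uniform stationary distribution
# but does not satisfy detailed balance.
p = 0.8
C = np.array([[0, p, 1 - p], [1 - p, 0, p], [p, 1 - p, 0]])
u = np.ones(3) / 3
print(np.allclose(u @ C, u))                            # True: uniform is stationary
print(np.allclose(u[:, None] * C, (u[:, None] * C).T))  # False: detailed balance fails
```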
## Reference

- Mitzenmacher, M., & Upfal, E. (2005). Probability and Computing. Cambridge University Press.
- [Recurrence and transience](https://mpaldridge.github.io/math2750/S09-recurrence-transience.html)
- [Class structure](https://mpaldridge.github.io/math2750/S07-classes.html)
- [Stationary distributions](https://mpaldridge.github.io/math2750/S10-stationary-distributions.html)
- Stirzaker, D. [Elementary Probability](https://www.ctanujit.org/uploads/2/5/3/9/25393293/_elementary_probability.pdf)