## Periodic Chains


Among irreducible Markov Chains with finite state space, aperiodic chains have all the beautiful properties that we have studied in the past several sections. But many interesting chains are periodic. In this section we will note which of our familiar results hold and which don't, and how we can use the familiar results in the periodic case.

Let $X_0, X_1, X_2, \ldots $ be an irreducible Markov Chain with a finite state space, and suppose the chain has period $d > 1$. Remember that because the chain is irreducible, all the states have the same period, so you can find $d$ by just looking at the return times of a single state.

Then the following facts still hold, just as in the aperiodic case. We won't prove them but you can check some of them by simulation.

There is a unique strictly positive probability distribution $\pi$ that solves the balance equations.

If the chain starts with initial distribution $\pi$, then each $X_n$ has distribution $\pi$; that is, $P_n = \pi$ for all $n$. So $\pi$ is a steady state distribution.

For any state $i$, the expected return time to $i$ is $E(W_i \mid X_0 = i) = \frac{1}{\pi(i)}$.

For any two states $i$ and $j$, the expected number of visits to $j$ in an $i$-block is $\mu(i, j) = \frac{\pi(j)}{\pi(i)}$.
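As suggested above, these facts can be checked by simulation. Here is a sketch (not from the text) using a small periodic chain: the random walk on a 4-cycle, which is irreducible with period 2. We solve the balance equations numerically and check that the long run average return time to a state is close to $1/\pi(\text{state})$.

```python
import numpy as np

# Random walk on a 4-cycle: from each state, step to either
# neighbor with chance 1/2. Irreducible, period 2.
P = np.zeros((4, 4))
for i in range(4):
    P[i, (i - 1) % 4] = 0.5
    P[i, (i + 1) % 4] = 0.5

# Solve the balance equations pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(4), np.ones((1, 4))])
b = np.array([0., 0., 0., 0., 1.])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)  # uniform: all entries 0.25

# Simulate; the average time between returns to state 0
# should be close to 1/pi(0) = 4.
rng = np.random.default_rng(0)
state, returns, last_return = 0, 0, 0
for t in range(1, 100001):
    state = (state + rng.choice([-1, 1])) % 4
    if state == 0:
        returns += 1
        last_return = t
print(last_return / returns)  # close to 4
```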

What doesn't hold is the simple statement about convergence to stationarity that we were able to make in the aperiodic case. In the periodic case we have to be careful about taking limits along appropriate subsequences, which we will avoid.
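To see the failure of convergence concretely, here is a minimal sketch (not from the text) using the simplest periodic chain: two states that swap deterministically. The powers $P^n$ never converge, alternating between $P$ and the identity, but the averages of $P, P^2, \ldots, P^n$ do converge to a matrix with $\pi = (1/2, 1/2)$ in every row.

```python
import numpy as np

# Deterministic flip between two states: period 2.
P = np.array([[0., 1.],
              [1., 0.]])

# P^n alternates and never settles down.
print(np.linalg.matrix_power(P, 10))  # identity matrix
print(np.linalg.matrix_power(P, 11))  # P again

# But the running average of P, P^2, ..., P^n converges
# to the matrix with all rows equal to pi = (1/2, 1/2).
n = 1000
avg = sum(np.linalg.matrix_power(P, k) for k in range(1, n + 1)) / n
print(avg)  # every entry 0.5
```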

Here is an example to show that you can find expected hitting times and occupation times for periodic chains just as you did for aperiodic chains.

### Random Walk with Reflection

Consider a gambler who gambles on tosses of a fair coin: every time the coin lands heads she gains a dollar, and every time it lands tails she loses a dollar. When her fortune reaches $\$0$, a benefactor hands her $\$1$ so she can keep playing. When her fortune reaches $\$N$ for some fixed positive integer $N$, she has to give $\$1$ to charity but she can keep playing with the remaining $\$(N-1)$.

Let $X_n$ represent her fortune at time $n$. Then $X_0, X_1, X_2, \ldots $ is a Markov Chain whose transition probabilities are given by:

$P(0, 1) = 1 = P(N, N-1)$

For $1 \le i \le N-1$, $P(i, i-1) = 1/2 = P(i, i+1)$

We say that 0 and $N$ are *reflecting barriers*.

This chain is clearly irreducible, and it has period 2. To check the period, let's look at returns to the state 1. By the way the transitions have been defined, if the chain starts at 1 then it can return to 1 only at even times.
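The claim about even return times can be checked numerically. Here is a sketch (not from the text) that builds the transition matrix directly in NumPy for $N = 5$ and confirms that the $(1, 1)$ entry of $P^n$ is 0 exactly when $n$ is odd.

```python
import numpy as np

# Transition matrix of the reflecting walk with N = 5.
N = 5
P = np.zeros((N + 1, N + 1))
P[0, 1] = 1            # reflection at 0
P[N, N - 1] = 1        # reflection at N
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

# P^n(1, 1) is the chance of being back at 1 at time n,
# starting at 1. It is 0 for every odd n.
for n in range(1, 9):
    print(n, np.linalg.matrix_power(P, n)[1, 1])
```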

Here is the transition matrix in the case $N=5$.

```
# Assumes numpy, datascience, and prob140 are imported,
# as in the rest of this text
import numpy as np
from datascience import *
from prob140 import *

N = 5
s = np.arange(N+1)

def trans(i, j):
    if i == 0:                # reflection at 0
        if j == 1:
            return 1
        else:
            return 0
    elif i == N:              # reflection at N
        if j == N-1:
            return 1
        else:
            return 0
    elif abs(i - j) == 1:     # fair-coin step in the interior
        return 0.5
    else:
        return 0

refl = Table().states(s).transition_function(trans).toMarkovChain()
```

```
refl
```

The steady state distribution has a simple form:

```
refl.steady_state()
```

The steady state distribution of this reflecting random walk is straightforward to derive for any $N$. The balance equations are:

\begin{align*}
\pi(0) &= 0.5\pi(1) \\
\pi(1) &= \pi(0) + 0.5\pi(2) \\
\pi(2) &= 0.5\pi(1) + 0.5\pi(3)
\end{align*}

and so on, ending with

\begin{align*}
\pi(N-1) &= 0.5\pi(N-2) + \pi(N) \\
\pi(N) &= 0.5\pi(N-1)
\end{align*}

By symmetry,

- $\pi(0) = \pi(N)$
- $\pi(1) = \pi(N-1)$
- $\pi(2) = \pi(3) = \cdots = \pi(N-2)$

Put this together with the first two balance equations to see that the equations are satisfied by

$$
\big( \pi(0), ~ 2\pi(0), ~ 2\pi(0), ~ \ldots, ~ 2\pi(0), ~ \pi(0) \big)
$$

That's a total of $[2 + 2(N-1)]\pi(0) = 2N\pi(0)$, since $N-1$ of the $N+1$ states are interior states.

So the probability distribution that solves the balance equations has $N+1$ terms corresponding to the states 0 through $N$ and is given by

$$
\pi ~ = ~ \big( \frac{1}{2N}, ~ \frac{1}{N}, ~ \frac{1}{N}, ~ \ldots, ~ \frac{1}{N}, ~ \frac{1}{2N} \big)
$$

Thus for example:

- The long run expected proportion of time the chain spends at each of the reflecting barriers is $\frac{1}{2N}$, which is half the long run expected proportion of time spent at any of the other states.
- Given that the chain starts at 0, the expected time till it returns to 0 is $2N$.
- Given that the chain starts at 0, the expected number of times it visits 1 before returning to 0 is $\frac{1/N}{1/(2N)} = 2$. This is the same as the expected number of times the chain visits any of the states 1 through $N-1$ before returning to 0.
- Given that the chain starts at 0, the expected number of times it visits $N$ before returning to 0 is 1.
- Given that the chain starts at 1, the expected time till it returns to 1 is $N$.
- Given that the chain starts at 1, the expected number of times it visits 2 before returning to 1 is 1, as is the expected number of visits to any of the states 2 through $N-1$ before returning to 1.
- Given that the chain starts at 1, the expected number of times it visits 0 before returning to 1 is 1/2, as is the expected number of visits to state $N$ before returning to 1.
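A couple of these claims can be checked by simulation. Here is a sketch (not from the text) that runs many 0-blocks of the reflecting walk with $N = 5$ and checks that the average return time to 0 is close to $2N = 10$ and the average number of visits to 1 per block is close to 2.

```python
import numpy as np

# Simulate 0-blocks of the reflecting walk with N = 5.
N = 5
rng = np.random.default_rng(0)

blocks = 20000
return_times = []
visits_to_1 = []
for _ in range(blocks):
    x, t, v = 0, 0, 0
    while True:
        if x == 0:
            x = 1              # reflection at 0
        elif x == N:
            x = N - 1          # reflection at N
        else:
            x += rng.choice([-1, 1])   # fair-coin step
        t += 1
        if x == 1:
            v += 1
        if x == 0:             # block ends at the return to 0
            break
    return_times.append(t)
    visits_to_1.append(v)

print(np.mean(return_times))  # close to 2N = 10
print(np.mean(visits_to_1))   # close to 2
```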
