## Dependence and Independence

Conditional distributions help us formalize our intuitive ideas about whether two random variables are independent of each other. Let $X$ and $Y$ be two random variables, and suppose we are given the value of $X$. Does that change our opinion about $Y$? If the answer is yes, then we will say that $X$ and $Y$ are dependent. If the answer is no, no matter what the given value of $X$ is, then we will say that $X$ and $Y$ are independent.

Let's start with some examples and then move to precise definitions and results.

### Dependence

Here is the joint distribution of two random variables $X$ and $Y$. From this, what can we say about whether $X$ and $Y$ are dependent or independent?

```
dist1
```

You can see at once that if $X = 3$ then $Y$ can only be 0, whereas if $X = 2$ then $Y$ can be either 1 or 2. Knowing the value of $X$ changes the distribution of $Y$. That's dependence.

Here are the conditional distributions of $Y$ given each of the different values of $X$. You can see that all the distributions are different, and different also from the marginal of $Y$.

```
dist1.conditional_dist('Y', 'X')
```
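The `dist1` used above is defined in a setup cell that is not shown. If you want to experiment without it, here is a hypothetical joint distribution with the same qualitative feature described in the text (if $X = 3$ then $Y$ must be 0; if $X = 2$ then $Y$ is 1 or 2), with the conditional distributions computed by hand in plain Python. The probabilities are our own illustrative choices.

```python
# Hypothetical joint distribution (our choice of numbers), consistent with the text:
# if X = 3 then Y can only be 0; if X = 2 then Y can be 1 or 2.
joint = {(3, 0): 0.4, (2, 1): 0.3, (2, 2): 0.3}

def conditional_dist_Y_given_X(joint, x):
    """P(Y = y | X = x) for each y: divide the row by the marginal P(X = x)."""
    p_x = sum(p for (i, j), p in joint.items() if i == x)
    return {j: p / p_x for (i, j), p in joint.items() if i == x}

print(conditional_dist_Y_given_X(joint, 3))   # {0: 1.0}
print(conditional_dist_Y_given_X(joint, 2))   # {1: 0.5, 2: 0.5}
```

The two conditional distributions are different, so this hypothetical $X$ and $Y$ are dependent, just as in the example above.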

Here is an example in which you can't quickly determine dependence or independence by just looking at the possible values.

```
dist2
```

But you can tell by looking at the conditional distributions of $Y$ given $X$. They are different.

```
dist2.conditional_dist('Y', 'X')
```

It follows (and you should try to prove this) that at least some of the conditional distributions of $X$ given the different values of $Y$ will also be different from each other and from the marginal of $X$.

Notice that not all the conditional distributions are different. The conditional distribution of $X$ given $Y=5$ is the same as the conditional distribution of $X$ given $Y=6$. But given $Y=7$, the conditional distribution changes. $X$ and $Y$ are dependent.

```
dist2.conditional_dist('X', 'Y')
```
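As with `dist1`, the actual `dist2` is defined in a setup cell not shown here. A self-contained sketch of the same phenomenon: a hypothetical joint distribution (our own numbers) built so that the rows for $Y = 5$ and $Y = 6$ are proportional, which makes the conditional distribution of $X$ the same given either value, while the $Y = 7$ row is not proportional to them.

```python
from fractions import Fraction as F

# Hypothetical joint (our choice of numbers), with Y in {5, 6, 7}:
# the Y = 5 and Y = 6 rows are proportional to each other, the Y = 7 row is not.
joint = {
    (1, 5): F(4, 100),  (2, 5): F(6, 100),  (3, 5): F(10, 100),
    (1, 6): F(6, 100),  (2, 6): F(9, 100),  (3, 6): F(15, 100),
    (1, 7): F(25, 100), (2, 7): F(15, 100), (3, 7): F(10, 100),
}

def cond_X_given_Y(y):
    """P(X = x | Y = y) for each x."""
    p_y = sum(p for (x, yy), p in joint.items() if yy == y)
    return {x: p / p_y for (x, yy), p in joint.items() if yy == y}

assert cond_X_given_Y(5) == cond_X_given_Y(6)   # same conditional distribution
assert cond_X_given_Y(5) != cond_X_given_Y(7)   # Y = 7 changes it: dependent
```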

### Conditionals, Not Marginals

Often, the nature of the randomization in an experiment helps us identify dependence without calculation. For example, suppose you make two draws at random without replacement from the set $\{1, 2, 3, 4\}$. Let $X_1$ be the number on the first draw and $X_2$ the number on the second. Then $X_1$ and $X_2$ are dependent: for example, knowing that $X_1=2$ prevents $X_2$ from being equal to 2.

Here is the joint distribution of $X_1$ and $X_2$ along with the marginals. Each of the 12 possible pairs is equally likely.

```
val_X1 = [1, 2, 3, 4]
val_X2 = [1, 2, 3, 4]

# Each ordered pair (i, j) with i != j has chance 1/12;
# pairs with i == j are impossible when drawing without replacement.
def prob_wor(i, j):
    if i != j:
        return 1/12
    else:
        return 0

t_wor = Table().values('X1', val_X1, 'X2', val_X2).probability_function(prob_wor)
dist3 = t_wor.toJoint()
```

```
dist3.both_marginals()
```

It is important to notice that both marginals are the same: $X_1$ has the uniform distribution on $\{1, 2, 3, 4\}$, and *so does $X_2$*.

In a later chapter we will study these symmetries in sampling without replacement. For now, note that you can't spot dependence by just looking at the marginals. Dependence is visible in the conditionals. Given $X_1$, you can see that $X_2$ is equally likely to be any of the other three values. In the other direction, given $X_2$, you can see that $X_1$ is equally likely to be any of the other three values.

```
dist3.conditional_dist('X2', 'X1')
```

```
dist3.conditional_dist('X1', 'X2')
```
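The prob140 calls above can be checked by hand. A self-contained sketch of the same computation, using exact fractions instead of the `Table` machinery:

```python
from fractions import Fraction

vals = [1, 2, 3, 4]
# Each of the 12 ordered pairs (i, j) with i != j has chance 1/12.
joint = {(i, j): Fraction(1, 12) for i in vals for j in vals if i != j}

for i in vals:
    # Marginal P(X1 = i): three pairs of chance 1/12 each, so 1/4.
    p_i = sum(p for (a, b), p in joint.items() if a == i)
    assert p_i == Fraction(1, 4)
    # Conditional distribution of X2 given X1 = i: uniform on the other three values.
    cond = {b: p / p_i for (a, b), p in joint.items() if a == i}
    assert cond == {b: Fraction(1, 3) for b in vals if b != i}
```

By the symmetry of the joint distribution, swapping the roles of $X_1$ and $X_2$ gives the same result in the other direction.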

### Independence

If we had instead drawn the two numbers *with* replacement, then all 16 pairs would have been equally likely, and knowing the value of one of the variables would not affect your opinion about the other.

```
# With replacement: all 16 ordered pairs are equally likely.
def prob_wr(i, j):
    return 1/16

t_wr = Table().values('X1', val_X1, 'X2', val_X2).probability_function(prob_wr)
dist4 = t_wr.toJoint()
```

```
dist4
```

```
dist4.conditional_dist('X2', 'X1')
```

```
dist4.conditional_dist('X1', 'X2')
```

What you are seeing in these rather monotonous tables is independence. It doesn't matter what the value of $X_1$ is; conditional on that value, $X_2$ is still uniform on all four values. And no matter what $X_2$ is, conditional on that value $X_1$ is still uniform on all four values.
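The monotony has an arithmetic explanation: in the with-replacement joint distribution, every cell is the product of the corresponding marginal probabilities. A quick check in plain Python:

```python
from fractions import Fraction

vals = [1, 2, 3, 4]
# With replacement: all 16 ordered pairs equally likely.
joint = {(i, j): Fraction(1, 16) for i in vals for j in vals}

marg_X1 = {i: sum(p for (a, b), p in joint.items() if a == i) for i in vals}
marg_X2 = {j: sum(p for (a, b), p in joint.items() if b == j) for j in vals}

# Every cell is the product of its two marginals: (1/4) * (1/4) = 1/16.
assert all(joint[(i, j)] == marg_X1[i] * marg_X2[j] for i in vals for j in vals)
```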

Here is a joint distribution table in which independence becomes apparent once you condition.

```
dist5
```

```
dist5.conditional_dist('Y', 'X')
```

All the conditional distributions of $Y$ given different values of $X$ are the same, and hence are the same as the marginal of $Y$ too. That's independence.

You could have drawn the same conclusion by conditioning $X$ on $Y$:

```
dist5.conditional_dist('X', 'Y')
```

### Independence of Two Random Variables

What we have observed in examples can be turned into a formal definition of independence.

Two random variables $X$ and $Y$ are *independent* if for every value $x$ of $X$ and $y$ of $Y$,
$$
P(Y = y \mid X = x) = P(Y = y)
$$
That is, no matter what the given $x$ is, the conditional probability of any value of $Y$ is the same as if we didn't know that $X=x$.

By the multiplication rule, the definition of independence can be written in a different way. For any values $x$ of $X$ and $y$ of $Y$,

\begin{align*}
P(X = x, Y = y) &= P(X = x)P(Y = y \mid X = x) ~~~ \text{(multiplication rule)} \\
&= P(X=x)P(Y=y) ~~~~~~~~~~~~~~~~~ \text{(independence)}
\end{align*}

Independence simplifies the conditional probabilities in the multiplication rule.

### Independence of Two Events

Correspondingly, there are two equivalent definitions of the independence of two events. The first encapsulates the main idea of independence, and the second is useful for calculation.

Two events $A$ and $B$ are *independent* if $P(B \mid A) = P(B)$. Equivalently, $A$ and $B$ are independent if $P(AB) = P(A)P(B)$.
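For a concrete instance of the second form of the definition (our own small example, not from the text): roll a fair die once, and let $A$ be "the roll is even" and $B$ be "the roll is at most 2".

```python
from fractions import Fraction

# One roll of a fair die
outcomes = set(range(1, 7))
P = {k: Fraction(1, 6) for k in outcomes}

def prob(event):
    return sum(P[k] for k in event)

A = {k for k in outcomes if k % 2 == 0}   # even roll: P(A) = 1/2
B = {k for k in outcomes if k <= 2}       # at most 2: P(B) = 1/3

# A and B are independent: P(AB) = P({2}) = 1/6 = (1/2)(1/3) = P(A)P(B)
assert prob(A & B) == prob(A) * prob(B) == Fraction(1, 6)
```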

It is a fact that if $X$ and $Y$ are independent random variables, then any event determined by $X$ is independent of any event determined by $Y$. For example, if $X$ and $Y$ are independent and $x$ is a number, then $\{X=x\}$ is independent of $\{Y>x\}$. Also, any function of $X$ is independent of any function of $Y$.

You can prove these facts by partitioning and then using the definition of independence. The proofs are routine but somewhat labor intensive. You are welcome to just accept the facts if you don't want to prove them.

### "i.i.d." Random Variables

If two random variables $X$ and $Y$ are independent and identically distributed, they are called "i.i.d." That's one of the most famous acronyms in probability theory. You can think of i.i.d. random variables as draws with replacement from a population, or as the results of independent replications of the same experiment.

Suppose the distribution of $X$ is given by $$ P(X = i) = p_i, ~~~ i = 1, 2, \ldots, n $$ where $\sum_{i=1}^n p_i = 1$. Now let $X$ and $Y$ be i.i.d. What is $P(X = Y)$? We'll answer this question by using the fundamental method, now in random variable notation.

\begin{align*}
P(X = Y) &= \sum_{i=1}^n P(X = i, Y = i) ~~~ \text{(partitioning)} \\
&= \sum_{i=1}^n P(X = i)P(Y = i) ~~~ \text{(independence)} \\
&= \sum_{i=1}^n p_i \cdot p_i ~~~ \text{(identical distributions)} \\
&= \sum_{i=1}^n p_i^2
\end{align*}

The last expression is easy to calculate if you know the numerical values of all the $p_i$.
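Here is a numerical check of the formula $P(X = Y) = \sum_i p_i^2$, for an example choice of the $p_i$ (our choice, not from the text), compared against brute-force summation over the joint distribution of the i.i.d. pair:

```python
from fractions import Fraction
from itertools import product

# Example distribution (our choice): p_1, ..., p_4 summing to 1
p = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 8)]
assert sum(p) == 1

# Closed form: P(X = Y) = sum of p_i^2
formula = sum(pi ** 2 for pi in p)

# Brute force over the joint distribution of the i.i.d. pair (X, Y)
brute = sum(p[i] * p[j] for i, j in product(range(len(p)), repeat=2) if i == j)

assert formula == brute == Fraction(11, 32)
```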

In the same way,

\begin{align*}
P(Y > X) &= \sum_{i=1}^n P(X = i, Y > i) \\
&= \sum_{i=1}^n P(X = i)P(Y > i) \\
&= \sum_{i=1}^{n-1} P(X = i)P(Y > i)
\end{align*}

because $P(Y > n) = 0$. Now for each $i < n$, $P(Y > i) = P(X > i) = \sum_{j=i+1}^n p_j$. Call this sum $b_i$ for "bigger than $i$". Then

\begin{align*}
P(Y > X) &= \sum_{i=1}^{n-1} P(X = i)P(Y > i) \\
&= \sum_{i=1}^{n-1} p_ib_i
\end{align*}

This is also a straightforward calculation if you know all the $p_i$. For $n=4$ it boils down to
$$
p_1 \cdot(p_2 + p_3 + p_4) ~~ + ~~ p_2 \cdot (p_3 + p_4) ~~ + ~~ p_3 \cdot p_4
$$
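And a check of $P(Y > X) = \sum_{i<n} p_i b_i$ for an example choice of the $p_i$ (our own), verified against both brute-force enumeration and the $n=4$ expansion above:

```python
from fractions import Fraction
from itertools import product

# Example distribution (our choice), with n = 4
p = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 8)]
n = len(p)

# b_i = P(bigger than i) = p_{i+1} + ... + p_n  (lists are 0-indexed here)
b = [sum(p[i + 1:]) for i in range(n)]
formula = sum(p[i] * b[i] for i in range(n - 1))

# Brute force: sum the joint chances over all pairs with j > i
brute = sum(p[i] * p[j] for i, j in product(range(n), repeat=2) if j > i)

# The n = 4 expansion from the text
expansion = p[0]*(p[1] + p[2] + p[3]) + p[1]*(p[2] + p[3]) + p[2]*p[3]

assert formula == brute == expansion == Fraction(21, 64)
```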
