Normally distributed and uncorrelated does not imply independent

In probability theory, uncorrelatedness of two random variables does not in general imply their independence, but it is sometimes mistakenly thought to do so when the two random variables are normally distributed.[1][2] Normality of each random variable separately does not have that consequence, although joint normality (as in the multivariate normal distribution, including the bivariate normal distribution) does.

To say that the pair ( X , Y ) {\displaystyle (X,Y)} of random variables has a bivariate normal distribution means that every linear combination a X + b Y {\displaystyle aX+bY} of X {\displaystyle X} and Y {\displaystyle Y} for constant (i.e. not random) coefficients a {\displaystyle a} and b {\displaystyle b} (not both equal to zero) has a univariate normal distribution. In that case, if X {\displaystyle X} and Y {\displaystyle Y} are uncorrelated then they are independent.[3] However, it is possible for two random variables X {\displaystyle X} and Y {\displaystyle Y} to be so distributed jointly that each one alone is marginally normally distributed, and they are uncorrelated, but they are not independent; examples are given below.

Examples

A symmetric example

Two normally distributed, uncorrelated but dependent variables.
Joint range of X {\displaystyle X} and Y {\displaystyle Y} . Darker indicates higher value of the density function.

Suppose X {\displaystyle X} has a normal distribution with expected value 0 and variance 1. Let W {\displaystyle W} have the Rademacher distribution, so that W = 1 {\displaystyle W=1} or W = − 1 {\displaystyle W=-1} , each with probability 1/2, and assume W {\displaystyle W} is independent of X {\displaystyle X} . Let Y = W X {\displaystyle Y=WX} . Then X {\displaystyle X} and Y {\displaystyle Y} are uncorrelated, as can be verified by calculating their covariance. Moreover, both have the same normal distribution. And yet, X {\displaystyle X} and Y {\displaystyle Y} are not independent.[4][1][5]

To see that X {\displaystyle X} and Y {\displaystyle Y} are not independent, observe that | Y | = | X | {\displaystyle |Y|=|X|} or that Pr ( Y > 1 | − 1 / 2 < X < 1 / 2 ) = Pr ( X > 1 | − 1 / 2 < X < 1 / 2 ) = 0 {\displaystyle \operatorname {Pr} (Y>1|-1/2<X<1/2)=\operatorname {Pr} (X>1|-1/2<X<1/2)=0} .

Finally, the distribution of the simple linear combination X + Y {\displaystyle X+Y} concentrates positive probability at 0: Pr ( X + Y = 0 ) = 1 / 2 {\displaystyle \operatorname {Pr} (X+Y=0)=1/2} . Therefore, the random variable X + Y {\displaystyle X+Y} is not normally distributed, and so also X {\displaystyle X} and Y {\displaystyle Y} are not jointly normally distributed (by the definition above).[4]
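The construction above is easy to check numerically. The following sketch (using NumPy, with an arbitrary seed) simulates the pair and illustrates that X and Y are uncorrelated and identically distributed, while X + Y places probability 1/2 at 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.standard_normal(n)
w = rng.choice([-1.0, 1.0], size=n)   # Rademacher sign, independent of X
y = w * x

# Uncorrelated: Cov(X, Y) = E[W X^2] = E[W] E[X^2] = 0
print(np.corrcoef(x, y)[0, 1])        # near 0

# Same marginal distribution: Y is also standard normal
print(y.mean(), y.var())              # near 0 and 1

# Not independent: |Y| = |X| always, and X + Y = 0 exactly when W = -1
print(np.mean(x + y == 0))            # near 1/2
```

The last line is the key to non-normality of the sum: a genuinely normal random variable cannot place positive probability at a single point.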

An asymmetric example

The joint density of X {\displaystyle X} and Y {\displaystyle Y} . Darker indicates a higher value of the density.

Suppose X {\displaystyle X} has a normal distribution with expected value 0 and variance 1. Let

{\displaystyle Y=\left\{{\begin{matrix}X&{\text{if }}\left|X\right|\leq c\\-X&{\text{if }}\left|X\right|>c\end{matrix}}\right.}
where c {\displaystyle c} is a positive number to be specified below. If c {\displaystyle c} is very small, then the correlation corr ( X , Y ) {\displaystyle \operatorname {corr} (X,Y)} is near − 1 {\displaystyle -1} ; if c {\displaystyle c} is very large, then corr ( X , Y ) {\displaystyle \operatorname {corr} (X,Y)} is near 1. Since the correlation is a continuous function of c {\displaystyle c} , the intermediate value theorem implies there is some particular value of c {\displaystyle c} that makes the correlation 0. That value is approximately 1.54.[2][note 1] In that case, X {\displaystyle X} and Y {\displaystyle Y} are uncorrelated, but they are clearly not independent, since X {\displaystyle X} completely determines Y {\displaystyle Y} .
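The zero-correlation value of c can be recovered numerically. The sketch below (using SciPy; the helper name cov_xy is ours) uses the closed form for the truncated second moment of a standard normal, E[X²; |X| ≤ c] = (2Φ(c) − 1) − 2cφ(c), so that Cov(X, Y) = 2 E[X²; |X| ≤ c] − 1, and finds the root:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

# Cov(X, Y) = E[X^2; |X| <= c] - E[X^2; |X| > c] = 2*E[X^2; |X| <= c] - 1,
# where the truncated second moment of a standard normal is
# E[X^2; |X| <= c] = (2*Phi(c) - 1) - 2*c*phi(c).
def cov_xy(c):
    return 2 * ((2 * norm.cdf(c) - 1) - 2 * c * norm.pdf(c)) - 1

c_star = brentq(cov_xy, 0.1, 5.0)
print(c_star)                          # about 1.538

# Cross-check against the closed form from the note:
# c^2 is the median of a chi-squared distribution with 3 degrees of freedom.
print(np.sqrt(chi2.ppf(0.5, df=3)))
```

The agreement between the two computations reflects the identity E[X²; X² ≤ t] = Pr(χ²₃ ≤ t) for a standard normal X.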

To see that Y {\displaystyle Y} is normally distributed—indeed, that its distribution is the same as that of X {\displaystyle X} —one may compute its cumulative distribution function:[6]

{\displaystyle {\begin{aligned}\Pr(Y\leq x)&=\Pr(\{|X|\leq c{\text{ and }}X\leq x\}{\text{ or }}\{|X|>c{\text{ and }}-X\leq x\})\\&=\Pr(|X|\leq c{\text{ and }}X\leq x)+\Pr(|X|>c{\text{ and }}-X\leq x)\\&=\Pr(|X|\leq c{\text{ and }}X\leq x)+\Pr(|X|>c{\text{ and }}X\leq x)\\&=\Pr(X\leq x),\end{aligned}}}

where the next-to-last equality follows from the symmetry of the distribution of X {\displaystyle X} and the symmetry of the condition that | X | c {\displaystyle |X|\leq c} .

In this example, the difference X − Y {\displaystyle X-Y} is nowhere near being normally distributed, since it has a substantial probability (about 0.88) of being equal to 0. By contrast, the normal distribution, being a continuous distribution, has no discrete part; that is, it does not concentrate more than zero probability at any single point. Consequently X {\displaystyle X} and Y {\displaystyle Y} are not jointly normally distributed, even though they are separately normally distributed.[2]

Examples with support almost everywhere in R 2 {\displaystyle \mathbb {R} ^{2}}

Suppose that the coordinates ( X , Y ) {\displaystyle (X,Y)} of a random point in the plane are chosen according to the probability density function

{\displaystyle p(x,y)={\frac {1}{2\pi {\sqrt {3}}}}\left[\exp \left(-{\frac {2}{3}}(x^{2}+xy+y^{2})\right)+\exp \left(-{\frac {2}{3}}(x^{2}-xy+y^{2})\right)\right].}
Then the random variables X {\displaystyle X} and Y {\displaystyle Y} are uncorrelated, and each of them is normally distributed (with mean 0 and variance 1), but they are not independent.[7]: 93 
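The density p is an equal mixture of two bivariate normal densities with unit variances and correlations −1/2 and +1/2, which gives a direct way to sample from it. A minimal sketch (NumPy, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# p(x, y) is an equal mixture of two bivariate normal densities with unit
# variances and correlations -1/2 and +1/2: pick a component, then sample.
rho = np.where(rng.random(n) < 0.5, -0.5, 0.5)
x = rng.standard_normal(n)
# Conditional on X = x, each component gives Y ~ N(rho * x, 1 - rho^2).
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

print(np.corrcoef(x, y)[0, 1])        # near 0: the two correlations cancel
print(y.mean(), y.var())              # near 0 and 1: Y is standard normal
# Dependence shows up in, e.g., the correlation of X^2 and Y^2:
print(np.corrcoef(x**2, y**2)[0, 1])  # clearly positive
```

The positive correlation between X² and Y² is enough to rule out independence, even though X and Y themselves are uncorrelated with standard normal marginals.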

It is well-known that the ratio C {\displaystyle C} of two independent standard normal random deviates X i {\displaystyle X_{i}} and Y i {\displaystyle Y_{i}} has a Cauchy distribution.[8][9][7]: 122  One can equally well start with the Cauchy random variable C {\displaystyle C} and derive the conditional distribution of Y i {\displaystyle Y_{i}} to satisfy the requirement that X i = C Y i {\displaystyle X_{i}=CY_{i}} with X i {\displaystyle X_{i}} and Y i {\displaystyle Y_{i}} independent and standard normal. It follows that

{\displaystyle Y_{i}=W_{i}{\sqrt {\frac {\chi _{i}^{2}\left(k=2\right)}{1+C^{2}}}}}
in which W i {\displaystyle W_{i}} is a Rademacher random variable and χ i 2 ( k = 2 ) {\displaystyle \chi _{i}^{2}\left(k=2\right)} is a Chi-squared random variable with two degrees of freedom.
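Granting this representation, a pair (Xᵢ, Yᵢ) can be simulated directly. The sketch below (NumPy, arbitrary seed) draws many independent copies and checks the claimed standard normal marginals and zero correlation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

c = rng.standard_cauchy(n)             # the Cauchy variable C
w = rng.choice([-1.0, 1.0], size=n)    # Rademacher sign W_i
chi2_2 = rng.chisquare(df=2, size=n)   # chi-squared with 2 degrees of freedom

y = w * np.sqrt(chi2_2 / (1 + c**2))   # Y_i as in the displayed formula
x = c * y                              # X_i = C * Y_i

# Both marginals should be standard normal, and X and Y uncorrelated:
print(x.mean(), x.var(), y.mean(), y.var())
print(np.corrcoef(x, y)[0, 1])
print(np.median(x / y))                # ratio recovers C, a Cauchy (median 0)
```

Here each Monte Carlo replicate has its own C; the dependence phenomenon discussed next arises when the same realization of C is shared between two pairs.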

Consider two such pairs ( X i , Y i ) {\displaystyle \left(X_{i},Y_{i}\right)} , i ∈ { 1 , 2 } {\displaystyle i\in \left\{1,2\right\}} . Note that C {\displaystyle C} is not indexed by i {\displaystyle i} – that is, the same Cauchy random variable C {\displaystyle C} is used in the definition of both ( X 1 , Y 1 ) {\displaystyle \left(X_{1},Y_{1}\right)} and ( X 2 , Y 2 ) {\displaystyle \left(X_{2},Y_{2}\right)} . This sharing of C {\displaystyle C} creates dependence across indices: neither X 1 {\displaystyle X_{1}} nor Y 1 {\displaystyle Y_{1}} is independent of Y 2 {\displaystyle Y_{2}} . Nevertheless, all of the X i {\displaystyle X_{i}} and Y i {\displaystyle Y_{i}} are uncorrelated, since the bivariate distributions all have reflection symmetry across the axes.[citation needed]

Non-normal joint distributions with normal marginals.

The figure shows scatterplots of samples drawn from the above distribution. This furnishes two examples of bivariate distributions that are uncorrelated and have normal marginal distributions but are not independent. The left panel shows the joint distribution of X 1 {\displaystyle X_{1}} and Y 2 {\displaystyle Y_{2}} ; the distribution has support everywhere but at the origin. The right panel shows the joint distribution of Y 1 {\displaystyle Y_{1}} and Y 2 {\displaystyle Y_{2}} ; the distribution has support everywhere except along the axes and has a discontinuity at the origin: the density diverges when the origin is approached along any straight path except along the axes.

See also

  • Correlation and dependence

References

  1. ^ a b Rosenthal, Jeffrey S. (2005). "A Rant About Uncorrelated Normal Random Variables".
  2. ^ a b c Melnick, Edward L.; Tenenbein, Aaron (November 1982). "Misspecifications of the Normal Distribution". The American Statistician. 36 (4): 372–373. doi:10.1080/00031305.1982.10483052.
  3. ^ Hogg, Robert; Tanis, Elliot (2001). "Chapter 5.4 The Bivariate Normal Distribution". Probability and Statistical Inference (6th ed.). Prentice Hall. pp. 258–259. ISBN 0130272949.
  4. ^ a b Ash, Robert B. "Lecture 21. The Multivariate Normal Distribution" (PDF). Lectures on Statistics. Archived from the original (PDF) on 2007-07-14.
  5. ^ Romano, Joesph P.; Siegel, Andrew F. (1986). Counterexamples in Probability and Statistics. Wadsworth & Brooks/Cole. pp. 65–66. ISBN 0-534-05568-0.
  6. ^ Wise, Gary L.; Hall, Eric B. (1993). Counterexamples in Probability and Real Analysis. Oxford University Press. pp. 140–141. ISBN 0-19-507068-2.
  7. ^ a b Stoyanov, Jordan M. (2013). Counterexamples in Probability (3rd ed.). Dover. ISBN 978-0-486-49998-7.
  8. ^ Patel, Jagdish K.; Read, Campbell B. (1996). Handbook of the Normal Distribution (2nd ed.). Taylor and Francis. p. 113. ISBN 978-0-824-79342-5.
  9. ^ Krishnamoorthy, K. (2006). Handbook of Statistical Distributions with Applications. CRC Press. p. 278. ISBN 978-1-420-01137-1.
Notes
  1. ^ More precisely 1.53817..., the square root of the median of a chi-squared distribution with 3 degrees of freedom.