Cochran's theorem

In statistics, Cochran's theorem, devised by William G. Cochran,[1] is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.[2]

Examples

Sample mean and sample variance

If X1, ..., Xn are independent, normally distributed random variables with mean μ and standard deviation σ, then

{\displaystyle U_{i}={\frac {X_{i}-\mu }{\sigma }}}

is standard normal for each i. Note that the total {\displaystyle \sum _{i}Q_{i}} (with the quadratic forms {\displaystyle Q_{i}} defined as in the statement of the theorem below) is equal to the sum of the squared U's, as shown here:

{\displaystyle \sum _{i}Q_{i}=\sum _{i}\sum _{jk}U_{j}B_{jk}^{(i)}U_{k}=\sum _{jk}U_{j}U_{k}\sum _{i}B_{jk}^{(i)}=\sum _{jk}U_{j}U_{k}\delta _{jk}=\sum _{j}U_{j}^{2}}

which stems from the assumption that {\displaystyle B^{(1)}+B^{(2)}=I}. We will therefore calculate this quantity first and later separate it into the Qi's. It is possible to write

{\displaystyle \sum _{i=1}^{n}U_{i}^{2}=\sum _{i=1}^{n}\left({\frac {X_{i}-{\overline {X}}}{\sigma }}\right)^{2}+n\left({\frac {{\overline {X}}-\mu }{\sigma }}\right)^{2}}

(here {\displaystyle {\overline {X}}} is the sample mean). To see this identity, multiply throughout by {\displaystyle \sigma ^{2}} and note that

{\displaystyle \sum (X_{i}-\mu )^{2}=\sum (X_{i}-{\overline {X}}+{\overline {X}}-\mu )^{2}}

and expand to give

{\displaystyle \sum (X_{i}-\mu )^{2}=\sum (X_{i}-{\overline {X}})^{2}+\sum ({\overline {X}}-\mu )^{2}+2\sum (X_{i}-{\overline {X}})({\overline {X}}-\mu ).}

The third term is zero because it is equal to a constant times

{\displaystyle \sum ({\overline {X}}-X_{i})=0,}

and the second term has just n identical terms added together. Thus

{\displaystyle \sum (X_{i}-\mu )^{2}=\sum (X_{i}-{\overline {X}})^{2}+n({\overline {X}}-\mu )^{2},}

and hence

{\displaystyle \sum \left({\frac {X_{i}-\mu }{\sigma }}\right)^{2}=\sum \left({\frac {X_{i}-{\overline {X}}}{\sigma }}\right)^{2}+n\left({\frac {{\overline {X}}-\mu }{\sigma }}\right)^{2}=\overbrace {\sum _{i}\left(U_{i}-{\frac {1}{n}}\sum _{j}{U_{j}}\right)^{2}} ^{Q_{1}}+\overbrace {{\frac {1}{n}}\left(\sum _{j}{U_{j}}\right)^{2}} ^{Q_{2}}=Q_{1}+Q_{2}.}

Now {\displaystyle B^{(2)}={\frac {J_{n}}{n}}}, where {\displaystyle J_{n}} is the n-by-n matrix of ones, which has rank 1. In turn {\displaystyle B^{(1)}=I_{n}-{\frac {J_{n}}{n}}}, given that {\displaystyle I_{n}=B^{(1)}+B^{(2)}}. This expression can also be obtained by expanding {\displaystyle Q_{1}} in matrix notation. The rank of {\displaystyle B^{(1)}} is {\displaystyle n-1}: the sum of all its rows equals zero, so its rank is at most {\displaystyle n-1}, and since {\displaystyle B^{(1)}+B^{(2)}=I_{n}} has rank n, the rank is exactly {\displaystyle n-1}. Thus the conditions for Cochran's theorem are met.
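These conditions are easy to check numerically. The following sketch (illustrative only; the variable names are not from the source) builds {\displaystyle B^{(1)}} and {\displaystyle B^{(2)}} for a small n and verifies that they are symmetric, sum to the identity, and have ranks n − 1 and 1.

```python
import numpy as np

n = 5
J = np.ones((n, n))        # the n-by-n matrix of ones
B2 = J / n                 # projects onto the span of (1, ..., 1); rank 1
B1 = np.eye(n) - B2        # complementary projection; rank n - 1

assert np.allclose(B1, B1.T) and np.allclose(B2, B2.T)        # symmetric
assert np.allclose(B1 + B2, np.eye(n))                        # B1 + B2 = I
print(np.linalg.matrix_rank(B1), np.linalg.matrix_rank(B2))   # 4 1
```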

Cochran's theorem then states that Q1 and Q2 are independent, with chi-squared distributions with n − 1 and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property characterizes the normal distribution – for no other distribution are the sample mean and sample variance independent.[3]
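A quick Monte Carlo check of these conclusions (a sketch only, taking μ = 0 and σ = 1 so that Ui = Xi; the sample sizes are arbitrary): Q1 and Q2 should have empirical means near n − 1 and 1, be essentially uncorrelated, and be consistent with the corresponding chi-squared distributions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 5, 100_000
U = rng.normal(size=(reps, n))                    # mu = 0, sigma = 1, so U_i = X_i
Q1 = ((U - U.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
Q2 = n * U.mean(axis=1) ** 2

print(Q1.mean(), Q2.mean())                       # ~ n - 1 and ~ 1
print(np.corrcoef(Q1, Q2)[0, 1])                  # ~ 0 (independence)
print(stats.kstest(Q1, "chi2", args=(n - 1,)).pvalue)   # should not be small
print(stats.kstest(Q2, "chi2", args=(1,)).pvalue)       # should not be small
```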

Distributions

The result for the distributions is written symbolically as

{\displaystyle \sum \left(X_{i}-{\overline {X}}\right)^{2}\sim \sigma ^{2}\chi _{n-1}^{2},}
{\displaystyle n({\overline {X}}-\mu )^{2}\sim \sigma ^{2}\chi _{1}^{2}.}

Both of these random variables are proportional to the true but unknown variance σ2; thus their ratio does not depend on σ2. Moreover, because they are statistically independent, the distribution of their ratio is given by

{\displaystyle {\frac {n\left({\overline {X}}-\mu \right)^{2}}{{\frac {1}{n-1}}\sum \left(X_{i}-{\overline {X}}\right)^{2}}}\sim {\frac {\chi _{1}^{2}}{{\frac {1}{n-1}}\chi _{n-1}^{2}}}\sim F_{1,n-1}}

where F1,n − 1 is the F-distribution with 1 and n − 1 degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution.
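As a sanity check (an illustrative sketch only; μ, σ and the sample sizes below are arbitrary), the ratio can be simulated and compared with the F(1, n − 1) distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 8, 50_000
mu, sigma = 2.0, 3.0
X = rng.normal(mu, sigma, size=(reps, n))
xbar = X.mean(axis=1)
s2 = ((X - xbar[:, None]) ** 2).sum(axis=1) / (n - 1)   # unbiased sample variance
ratio = n * (xbar - mu) ** 2 / s2

print(stats.kstest(ratio, "f", args=(1, n - 1)).pvalue)  # should not be small
```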

Estimation of variance

To estimate the variance σ2, one estimator that is sometimes used is the maximum likelihood estimator of the variance of a normal distribution

{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum \left(X_{i}-{\overline {X}}\right)^{2}.}

Cochran's theorem shows that

{\displaystyle {\frac {n{\widehat {\sigma }}^{2}}{\sigma ^{2}}}\sim \chi _{n-1}^{2}}

and the properties of the chi-squared distribution show that

{\displaystyle {\begin{aligned}E\left({\frac {n{\widehat {\sigma }}^{2}}{\sigma ^{2}}}\right)&=E\left(\chi _{n-1}^{2}\right)\\{\frac {n}{\sigma ^{2}}}E\left({\widehat {\sigma }}^{2}\right)&=(n-1)\\E\left({\widehat {\sigma }}^{2}\right)&={\frac {\sigma ^{2}(n-1)}{n}}\end{aligned}}}
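The bias factor (n − 1)/n is easy to confirm by simulation (a sketch with arbitrary parameter choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, sigma = 10, 200_000, 2.0
X = rng.normal(0.0, sigma, size=(reps, n))
sigma_hat_sq = ((X - X.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)  # MLE of sigma^2

print(sigma_hat_sq.mean())               # ~ sigma^2 * (n - 1) / n = 3.6
print(sigma**2 * (n - 1) / n)
```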

Alternative formulation

The following version is often seen when considering linear regression.[4] Suppose that {\displaystyle Y\sim N_{n}(0,\sigma ^{2}I_{n})} is a multivariate normal random vector (here {\displaystyle I_{n}} denotes the n-by-n identity matrix), and that {\displaystyle A_{1},\ldots ,A_{k}} are n-by-n symmetric matrices with {\displaystyle \sum _{i=1}^{k}A_{i}=I_{n}}. Then, defining {\displaystyle r_{i}=\operatorname {Rank} (A_{i})}, any one of the following conditions implies the other two (a numerical sketch follows the list):

  • {\displaystyle \sum _{i=1}^{k}r_{i}=n,}
  • {\displaystyle Y^{T}A_{i}Y\sim \sigma ^{2}\chi _{r_{i}}^{2}} (thus the {\displaystyle A_{i}} are positive semidefinite),
  • {\displaystyle Y^{T}A_{i}Y} is independent of {\displaystyle Y^{T}A_{j}Y} for {\displaystyle i\neq j.}
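In the regression setting alluded to above, a natural choice is A1 = H, the hat matrix of a design matrix Z with p columns, and A2 = In − H. The sketch below (illustrative only; Z is a randomly generated full-rank design) checks that the ranks add to n, that the two quadratic forms scaled by σ2 follow chi-squared distributions with p and n − p degrees of freedom, and that they are essentially uncorrelated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p, sigma = 20, 3, 1.5
Z = rng.normal(size=(n, p))                          # full-rank design matrix
H = Z @ np.linalg.inv(Z.T @ Z) @ Z.T                 # hat (projection) matrix, rank p
A1, A2 = H, np.eye(n) - H
print(np.linalg.matrix_rank(A1) + np.linalg.matrix_rank(A2))   # = n

reps = 50_000
Y = rng.normal(0.0, sigma, size=(reps, n))
Q1 = np.einsum("ri,ij,rj->r", Y, A1, Y)              # Y^T A_1 Y for each replicate
Q2 = np.einsum("ri,ij,rj->r", Y, A2, Y)
print(stats.kstest(Q1 / sigma**2, "chi2", args=(p,)).pvalue)      # should not be small
print(stats.kstest(Q2 / sigma**2, "chi2", args=(n - p,)).pvalue)  # should not be small
print(np.corrcoef(Q1, Q2)[0, 1])                                  # ~ 0
```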


Statement

Let U1, ..., UN be i.i.d. standard normally distributed random variables, and let {\displaystyle U=[U_{1},...,U_{N}]^{T}}. Let {\displaystyle B^{(1)},B^{(2)},\ldots ,B^{(k)}} be symmetric matrices. Define ri to be the rank of {\displaystyle B^{(i)}}. Define {\displaystyle Q_{i}=U^{T}B^{(i)}U}, so that the Qi are quadratic forms. Further assume {\displaystyle \sum _{i}Q_{i}=U^{T}U}.

Cochran's theorem states that the following are equivalent:

  • {\displaystyle r_{1}+\cdots +r_{k}=N},
  • the Qi are independent,
  • each Qi has a chi-squared distribution with ri degrees of freedom.[1][5]

The theorem is often stated with {\displaystyle \sum _{i}A_{i}=A}, where {\displaystyle A} is idempotent, and with {\displaystyle \sum _{i}r_{i}=N} replaced by {\displaystyle \sum _{i}r_{i}=\operatorname {rank} (A)}. But after an orthogonal transform, {\displaystyle A=\operatorname {diag} (I_{M},0)}, and so this version reduces to the theorem above.

Proof

Claim: Let {\displaystyle X} be a standard Gaussian in {\displaystyle \mathbb {R} ^{n}}. Then for any symmetric matrices {\displaystyle Q,Q'}, if {\displaystyle X^{T}QX} and {\displaystyle X^{T}Q'X} have the same distribution, then {\displaystyle Q,Q'} have the same eigenvalues (up to multiplicity).

Proof

Let the eigenvalues of {\displaystyle Q} be {\displaystyle \lambda _{1},...,\lambda _{n}}, and calculate the characteristic function of {\displaystyle X^{T}QX}. It comes out to be

{\displaystyle \phi (t)=\left(\prod _{j}(1-2i\lambda _{j}t)\right)^{-1/2}}

(To calculate it, first diagonalize {\displaystyle Q}, change into that frame, then use the fact that the characteristic function of a sum of independent variables is the product of their characteristic functions.)

For {\displaystyle X^{T}QX} and {\displaystyle X^{T}Q'X} to have the same distribution, their characteristic functions must be equal, so {\displaystyle Q,Q'} have the same eigenvalues (up to multiplicity).
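The characteristic-function formula above can be checked numerically at a single point t (a sketch with an arbitrary symmetric Q; each factor uses the principal branch of the square root):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 4, 200_000
M = rng.normal(size=(n, n))
Q = (M + M.T) / 2                          # an arbitrary symmetric matrix
lam = np.linalg.eigvalsh(Q)                # its eigenvalues

X = rng.normal(size=(reps, n))
q = np.einsum("ri,ij,rj->r", X, Q, X)      # samples of X^T Q X

t = 0.3
empirical = np.exp(1j * t * q).mean()
theoretical = np.prod((1 - 2j * lam * t) ** -0.5)
print(empirical, theoretical)              # should agree to a few decimal places
```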

Claim: {\displaystyle I=\sum _{i}B_{i}}.

Proof

By assumption, {\displaystyle U^{T}(I-\sum _{i}B_{i})U=0}. Since {\displaystyle I-\sum _{i}B_{i}} is symmetric, and {\displaystyle U^{T}(I-\sum _{i}B_{i})U} is equal in distribution to {\displaystyle U^{T}0U}, the previous claim shows that {\displaystyle I-\sum _{i}B_{i}} has the same eigenvalues as the zero matrix, hence {\displaystyle I-\sum _{i}B_{i}=0}.

Lemma: If {\displaystyle \sum _{i}M_{i}=I}, where each {\displaystyle M_{i}} is symmetric with eigenvalues in {0, 1}, then the {\displaystyle M_{i}} are simultaneously diagonalizable.

Proof

Fix i, and consider an eigenvector v of {\displaystyle M_{i}} such that {\displaystyle M_{i}v=v}. Then we have {\displaystyle v^{T}v=v^{T}Iv=v^{T}v+\sum _{j\neq i}v^{T}M_{j}v}, so all {\displaystyle v^{T}M_{j}v=0}; since each {\displaystyle M_{j}} is positive semidefinite (its eigenvalues are 0 and 1), this forces {\displaystyle M_{j}v=0} for {\displaystyle j\neq i}. Thus we obtain a splitting of {\displaystyle \mathbb {R} ^{N}} into {\displaystyle V\oplus V^{\perp }}, where V is the 1-eigenspace of {\displaystyle M_{i}} and is contained in the 0-eigenspaces of all other {\displaystyle M_{j}}. Now induct by restricting to {\displaystyle V^{\perp }}.

Now we prove the original theorem. We prove that the three conditions are equivalent by showing that each one implies the next in a cycle ({\displaystyle 1\to 2\to 3\to 1}).

Proof

Case: all {\displaystyle Q_{i}} are independent.

Fix some {\displaystyle i}, define {\displaystyle C_{i}=I-B_{i}=\sum _{j\neq i}B_{j}}, and diagonalize {\displaystyle B_{i}} by an orthogonal transform {\displaystyle O}. Then consider {\displaystyle OC_{i}O^{T}=I-OB_{i}O^{T}}, which is diagonalized as well.

Let {\displaystyle W=OU}; then W is also standard Gaussian, and we have

{\displaystyle Q_{i}=W^{T}(OB_{i}O^{T})W;\quad \sum _{j\neq i}Q_{j}=W^{T}(I-OB_{i}O^{T})W}

Inspecting their diagonal entries, we see that {\displaystyle Q_{i}\perp \sum _{j\neq i}Q_{j}} implies that their nonzero diagonal entries are disjoint.

Thus all eigenvalues of {\displaystyle B_{i}} are 0 or 1, so {\displaystyle Q_{i}} has a {\displaystyle \chi ^{2}} distribution with {\displaystyle r_{i}} degrees of freedom.

Case: each {\displaystyle Q_{i}} has a {\displaystyle \chi ^{2}(r_{i})} distribution.

Fix any {\displaystyle i}, diagonalize {\displaystyle B_{i}} by an orthogonal transform {\displaystyle O}, and reindex so that {\displaystyle OB_{i}O^{T}=\operatorname {diag} (\lambda _{1},...,\lambda _{r_{i}},0,...,0)}. Then {\displaystyle Q_{i}=\sum _{j}\lambda _{j}{U'}_{j}^{2}} for some {\displaystyle U'_{j}}, where {\displaystyle U'=OU} is a spherical rotation of {\displaystyle U}.

Since {\displaystyle Q_{i}\sim \chi ^{2}(r_{i})}, we get all {\displaystyle \lambda _{j}=1}. So all {\displaystyle B_{i}\succeq 0}, and each has eigenvalues {\displaystyle 0,1}.

So we may diagonalize them simultaneously (by the lemma) and add them up, to find {\displaystyle \sum _{i}r_{i}=N}.

Case: {\displaystyle r_{1}+\cdots +r_{k}=N}.

We first show that the matrices B(i) can be simultaneously diagonalized by an orthogonal matrix and that their non-zero eigenvalues are all equal to +1. Once that is shown, take this orthogonal transform to the simultaneous eigenbasis, in which the random vector {\displaystyle [U_{1},...,U_{N}]^{T}} becomes {\displaystyle [U'_{1},...,U'_{N}]^{T}}, where all the {\displaystyle U_{i}'} are still independent and standard Gaussian. Then the result follows.

Each of the matrices B(i) has rank ri and thus ri non-zero eigenvalues. For each i, the sum {\displaystyle C^{(i)}\equiv \sum _{j\neq i}B^{(j)}} has rank at most {\displaystyle \sum _{j\neq i}r_{j}=N-r_{i}}. Since {\displaystyle B^{(i)}+C^{(i)}=I_{N\times N}}, it follows that C(i) has rank exactly N − ri.

Therefore B(i) and C(i) can be simultaneously diagonalized. This can be shown by first diagonalizing B(i) by the spectral theorem. In this basis, it is of the form:

{\displaystyle {\begin{bmatrix}\lambda _{1}&0&0&\cdots &\cdots &&0\\0&\lambda _{2}&0&\cdots &\cdots &&0\\0&0&\ddots &&&&\vdots \\\vdots &\vdots &&\lambda _{r_{i}}&&\\\vdots &\vdots &&&0&\\0&\vdots &&&&\ddots \\0&0&\ldots &&&&0\end{bmatrix}}.}

Thus the lower {\displaystyle (N-r_{i})} rows are zero. Since {\displaystyle C^{(i)}=I-B^{(i)}}, it follows that in this basis these rows of C(i) contain an {\displaystyle (N-r_{i})\times (N-r_{i})} identity matrix as their right-hand block, with zeros in the rest of these rows. But since C(i) has rank N − ri, it must be zero elsewhere. Thus C(i) is diagonal in this basis as well. It follows that all the non-zero eigenvalues of both B(i) and C(i) are +1. This argument applies for all i, thus all B(i) are positive semidefinite.

Moreover, the above analysis can be repeated in the diagonal basis for {\displaystyle C^{(1)}=B^{(2)}+\sum _{j>2}B^{(j)}}. In this basis {\displaystyle C^{(1)}} acts as the identity on an {\displaystyle (N-r_{1})}-dimensional subspace, so it follows that both B(2) and {\displaystyle \sum _{j>2}B^{(j)}} are simultaneously diagonalizable on this subspace (and hence also together with B(1)). By iteration it follows that all of the B(i) are simultaneously diagonalizable.

Thus there exists an orthogonal matrix {\displaystyle S} such that for all {\displaystyle i}, {\displaystyle S^{\mathrm {T} }B^{(i)}S\equiv B^{(i)\prime }} is diagonal, where any entry {\displaystyle B_{x,y}^{(i)\prime }} with indices {\displaystyle x=y}, {\displaystyle \sum _{j=1}^{i-1}r_{j}<x=y\leq \sum _{j=1}^{i}r_{j}}, is equal to 1, while any entry with other indices is equal to 0.
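To make this concrete (a small sketch using the two matrices from the sample-variance example above, not part of the proof), such an S can be computed explicitly: because B(1) and B(2) commute, an eigenbasis of B(2) diagonalizes both, and the resulting diagonal matrices have only 0 and 1 entries and sum to the identity.

```python
import numpy as np

n = 6
J = np.ones((n, n))
B1, B2 = np.eye(n) - J / n, J / n          # the matrices from the example above

# B1 @ B2 = 0, so the two matrices commute and share an orthonormal eigenbasis.
_, S = np.linalg.eigh(B2)                  # columns of S: an orthonormal eigenbasis of B2
D1, D2 = S.T @ B1 @ S, S.T @ B2 @ S

print(np.round(D1, 10))                    # diagonal, with n - 1 ones and one zero
print(np.round(D2, 10))                    # diagonal, with a single one
assert np.allclose(D1 + D2, np.eye(n))
```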


See also

  • Cramér's theorem, on decomposing the normal distribution
  • Infinite divisibility (probability)

References

  1. ^ a b Cochran, W. G. (April 1934). "The distribution of quadratic forms in a normal system, with applications to the analysis of covariance". Mathematical Proceedings of the Cambridge Philosophical Society. 30 (2): 178–191. doi:10.1017/S0305004100016595.
  2. ^ Bapat, R. B. (2000). Linear Algebra and Linear Models (Second ed.). Springer. ISBN 978-0-387-98871-9.
  3. ^ Geary, R.C. (1936). "The Distribution of "Student's" Ratio for Non-Normal Samples". Supplement to the Journal of the Royal Statistical Society. 3 (2): 178–184. doi:10.2307/2983669. JFM 63.1090.03. JSTOR 2983669.
  4. ^ "Cochran's Theorem (A quick tutorial)" (PDF).
  5. ^ "Cochran's theorem", A Dictionary of Statistics, Oxford University Press, 2008-01-01, doi:10.1093/acref/9780199541454.001.0001/acref-9780199541454-e-294, ISBN 978-0-19-954145-4, retrieved 2022-05-18