Complex Wishart distribution

Complex Wishart
Notation: A ~ CW_p(Γ, n)
Parameters: n > p − 1 degrees of freedom (real)
            Γ > 0, a p × p Hermitian positive definite matrix
Support: A, a p × p Hermitian positive definite matrix
PDF: \frac{\det(\mathbf{A})^{n-p}\, e^{-\operatorname{tr}(\mathbf{\Gamma}^{-1}\mathbf{A})}}{\det(\mathbf{\Gamma})^{n}\,\mathcal{C}\widetilde{\Gamma}_{p}(n)}
  • \mathcal{C}\widetilde{\Gamma}_{p} is the p-variate complex multivariate gamma function
  • tr is the trace function
Mean: E[A] = nΓ
Mode: (n − p)Γ, for n ≥ p + 1
CF: \det(I_p - i\,\mathbf{\Gamma}\,\mathbf{\Theta})^{-n}

In statistics, the complex Wishart distribution is a complex version of the Wishart distribution. It is the distribution of n times the sample Hermitian covariance matrix of n independent zero-mean complex Gaussian random vectors. Its support is the set of p × p Hermitian positive definite matrices.[1]

The complex Wishart distribution describes the distribution of a complex-valued sample covariance matrix. Let

S_{p\times p} = \sum_{i=1}^{n} G_i G_i^{H}

where each G_i is an independent p-dimensional column vector of zero-mean complex Gaussian samples and (·)^H denotes the Hermitian (conjugate) transpose. If the covariance of G is E[GG^H] = M, then

S \sim \mathcal{CW}(M, n, p)

where CW(M, n, p) is the complex central Wishart distribution with n degrees of freedom and mean value, or scale matrix, M.
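As a concrete illustration, the sketch below (an illustrative addition, not taken from the cited sources) draws the G_i from a circularly symmetric complex Gaussian with covariance M, forms S, and checks that the sample average of S over many draws approaches nM, consistent with the mean nΓ quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_complex_wishart(M, n, rng):
    """Draw one S = sum_i G_i G_i^H with the G_i ~ CN(0, M) circularly symmetric."""
    p = M.shape[0]
    L = np.linalg.cholesky(M)                    # M = L L^H
    # Unit-covariance circularly symmetric entries: real/imaginary parts each with variance 1/2.
    Z = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
    G = L @ Z                                    # columns G_1 ... G_n with E[G_i G_i^H] = M
    return G @ G.conj().T

p, n = 3, 10
A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
M = A @ A.conj().T + p * np.eye(p)               # an arbitrary Hermitian positive definite scale

S_bar = np.mean([sample_complex_wishart(M, n, rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(S_bar - n * M)))             # small, since E[S] = n M
```

The probability density function of S is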

f_S(\mathbf{S}) = \frac{\left|\mathbf{S}\right|^{n-p}\, e^{-\operatorname{tr}(\mathbf{M}^{-1}\mathbf{S})}}{\left|\mathbf{M}\right|^{n}\,\mathcal{C}\widetilde{\Gamma}_{p}(n)}, \qquad n \geq p, \quad \left|\mathbf{M}\right| > 0

where

\mathcal{C}\widetilde{\Gamma}_{p}(n) = \pi^{p(p-1)/2} \prod_{j=1}^{p} \Gamma(n-j+1)

is the complex multivariate Gamma function.[2]
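In numerical work the density is best evaluated on the log scale. A minimal sketch follows (the helper names are illustrative, not from the cited sources); for p = 1 it can be checked against the scalar Gamma density, to which the complex Wishart then reduces.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import gamma

def log_complex_multigamma(n, p):
    """log of CGamma~_p(n) = pi^{p(p-1)/2} * prod_{j=1}^{p} Gamma(n - j + 1)."""
    j = np.arange(1, p + 1)
    return 0.5 * p * (p - 1) * np.log(np.pi) + gammaln(n - j + 1).sum()

def complex_wishart_logpdf(S, M, n):
    """log f_S(S) for the complex Wishart density with scale M and n >= p degrees of freedom."""
    p = S.shape[0]
    _, logdet_S = np.linalg.slogdet(S)           # S, M Hermitian positive definite
    _, logdet_M = np.linalg.slogdet(M)
    tr_term = np.trace(np.linalg.solve(M, S)).real
    return (n - p) * logdet_S - tr_term - n * logdet_M - log_complex_multigamma(n, p)

# p = 1 sanity check: the density reduces to a Gamma(shape n, scale M) density for scalar S.
n, M, s = 5, 2.0, 7.3
print(complex_wishart_logpdf(np.array([[s]]), np.array([[M]]), n),
      gamma(a=n, scale=M).logpdf(s))             # the two values agree
```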

Using the cyclic property of the trace, tr(ABC) = tr(CAB), we also get

f_S(\mathbf{S}) = \frac{\left|\mathbf{S}\right|^{n-p}}{\left|\mathbf{M}\right|^{n}\,\mathcal{C}\widetilde{\Gamma}_{p}(n)} \exp\left(-\sum_{i=1}^{n} G_i^{H}\mathbf{M}^{-1}G_i\right)

which closely resembles the complex multivariate pdf of the samples G_i themselves. The elements of G conventionally have circular symmetry, so that E[GG^T] = 0.
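The identity behind this rewriting, tr(M^{-1}S) = Σ_i G_i^H M^{-1} G_i, is easy to verify numerically; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 4, 7
A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
M = A @ A.conj().T + p * np.eye(p)                             # Hermitian positive definite
G = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
S = G @ G.conj().T

lhs = np.trace(np.linalg.solve(M, S))                          # tr(M^{-1} S)
rhs = sum(G[:, i].conj() @ np.linalg.solve(M, G[:, i]) for i in range(n))
print(np.allclose(lhs, rhs))                                   # True: the two exponents agree
```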

Inverse Complex Wishart

The distribution of the inverse, Y = S^{-1}, is the inverse complex Wishart distribution; according to Goodman[2] and Shaman[3] it is

f_Y(\mathbf{Y}) = \frac{\left|\mathbf{Y}\right|^{-(n+p)}\, e^{-\operatorname{tr}(\mathbf{M}\mathbf{Y}^{-1})}}{\left|\mathbf{M}\right|^{-n}\,\mathcal{C}\widetilde{\Gamma}_{p}(n)}, \qquad n \geq p, \quad \det(\mathbf{Y}) > 0

where M = Γ^{-1}.
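Samples from this distribution can be obtained simply by inverting complex Wishart draws. In the scalar case p = 1 the density above is an inverse-gamma density, which gives a cheap consistency check; the sketch below (illustrative only, with an arbitrary scale Γ) uses that fact.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, Gamma_scale = 6, 0.5                  # p = 1; Gamma_scale is the Wishart scale, M = 1 / Gamma_scale
M = 1.0 / Gamma_scale

# Scalar complex Wishart draws: S = sum of |g_i|^2 with E|g_i|^2 = Gamma_scale.
g = (rng.standard_normal((n, 100000)) + 1j * rng.standard_normal((n, 100000))) * np.sqrt(Gamma_scale / 2)
S = np.sum(np.abs(g) ** 2, axis=0)
Y = 1.0 / S                              # inverse complex Wishart samples

# For p = 1 the stated f_Y is an inverse-gamma density with shape n and scale M.
print(stats.kstest(Y, stats.invgamma(a=n, scale=M).cdf).pvalue)   # not tiny if the densities match
```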

If derived via the matrix inversion mapping S → Y = S^{-1}, the result depends on the complex Jacobian determinant

\mathcal{C}J_Y(Y^{-1}) = \left|Y\right|^{-2p-2}

Goodman and others[4] discuss such complex Jacobians.

Eigenvalues

The probability distribution of the eigenvalues of the complex Hermitian Wishart distribution is given by, for example, James[5] and Edelman.[6] For a p × p matrix with ν ≥ p degrees of freedom we have

f(\lambda_1, \dots, \lambda_p) = \tilde{K}_{\nu,p} \exp\left(-\frac{1}{2}\sum_{i=1}^{p}\lambda_i\right) \prod_{i=1}^{p}\lambda_i^{\nu-p} \prod_{i<j}(\lambda_i - \lambda_j)^2 \, d\lambda_1 \cdots d\lambda_p, \qquad \lambda_i \geq 0

where

\tilde{K}_{\nu,p}^{-1} = 2^{p\nu} \prod_{i=1}^{p} \Gamma(\nu-i+1)\,\Gamma(p-i+1)
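The constant is easiest to handle on the log scale; a small sketch follows (the function name is illustrative):

```python
import numpy as np
from scipy.special import gammaln

def log_K_tilde(nu, p):
    """log of K~_{nu,p}, i.e. minus the log of the product above."""
    i = np.arange(1, p + 1)
    return -(p * nu * np.log(2.0) + np.sum(gammaln(nu - i + 1) + gammaln(p - i + 1)))

print(log_K_tilde(8, 3))    # example evaluation for nu = 8, p = 3
```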

Note, however, that Edelman uses the "mathematical" definition of a complex normal variable Z = X + iY, where X and Y are independent and each has unit variance, so that Var(Z) = E(X² + Y²) = 2. Under the definition more common in engineering circles, with X and Y each having variance 1/2, the eigenvalues are reduced by a factor of 2.

While this expression gives little insight, there are limiting approximations for the marginal eigenvalue distribution. From Edelman we have that if S is a sample from the complex Wishart distribution with p = κν, 0 ≤ κ ≤ 1, such that S_{p×p} ~ CW(2I, p/κ), then in the limit p → ∞ the distribution of eigenvalues converges in probability to the Marchenko–Pastur distribution function

p_\lambda(\lambda) = \frac{\sqrt{[\lambda/2 - (\sqrt{\kappa}-1)^2]\,[(\sqrt{\kappa}+1)^2 - \lambda/2]}}{4\pi\kappa(\lambda/2)}, \qquad 2(\sqrt{\kappa}-1)^2 \leq \lambda \leq 2(\sqrt{\kappa}+1)^2, \quad 0 \leq \kappa \leq 1

This distribution becomes identical to the real Wishart case, on replacing λ by 2λ, on account of the doubled sample variance, so in the case S_{p×p} ~ CW(I, p/κ) the pdf reduces to the real Wishart one:

p_\lambda(\lambda) = \frac{\sqrt{[\lambda - (\sqrt{\kappa}-1)^2]\,[(\sqrt{\kappa}+1)^2 - \lambda]}}{2\pi\kappa\lambda}, \qquad (\sqrt{\kappa}-1)^2 \leq \lambda \leq (\sqrt{\kappa}+1)^2, \quad 0 \leq \kappa \leq 1
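This limit is easy to check by Monte Carlo. The sketch below uses the Var(Z) = 1 convention and the second form of the density; note that it compares the density with the eigenvalues of S/ν (the sample covariance matrix), which appears to be the normalization implied by the limit statement above.

```python
import numpy as np

rng = np.random.default_rng(3)
kappa = 0.25
p = 400
nu = int(p / kappa)                                  # nu = p / kappa degrees of freedom

# Var(Z) = 1 convention: real and imaginary parts each have variance 1/2.
G = (rng.standard_normal((p, nu)) + 1j * rng.standard_normal((p, nu))) / np.sqrt(2)
lam = np.linalg.eigvalsh(G @ G.conj().T / nu)        # eigenvalues of S / nu

def mp_pdf(x, kappa):
    lo, hi = (np.sqrt(kappa) - 1) ** 2, (np.sqrt(kappa) + 1) ** 2
    y = np.zeros_like(x)
    inside = (x > lo) & (x < hi)
    y[inside] = np.sqrt((x[inside] - lo) * (hi - x[inside])) / (2 * np.pi * kappa * x[inside])
    return y

hist, edges = np.histogram(lam, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - mp_pdf(centers, kappa)))) # discrepancy shrinks as p grows
```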

A special case is κ = 1, for which

p_\lambda(\lambda) = \frac{1}{4\pi}\left(\frac{8-\lambda}{\lambda}\right)^{1/2}, \qquad 0 \leq \lambda \leq 8

or, if the Var(Z) = 1 convention is used, then

p_\lambda(\lambda) = \frac{1}{2\pi}\left(\frac{4-\lambda}{\lambda}\right)^{1/2}, \qquad 0 \leq \lambda \leq 4.

The Wigner semicircle distribution arises by making the change of variable y = ±√λ in the latter and selecting the sign of y randomly, yielding the pdf

p_y(y) = \frac{1}{2\pi}\left(4 - y^2\right)^{1/2}, \qquad -2 \leq y \leq 2
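Written out, the change of variable behind this step (an expansion added here for clarity, not taken from the cited sources) is

p_y(y) = \tfrac{1}{2}\, p_\lambda(y^2)\left|\frac{d\lambda}{dy}\right| = \frac{1}{2}\cdot\frac{1}{2\pi}\sqrt{\frac{4-y^2}{y^2}}\cdot 2|y| = \frac{1}{2\pi}\sqrt{4-y^2}, \qquad -2 \leq y \leq 2,

where the factor 1/2 accounts for the random choice of sign.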

In place of the definition of the Wishart sample matrix above, S_{p\times p} = \sum_{j=1}^{\nu} G_j G_j^{H}, we can define a Gaussian ensemble

\mathbf{G} = [G_1 \dots G_\nu] \in \mathbb{C}^{p\times\nu}

such that S is the matrix product S = GG^H. The real non-negative eigenvalues of S are then the squared singular values of the ensemble G, and the singular values themselves have a quarter-circle distribution.

In the case κ > 1, so that ν < p, S is rank deficient with at least p − ν null eigenvalues. However, the singular values of G are invariant under transposition so, redefining \tilde{S} = G^{H}G, the ν × ν matrix \tilde{S} has a complex Wishart distribution, has full rank almost surely, and its eigenvalue distributions can be obtained from \tilde{S} in lieu, using all the previous equations.
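Both facts, that the eigenvalues of S are the squared singular values of G, and that for ν < p the nonzero eigenvalues coincide with those of the smaller matrix G^H G, are easy to confirm numerically; a short sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
p, nu = 6, 4                                         # kappa > 1: nu < p, so S is rank deficient
G = (rng.standard_normal((p, nu)) + 1j * rng.standard_normal((p, nu))) / np.sqrt(2)

S = G @ G.conj().T                                   # p x p
S_tilde = G.conj().T @ G                             # nu x nu

eig_S = np.linalg.eigvalsh(S)                        # ascending order
eig_S_tilde = np.linalg.eigvalsh(S_tilde)
sv2 = np.sort(np.linalg.svd(G, compute_uv=False)) ** 2

print(np.allclose(eig_S[:p - nu], 0, atol=1e-10))    # p - nu null eigenvalues
print(np.allclose(eig_S[p - nu:], eig_S_tilde))      # nonzero eigenvalues coincide
print(np.allclose(eig_S_tilde, sv2))                 # ...and equal the squared singular values of G
```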

In cases where the columns of G are not linearly independent and \tilde{S}_{\nu\times\nu} remains singular, a QR decomposition can be used to reduce G to a product of the form

\mathbf{G} = Q \begin{bmatrix} \mathbf{R} \\ 0 \end{bmatrix}

such that \mathbf{R}_{q\times q}, with q ≤ ν, is upper triangular with full rank and \tilde{\tilde{S}}_{q\times q} = \mathbf{R}^{H}\mathbf{R} has further reduced dimensionality.
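A sketch of this reduction using a column-pivoted (rank-revealing) QR follows; the pivoting is an addition here to make the numerical rank detection explicit, and the reduced factor is taken as the leading q rows of R rather than a strictly square block.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(5)
p, nu, q_true = 8, 5, 3
# Build G in C^{p x nu} with only q_true linearly independent columns.
B = rng.standard_normal((p, q_true)) + 1j * rng.standard_normal((p, q_true))
C = rng.standard_normal((q_true, nu)) + 1j * rng.standard_normal((q_true, nu))
G = B @ C

Q, R, piv = qr(G, mode='economic', pivoting=True)    # G[:, piv] = Q R, |diag(R)| non-increasing
q = int(np.sum(np.abs(np.diag(R)) > 1e-10 * np.abs(R[0, 0])))
R1 = R[:q, :]                                        # q x nu, full row rank

# The nonzero eigenvalues of S = G G^H are recovered from the reduced q x q matrix R1 R1^H.
eig_S = np.sort(np.linalg.eigvalsh(G @ G.conj().T))[-q:]
eig_red = np.linalg.eigvalsh(R1 @ R1.conj().T)
print(q, np.allclose(eig_S, eig_red))
```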

The eigenvalues are of practical significance in radio communications theory, since they define the Shannon channel capacity of a ν × p MIMO wireless channel which, to a first approximation, is modeled as a zero-mean complex Gaussian ensemble.
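For example, the ergodic capacity under equal power allocation over the ν transmit antennas is commonly computed from these eigenvalues as E[log2 det(I + (ρ/ν) HH^H)]; the sketch below is illustrative only, assuming an SNR of 10 dB and a 4 × 4 channel.

```python
import numpy as np

rng = np.random.default_rng(6)
n_rx, n_tx = 4, 4             # receive (p) and transmit (nu) antennas
snr = 10.0                    # linear SNR, i.e. 10 dB; an assumed example value

caps = []
for _ in range(5000):
    H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    lam = np.linalg.eigvalsh(H @ H.conj().T)              # eigenvalues of a complex Wishart matrix
    caps.append(np.sum(np.log2(1.0 + snr / n_tx * lam)))  # log2 det(I + (snr/n_tx) H H^H)
print(np.mean(caps))          # ergodic capacity estimate, bit/s/Hz
```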

References

  1. ^ N. R. Goodman (1963). "The distribution of the determinant of a complex Wishart distributed matrix". The Annals of Mathematical Statistics. 34 (1): 178–180. doi:10.1214/aoms/1177704251.
  2. ^ a b Goodman, N R (1963). "Statistical analysis based on a certain multivariate complex Gaussian distribution (an introduction)". Ann. Math. Statist. 34: 152–177. doi:10.1214/aoms/1177704250.
  3. ^ Shaman, Paul (1980). "The Inverted Complex Wishart Distribution and Its Application to Spectral Estimation". Journal of Multivariate Analysis. 10: 51–59. doi:10.1016/0047-259X(80)90081-0.
  4. ^ Cross, D J (May 2008). "On the Relation between Real and Complex Jacobian Determinants" (PDF). drexel.edu.
  5. ^ James, A. T. (1964). "Distributions of Matrix Variates and Latent Roots Derived from Normal Samples". Ann. Math. Statist. 35 (2): 475–501. doi:10.1214/aoms/1177703550.
  6. ^ Edelman, Alan (October 1988). "Eigenvalues and Condition Numbers of Random Matrices" (PDF). SIAM J. Matrix Anal. Appl. 9 (4): 543–560. doi:10.1137/0609045. hdl:1721.1/14322.