Distribution of the product of two random variables


A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables $X$ and $Y$, the distribution of the random variable $Z$ formed as the product $Z = XY$ is a product distribution.

The product distribution is the PDF of the product of sample values. This is not the same as the product of their PDFs, yet the two concepts are often ambiguously referred to as the "product of Gaussians".

Algebra of random variables

The product is one type of algebra of random variables: related to the product distribution are the ratio distribution, sum distribution (see List of convolutions of probability distributions) and difference distribution. More generally, one may speak of combinations of sums, differences, products and ratios.

Many of these distributions are described in Melvin D. Springer's 1979 book The Algebra of Random Variables.[1]

Derivation for independent random variables

If $X$ and $Y$ are two independent, continuous random variables described by probability density functions $f_X$ and $f_Y$, then the probability density function of $Z = XY$ is[2]

$$f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z/x)\,\frac{1}{|x|}\,dx.$$
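This integral is straightforward to check numerically for a concrete case. Below is a minimal quadrature sketch (the helper name and the uniform example are illustrative, not from the source); for $X, Y \sim \text{Uniform}(0,1)$ the exact product density is $f_Z(z) = -\ln z$.

```python
import math

def product_pdf(f_x, f_y, z, lo=1e-6, hi=1.0, n=20000):
    """Evaluate f_Z(z) = integral of f_X(x) f_Y(z/x) / |x| dx by the
    midpoint rule, assuming the support of X lies in [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += f_x(x) * f_y(z / x) / abs(x) * h
    return total

# Example: X, Y ~ Uniform(0,1); the exact product density is -ln z.
uniform = lambda t: 1.0 if 0.0 <= t <= 1.0 else 0.0
for z in (0.1, 0.5, 0.9):
    print(z, product_pdf(uniform, uniform, z), -math.log(z))
```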

Proof

We first write the cumulative distribution function of $Z$, starting from its definition:

$$\begin{aligned}F_Z(z) &\stackrel{\text{def}}{=} \mathbb{P}(Z \le z)\\&= \mathbb{P}(XY \le z)\\&= \mathbb{P}(XY \le z, X \ge 0) + \mathbb{P}(XY \le z, X \le 0)\\&= \mathbb{P}(Y \le z/X, X \ge 0) + \mathbb{P}(Y \ge z/X, X \le 0)\\&= \int_0^\infty f_X(x) \int_{-\infty}^{z/x} f_Y(y)\,dy\,dx + \int_{-\infty}^0 f_X(x) \int_{z/x}^{\infty} f_Y(y)\,dy\,dx\end{aligned}$$

We find the desired probability density function by taking the derivative of both sides with respect to $z$. Since $z$ appears only in the integration limits on the right-hand side, the derivative is easily performed using the fundamental theorem of calculus and the chain rule. (Note the negative sign needed when the variable occurs in the lower limit of integration.)

$$\begin{aligned}f_Z(z) &= \int_0^\infty f_X(x)\,f_Y(z/x)\,\frac{1}{x}\,dx - \int_{-\infty}^0 f_X(x)\,f_Y(z/x)\,\frac{1}{x}\,dx\\&= \int_0^\infty f_X(x)\,f_Y(z/x)\,\frac{1}{|x|}\,dx + \int_{-\infty}^0 f_X(x)\,f_Y(z/x)\,\frac{1}{|x|}\,dx\\&= \int_{-\infty}^{\infty} f_X(x)\,f_Y(z/x)\,\frac{1}{|x|}\,dx,\end{aligned}$$

where the absolute value is used to conveniently combine the two terms.[3]

Alternate proof

A faster, more compact proof begins with the same step of writing the cumulative distribution function of $Z$ from its definition:

$$\begin{aligned}F_Z(z) &\stackrel{\text{def}}{=} \mathbb{P}(Z \le z)\\&= \mathbb{P}(XY \le z)\\&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_X(x)\,f_Y(y)\,u(z - xy)\,dy\,dx\end{aligned}$$

where $u(\cdot)$ is the Heaviside step function, which serves to limit the region of integration to values of $x$ and $y$ satisfying $xy \le z$.

We find the desired probability density function by taking the derivative of both sides with respect to $z$.

$$\begin{aligned}f_Z(z) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_X(x)\,f_Y(y)\,\delta(z - xy)\,dy\,dx\\&= \int_{-\infty}^{\infty} f_X(x)\left[\int_{-\infty}^{\infty} f_Y(y)\,\delta(z - xy)\,dy\right]dx\\&= \int_{-\infty}^{\infty} f_X(x)\,f_Y(z/x)\,\frac{1}{|x|}\,dx,\end{aligned}$$

where we use the translation and scaling properties of the Dirac delta function $\delta$.

A more intuitive description of the procedure is illustrated in the figure below. The joint pdf $f_X(x)\,f_Y(y)$ exists in the $x$-$y$ plane, and an arc of constant $z$ value is shown as the shaded line. To find the marginal probability $f_Z(z)$ on this arc, integrate over increments of area $f(x,y)\,dx\,dy$ along this contour.

Diagram to illustrate the product distribution of two variables.

Starting with $y = \frac{z}{x}$, we have $dy = -\frac{z}{x^2}\,dx = -\frac{y}{x}\,dx$. So the probability increment is $\delta p = f(x,y)\,dx\,|dy| = f_X(x)\,f_Y(z/x)\,\frac{y}{|x|}\,dx\,dx$. Since $z = yx$ implies $dz = y\,dx$, we can relate the probability increment to the $z$-increment, namely $\delta p = f_X(x)\,f_Y(z/x)\,\frac{1}{|x|}\,dx\,dz$. Then integration over $x$ yields $f_Z(z) = \int f_X(x)\,f_Y(z/x)\,\frac{1}{|x|}\,dx$.

A Bayesian interpretation

Let $X \sim f_X(x)$ be a random sample drawn from probability distribution $f_X(x)$. Scaling $X$ by $\theta$ generates a sample from the scaled distribution $\theta X \sim \frac{1}{|\theta|}\,f_X\!\left(\frac{x}{\theta}\right)$, which can be written as a conditional distribution $g_X(x \mid \theta) = \frac{1}{|\theta|}\,f_X\!\left(\frac{x}{\theta}\right)$.

Letting $\theta$ be a random variable with pdf $f_\theta(\theta)$, the joint density of the scaled sample and $\theta$ is $g_X(x \mid \theta)\,f_\theta(\theta)$, and integrating out $\theta$ we get $h_X(x) = \int_{-\infty}^{\infty} g_X(x \mid \theta)\,f_\theta(\theta)\,d\theta$, so $\theta X$ is drawn from this distribution: $\theta X \sim h_X(x)$. However, substituting the definition of $g$ we also have $h_X(x) = \int_{-\infty}^{\infty} \frac{1}{|\theta|}\,f_X\!\left(\frac{x}{\theta}\right) f_\theta(\theta)\,d\theta$, which has the same form as the product distribution above. Thus the Bayesian posterior distribution $h_X(x)$ is the distribution of the product of the two independent random samples $\theta$ and $X$.

For the case of one variable being discrete, let $\theta$ have probability $P_i$ at levels $\theta_i$ with $\sum_i P_i = 1$. The conditional density is $f_X(x \mid \theta_i) = \frac{1}{|\theta_i|}\,f_X\!\left(\frac{x}{\theta_i}\right)$. Therefore

$$h_X(x) = \sum_i \frac{P_i}{|\theta_i|}\,f_X\!\left(\frac{x}{\theta_i}\right).$$

Expectation of product of random variables

When two random variables are statistically independent, the expectation of their product is the product of their expectations. This can be proved from the law of total expectation:

$$\operatorname{E}(XY) = \operatorname{E}(\operatorname{E}(XY \mid Y))$$

In the inner expression, Y is a constant. Hence:

$$\operatorname{E}(XY \mid Y) = Y \cdot \operatorname{E}[X \mid Y]$$
$$\operatorname{E}(XY) = \operatorname{E}(Y \cdot \operatorname{E}[X \mid Y])$$

This is true even if $X$ and $Y$ are statistically dependent, in which case $\operatorname{E}[X \mid Y]$ is a function of $Y$. In the special case in which $X$ and $Y$ are statistically independent, it is a constant independent of $Y$. Hence:

$$\operatorname{E}(XY) = \operatorname{E}(Y \cdot \operatorname{E}[X])$$
$$\operatorname{E}(XY) = \operatorname{E}(X) \cdot \operatorname{E}(Y)$$
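Because expectations of discrete variables are finite sums, this identity can be checked exactly on a toy example (the values and probabilities below are made up for illustration):

```python
# Two small independent discrete distributions, as (value, probability) pairs.
X = [(1, 0.2), (2, 0.5), (5, 0.3)]
Y = [(-1, 0.4), (3, 0.6)]

E = lambda dist: sum(v * p for v, p in dist)

# Independence: the joint probability of (x, y) is the product of marginals,
# so E(XY) is a double sum over the joint distribution.
E_XY = sum(x * y * px * py for x, px in X for y, py in Y)

print(E_XY, E(X) * E(Y))  # both ≈ 2.7 * 1.4 = 3.78
```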

Variance of the product of independent random variables

Let $X, Y$ be uncorrelated random variables with means $\mu_X, \mu_Y$ and variances $\sigma_X^2, \sigma_Y^2$. If, additionally, the random variables $X^2$ and $Y^2$ are uncorrelated, then the variance of the product $XY$ is[4]

$$\operatorname{Var}(XY) = (\sigma_X^2 + \mu_X^2)(\sigma_Y^2 + \mu_Y^2) - \mu_X^2\,\mu_Y^2$$

In the case of the product of more than two variables, if $X_1 \cdots X_n,\; n > 2$, are statistically independent then[5] the variance of their product is

$$\operatorname{Var}(X_1 X_2 \cdots X_n) = \prod_{i=1}^{n}(\sigma_i^2 + \mu_i^2) - \prod_{i=1}^{n}\mu_i^2$$
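The two-variable formula can likewise be verified exactly with discrete distributions, where all moments are finite sums (the values below are arbitrary):

```python
# Two independent discrete distributions, as (value, probability) pairs.
X = [(0, 0.5), (2, 0.5)]
Y = [(1, 0.3), (4, 0.7)]

def moments(dist):
    """Return (mean, variance) of a discrete distribution."""
    m = sum(v * p for v, p in dist)
    var = sum((v - m) ** 2 * p for v, p in dist)
    return m, var

mx, vx = moments(X)
my, vy = moments(Y)
formula = (vx + mx ** 2) * (vy + my ** 2) - mx ** 2 * my ** 2

# Direct computation over the joint distribution of the product XY.
products = [(x * y, px * py) for x, px in X for y, py in Y]
m_z, var_z = moments(products)
print(var_z, formula)  # agree
```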

Characteristic function of product of random variables

Assume $X$, $Y$ are independent random variables. The characteristic function of $X$ is $\varphi_X(t)$, and the distribution of $Y$ is known. Then, from the law of total expectation, we have[6]

$$\begin{aligned}\varphi_Z(t) &= \operatorname{E}(e^{itXY})\\&= \operatorname{E}(\operatorname{E}(e^{itXY} \mid Y))\\&= \operatorname{E}(\varphi_X(tY))\end{aligned}$$

If the characteristic functions and distributions of both $X$ and $Y$ are known, then alternatively, $\varphi_Z(t) = \operatorname{E}(\varphi_Y(tX))$ also holds.

Mellin transform

The Mellin transform of a distribution $f(x)$ with support only on $x \ge 0$ and having a random sample $X$ is

$$\mathcal{M}f(x) = \varphi(s) = \int_0^\infty x^{s-1} f(x)\,dx = \operatorname{E}[X^{s-1}].$$

The inverse transform is

$$\mathcal{M}^{-1}\varphi(s) = f(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} x^{-s}\varphi(s)\,ds.$$

If $X$ and $Y$ are two independent random samples from different distributions, then the Mellin transform of their product is equal to the product of their Mellin transforms:

$$\mathcal{M}_{XY}(s) = \mathcal{M}_X(s)\,\mathcal{M}_Y(s)$$

If $s$ is restricted to integer values, a simpler result is

$$\operatorname{E}[(XY)^n] = \operatorname{E}[X^n]\,\operatorname{E}[Y^n]$$

Thus the moments of the random product $XY$ are the products of the corresponding moments of $X$ and $Y$, and this extends to non-integer moments, for example

$$\operatorname{E}[(XY)^{1/p}] = \operatorname{E}[X^{1/p}]\,\operatorname{E}[Y^{1/p}].$$

The pdf of a random variable can be reconstructed from its moments using the saddlepoint approximation method.

A further result is that for independent $X$, $Y$,

$$\operatorname{E}[X^p Y^q] = \operatorname{E}[X^p]\,\operatorname{E}[Y^q]$$

Gamma distribution example

To illustrate how the product of moments yields a much simpler result than finding the moments of the distribution of the product, let $X, Y$ be sampled from two Gamma distributions, $f_{\text{Gamma}}(x;\theta,1) = \Gamma(\theta)^{-1}\,x^{\theta-1}\,e^{-x}$, with parameters $\theta = \alpha, \beta$, whose moments are

$$\operatorname{E}[X^p] = \int_0^\infty x^p\,f_{\text{Gamma}}(x;\theta,1)\,dx = \frac{\Gamma(\theta+p)}{\Gamma(\theta)}.$$

Multiplying the corresponding moments gives the Mellin transform result

$$\operatorname{E}[(XY)^p] = \operatorname{E}[X^p]\,\operatorname{E}[Y^p] = \frac{\Gamma(\alpha+p)}{\Gamma(\alpha)}\,\frac{\Gamma(\beta+p)}{\Gamma(\beta)}$$
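The sketch below confirms this numerically: each Gamma moment is approximated by midpoint-rule quadrature and the product is compared with the Gamma-function ratio (the function name and parameter choices are illustrative):

```python
import math

def gamma_moment(theta, p, hi=60.0, n=200000):
    """E[X^p] for X ~ Gamma(theta, 1): quadrature of
    x^(p+theta-1) exp(-x) / Gamma(theta) over [0, hi]."""
    h = hi / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x ** (p + theta - 1) * math.exp(-x) * h
    return total / math.gamma(theta)

alpha, beta, p = 2.0, 3.5, 1.5
lhs = gamma_moment(alpha, p) * gamma_moment(beta, p)          # E[X^p] E[Y^p]
rhs = (math.gamma(alpha + p) / math.gamma(alpha)) * \
      (math.gamma(beta + p) / math.gamma(beta))               # Mellin result
print(lhs, rhs)  # agree to quadrature accuracy
```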

Independently, it is known that the product of two independent Gamma-distributed samples (~Gamma(α,1) and ~Gamma(β,1)) has a K-distribution:

$$f(z;\alpha,\beta) = 2\,\Gamma(\alpha)^{-1}\Gamma(\beta)^{-1}\,z^{\frac{\alpha+\beta}{2}-1}\,K_{\alpha-\beta}(2\sqrt{z}) = \frac{1}{\alpha\beta}\,f_K\!\left(\frac{z}{\alpha\beta}; 1, \alpha, \beta\right), \quad z \ge 0$$

To find the moments of this, make the change of variable $y = 2\sqrt{z}$, simplifying similar integrals to

$$\int_0^\infty z^p\,K_\nu(2\sqrt{z})\,dz = 2^{-2p-1}\int_0^\infty y^{2p+1}\,K_\nu(y)\,dy$$

thus, for the $p$-th raw moment,

$$2\int_0^\infty z^{\frac{\alpha+\beta}{2}+p-1}\,K_{\alpha-\beta}(2\sqrt{z})\,dz = 2^{-(\alpha+\beta)-2p+2}\int_0^\infty y^{(\alpha+\beta)+2p-1}\,K_{\alpha-\beta}(y)\,dy$$

The definite integral

$$\int_0^\infty y^\mu\,K_\nu(y)\,dy = 2^{\mu-1}\,\Gamma\!\left(\frac{1+\mu+\nu}{2}\right)\Gamma\!\left(\frac{1+\mu-\nu}{2}\right)$$

is well documented, and with $\mu = (\alpha+\beta)+2p-1$, $\nu = \alpha-\beta$ we finally have

$$\begin{aligned}\operatorname{E}[Z^p] &= \frac{2^{-(\alpha+\beta)-2p+2}\;2^{(\alpha+\beta)+2p-2}}{\Gamma(\alpha)\,\Gamma(\beta)}\,\Gamma\!\left(\frac{(\alpha+\beta+2p)+(\alpha-\beta)}{2}\right)\Gamma\!\left(\frac{(\alpha+\beta+2p)-(\alpha-\beta)}{2}\right)\\&= \frac{\Gamma(\alpha+p)\,\Gamma(\beta+p)}{\Gamma(\alpha)\,\Gamma(\beta)}\end{aligned}$$

which, after some reduction, agrees with the moment product result above.

If $X$, $Y$ are drawn independently from Gamma distributions with shape parameters $\alpha,\;\beta$ then

$$\operatorname{E}[X^p Y^q] = \operatorname{E}[X^p]\,\operatorname{E}[Y^q] = \frac{\Gamma(\alpha+p)}{\Gamma(\alpha)}\,\frac{\Gamma(\beta+q)}{\Gamma(\beta)}$$

This type of result is universally true, since for bivariate independent variables $f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$; thus

$$\begin{aligned}\operatorname{E}[X^p Y^q] &= \int_{x=-\infty}^{\infty}\int_{y=-\infty}^{\infty} x^p y^q\,f_{X,Y}(x,y)\,dy\,dx\\&= \int_{x=-\infty}^{\infty} x^p\Big[\int_{y=-\infty}^{\infty} y^q\,f_Y(y)\,dy\Big] f_X(x)\,dx\\&= \int_{x=-\infty}^{\infty} x^p\,f_X(x)\,dx \int_{y=-\infty}^{\infty} y^q\,f_Y(y)\,dy\\&= \operatorname{E}[X^p]\,\operatorname{E}[Y^q]\end{aligned}$$

or, equivalently, it is clear that $X^p$ and $Y^q$ are independent variables.

Special cases

Lognormal distributions

The distribution of the product of two random variables which have lognormal distributions is again lognormal. This is itself a special case of a more general set of results in which the logarithm of the product can be written as the sum of the logarithms. Thus, in cases where a simple result can be found in the list of convolutions of probability distributions, where the distributions to be convolved are those of the logarithms of the components of the product, the result can be transformed to give the distribution of the product. However, this approach is useful only where the logarithms of the components of the product belong to some standard family of distributions.
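For lognormal components the claim can be tested directly against the general product formula $f_Z(z) = \int f_X(x) f_Y(z/x)\frac{1}{x}\,dx$: numerical quadrature should reproduce a lognormal density with parameters $\mu_1+\mu_2$ and $\sigma_1^2+\sigma_2^2$. A sketch (all names and parameter values are illustrative):

```python
import math

def lognorm_pdf(x, mu, sigma):
    """Density of the lognormal distribution with log-mean mu, log-sd sigma."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
        / (x * sigma * math.sqrt(2 * math.pi))

def lognormal_product_pdf(z, mu1, s1, mu2, s2, n=4000):
    """Product density via f_Z(z) = ∫ f_X(x) f_Y(z/x) (1/x) dx, computed in
    u = ln x (so dx/x = du), midpoint rule over a wide u-range."""
    lo, hi = math.log(z) - 12.0, 12.0
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = math.exp(lo + (i + 0.5) * h)
        total += lognorm_pdf(x, mu1, s1) * lognorm_pdf(z / x, mu2, s2) * h
    return total

z, mu1, s1, mu2, s2 = 1.7, 0.2, 0.5, -0.1, 0.8
direct = lognormal_product_pdf(z, mu1, s1, mu2, s2)
closed = lognorm_pdf(z, mu1 + mu2, math.hypot(s1, s2))  # lognormal claim
print(direct, closed)  # agree: the product is lognormal
```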

Uniformly distributed independent random variables

Let $Z$ be the product of two independent variables, $Z = X_1 X_2$, each uniformly distributed on the interval [0,1], possibly the outcome of a copula transformation. As noted in "Lognormal distributions" above, PDF convolution operations in the log domain correspond to the product of sample values in the original domain. Thus, making the transformation $u = \ln(x)$, such that $p_U(u)\,|du| = p_X(x)\,|dx|$, each variate is distributed independently on $u$ as

$$p_U(u) = \frac{p_X(x)}{|du/dx|} = \frac{1}{x^{-1}} = e^u, \quad -\infty < u \le 0.$$

and the convolution of the two distributions is the autoconvolution (for $y \le 0$, the support constraints $u \le 0$ and $y - u \le 0$ restrict $u$ to $[y, 0]$)

$$c(y) = \int_{u=y}^{0} e^u\,e^{y-u}\,du = e^y\int_{y}^{0} du = -y\,e^y, \quad -\infty < y \le 0$$

Next, transform back to the variable $z = e^y$, yielding the distribution

$$c_2(z) = c_Y(y)\,/\,|dz/dy| = \frac{-y\,e^y}{e^y} = -y = \ln(1/z)$$

on the interval [0,1].

For the product of multiple ($n > 2$) independent samples the characteristic function route is favorable. If we define $\tilde{y} = -y$, then the single-sample log variable $\tilde{u} = -u$ is exponentially distributed, $p(\tilde{u}) = e^{-\tilde{u}}$, i.e. a Gamma distribution of shape 1 and scale factor 1, whose known CF is $(1-it)^{-1}$ (the two-sample convolution $c(\tilde{y}) = \tilde{y}\,e^{-\tilde{y}}$ above is correspondingly Gamma of shape 2). Note that $|d\tilde{y}| = |dy|$, so the Jacobian of the transformation is unity.

The negated log of the product of $n$ independent uniform samples, $\tilde{y} = -\ln(x_1 \cdots x_n)$, is a sum of $n$ such independent Gamma(1) samples and therefore has CF $(1-it)^{-n}$, which is known to be the CF of a Gamma distribution of shape $n$:

$$c_n(\tilde{y}) = \Gamma(n)^{-1}\,\tilde{y}^{\,n-1}\,e^{-\tilde{y}} = \Gamma(n)^{-1}\,(-y)^{n-1}\,e^{y}.$$

Make the inverse transformation $z = e^y$ to extract the PDF of the product of the $n$ samples:

$$f_n(z) = \frac{c_n(y)}{|dz/dy|} = \Gamma(n)^{-1}\,(-\log z)^{n-1}\,e^y/e^y = \frac{(-\log z)^{n-1}}{(n-1)!}, \quad 0 < z \le 1$$
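This pdf and its CDF (obtained by repeated integration by parts) can be checked against a seeded simulation of products of uniforms; the function names below are illustrative:

```python
import math, random

def pdf_product_uniform(z, n):
    """Density (-log z)^(n-1) / (n-1)! of the product of n iid U(0,1) samples."""
    return (-math.log(z)) ** (n - 1) / math.factorial(n - 1)

def cdf_product_uniform(z, n):
    """Corresponding CDF: z * sum_{k<n} (-log z)^k / k!  (differentiating
    term by term telescopes back to the pdf above)."""
    return z * sum((-math.log(z)) ** k / math.factorial(k) for k in range(n))

random.seed(1)
n, z0, trials = 4, 0.1, 100_000
hits = sum(math.prod(random.random() for _ in range(n)) <= z0
           for _ in range(trials))
print(hits / trials, cdf_product_uniform(z0, n))  # empirical vs analytic CDF
```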

The following, more conventional, derivation from Stack Exchange[7] is consistent with this result. First of all, letting $Z_2 = X_1 X_2$, its CDF is

$$\begin{aligned}F_{Z_2}(z) = \Pr[Z_2 \le z] &= \int_{x=0}^{1}\Pr\!\left[X_2 \le \frac{z}{x}\right] f_{X_1}(x)\,dx\\&= \int_{x=0}^{z} 1\,dx + \int_{x=z}^{1}\frac{z}{x}\,dx\\&= z - z\log z, \quad 0 < z \le 1\end{aligned}$$

The density of $z_2$ is then $f(z_2) = -\log(z_2)$.

Multiplying by a third independent sample gives the distribution function

$$\begin{aligned}F_{Z_3}(z) = \Pr[Z_3 \le z] &= \int_{x=0}^{1}\Pr\!\left[X_3 \le \frac{z}{x}\right] f_{Z_2}(x)\,dx\\&= -\int_{x=0}^{z}\log(x)\,dx - \int_{x=z}^{1}\frac{z}{x}\,\log(x)\,dx\\&= -z\big(\log(z) - 1\big) + \frac{1}{2}\,z\log^2(z)\end{aligned}$$

Taking the derivative yields

$$f_{Z_3}(z) = \frac{1}{2}\log^2(z), \quad 0 < z \le 1.$$

The author of the note conjectures that, in general,

$$f_{Z_n}(z) = \frac{(-\log z)^{n-1}}{(n-1)!}, \quad 0 < z \le 1$$

The geometry of the product distribution of two random variables in the unit square.

The figure illustrates the nature of the integrals above. The shaded area within the unit square and below the line $z = xy$ represents the CDF of $z$. This divides into two parts. The first is for $0 < x < z$, where the increment of area in the vertical slot is just equal to $dx$. The second part lies below the $xy$ line, has $y$-height $z/x$, and incremental area $(z/x)\,dx$.

Independent central-normal distributions

The product of two independent Normal samples follows a modified Bessel-function density. Let $x, y$ be independent samples from a Normal(0,1) distribution and $z = xy$. Then

$$p_Z(z) = \frac{K_0(|z|)}{\pi}, \quad -\infty < z < +\infty$$


The variance of this distribution could be determined, in principle, by a definite integral from Gradshteyn and Ryzhik,[8]

$$\int_0^\infty x^\mu\,K_\nu(ax)\,dx = 2^{\mu-1} a^{-\mu-1}\,\Gamma\!\left(\frac{1+\mu+\nu}{2}\right)\Gamma\!\left(\frac{1+\mu-\nu}{2}\right), \quad a > 0,\; \nu+1\pm\mu > 0$$

thus

$$\operatorname{E}[Z^2] = \int_{-\infty}^{\infty}\frac{z^2\,K_0(|z|)}{\pi}\,dz = \frac{4}{\pi}\,\Gamma^2\!\left(\frac{3}{2}\right) = 1$$

A much simpler result, stated in a section above, is that the variance of the product of zero-mean independent samples is equal to the product of their variances. Since the variance of each Normal sample is one, the variance of the product is also one.
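Both the unit normalization and the unit variance can be verified without a special-function library by computing $K_0$ from its integral representation $K_0(x) = \int_0^\infty e^{-x\cosh t}\,dt$; the grid sizes below are ad hoc and the quadrature is deliberately rough:

```python
import math

def k0(x, n=400, hi=20.0):
    """Modified Bessel K0 via K0(x) = ∫_0^∞ exp(-x cosh t) dt, midpoint rule."""
    h = hi / n
    return sum(math.exp(-x * math.cosh((i + 0.5) * h)) * h for i in range(n))

def integrate(f, lo, hi, n=3000):
    """Midpoint-rule quadrature of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) * h for i in range(n))

# By symmetry of p_Z(z) = K0(|z|)/π:
norm = 2 / math.pi * integrate(k0, 1e-4, 30.0)                         # ∫ p_Z dz
second = 2 / math.pi * integrate(lambda z: z * z * k0(z), 1e-4, 30.0)  # E[Z²]
print(norm, second)  # both ≈ 1
```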

The product of two Gaussian samples is often confused with the product of two Gaussian PDFs. The latter simply results in a bivariate Gaussian distribution.

Correlated central-normal distributions

The case of the product of correlated Normal samples was recently addressed by Nadarajah and Pogány.[9] Let $X$, $Y$ be zero-mean, unit-variance, normally distributed variates with correlation coefficient $\rho$, and let $Z = XY$. Then

$$f_Z(z) = \frac{1}{\pi\sqrt{1-\rho^2}}\,\exp\!\left(\frac{\rho z}{1-\rho^2}\right) K_0\!\left(\frac{|z|}{1-\rho^2}\right)$$

Mean and variance: For the mean we have $\operatorname{E}[Z] = \rho$ from the definition of the correlation coefficient. The variance can be found by transforming from two unit-variance, zero-mean, uncorrelated variables $U$, $V$. Let

$$X = U, \qquad Y = \rho U + \sqrt{1-\rho^2}\,V$$

Then $X$, $Y$ are unit-variance variables with correlation coefficient $\rho$, and

$$(XY)^2 = U^2\left(\rho U + \sqrt{1-\rho^2}\,V\right)^2 = U^2\left(\rho^2 U^2 + 2\rho\sqrt{1-\rho^2}\,UV + (1-\rho^2)V^2\right)$$

Removing odd-power terms, whose expectations are obviously zero, we get

$$\operatorname{E}[(XY)^2] = \rho^2\operatorname{E}[U^4] + (1-\rho^2)\operatorname{E}[U^2]\operatorname{E}[V^2] = 3\rho^2 + (1-\rho^2) = 1 + 2\rho^2$$

Since $(\operatorname{E}[Z])^2 = \rho^2$, we have

$$\operatorname{Var}(Z) = \operatorname{E}[Z^2] - (\operatorname{E}[Z])^2 = 1 + 2\rho^2 - \rho^2 = 1 + \rho^2$$
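A seeded simulation built on the same $U, V$ construction confirms these moments (sample size, seed and $\rho$ are arbitrary choices):

```python
import math, random

random.seed(7)
rho, trials = 0.6, 400_000
root = math.sqrt(1 - rho ** 2)
s1 = s2 = 0.0
for _ in range(trials):
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    z = u * (rho * u + root * v)   # Z = XY with X = U, Y = ρU + √(1-ρ²)V
    s1 += z
    s2 += z * z
mean, second = s1 / trials, s2 / trials
print(mean, second, second - mean ** 2)  # ≈ ρ, 1 + 2ρ², 1 + ρ²
```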

High correlation asymptote: In the highly correlated case $\rho \to 1$ the product converges on the square of one sample. In this case the $K_0$ asymptote is $K_0(x) \to \sqrt{\tfrac{\pi}{2x}}\,e^{-x}$ in the limit as $x = \frac{|z|}{1-\rho^2} \to \infty$, and

$$\begin{aligned}p(z) &\to \frac{1}{\pi\sqrt{1-\rho^2}}\exp\!\left(\frac{\rho z}{1-\rho^2}\right)\sqrt{\frac{\pi(1-\rho^2)}{2z}}\exp\!\left(-\frac{|z|}{1-\rho^2}\right)\\&= \frac{1}{\sqrt{2\pi z}}\exp\!\left(\frac{-|z|+\rho z}{(1-\rho)(1+\rho)}\right)\\&= \frac{1}{\sqrt{2\pi z}}\exp\!\left(\frac{-z}{1+\rho}\right), \quad z > 0\\&\to \frac{1}{\Gamma(\tfrac{1}{2})\sqrt{2z}}\,e^{-\tfrac{z}{2}}, \quad \text{as } \rho \to 1\end{aligned}$$

which is a Chi-squared distribution with one degree of freedom.

Multiple correlated samples. Nadarajah et al. further show that if $Z_1, Z_2, \ldots, Z_n$ are $n$ iid random variables sampled from $f_Z(z)$ and $\bar{Z} = \tfrac{1}{n}\sum Z_i$ is their mean, then

$$f_{\bar{Z}}(z) = \frac{n^{n/2}\,2^{-n/2}}{\Gamma(\frac{n}{2})}\,|z|^{n/2-1}\exp\!\left(\frac{\beta-\gamma}{2}\,z\right) W_{0,\frac{1-n}{2}}(|z|), \quad -\infty < z < \infty,$$

where $W$ is the Whittaker function and $\beta = \frac{n}{1-\rho},\;\gamma = \frac{n}{1+\rho}$.

Using the identity $W_{0,\nu}(x) = \sqrt{\frac{x}{\pi}}\,K_\nu(x/2),\; x \ge 0$ (see for example eqn. 13.13.9 of the DLMF compilation[10]), this expression can be somewhat simplified to

$$f_{\bar{z}}(z) = \frac{n^{n/2}\,2^{-n/2}}{\Gamma(\frac{n}{2})}\,|z|^{n/2-1}\exp\!\left(\frac{\beta-\gamma}{2}\,z\right)\sqrt{\frac{\beta+\gamma}{\pi}\,|z|}\;K_{\frac{1-n}{2}}\!\left(\frac{\beta+\gamma}{2}\,|z|\right), \quad -\infty < z < \infty.$$

This pdf gives the marginal distribution of a sample bivariate normal covariance, a result also shown in the Wishart distribution article. The approximate distribution of a correlation coefficient can be found via the Fisher transformation.

Multiple non-central correlated samples. The distribution of the product of correlated non-central normal samples was derived by Cui et al.[11] and takes the form of an infinite series of modified Bessel functions of the first kind.

Moments of product of correlated central normal samples

For a central normal distribution N(0,σ²) the moments are

$$\operatorname{E}[X^p] = \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty} x^p\exp\!\left(-\tfrac{x^2}{2\sigma^2}\right)dx = \begin{cases}0 & \text{if } p \text{ is odd,}\\ \sigma^p\,(p-1)!! & \text{if } p \text{ is even,}\end{cases}$$

where $n!!$ denotes the double factorial.

If $X, Y \sim \text{Norm}(0,1)$ are central correlated variables, the simplest bivariate case of the multivariate normal moment problem described by Kan,[12] then

$$\operatorname{E}[X^p Y^q] = \begin{cases}0 & \text{if } p+q \text{ is odd,}\\ \dfrac{p!\,q!}{2^{\frac{p+q}{2}}}\displaystyle\sum_{k=0}^{t}\frac{(2\rho)^{2k}}{\left(\frac{p}{2}-k\right)!\left(\frac{q}{2}-k\right)!\,(2k)!} & \text{if } p \text{ and } q \text{ are even,}\\ \dfrac{p!\,q!}{2^{\frac{p+q}{2}}}\displaystyle\sum_{k=0}^{t}\frac{(2\rho)^{2k+1}}{\left(\frac{p-1}{2}-k\right)!\left(\frac{q-1}{2}-k\right)!\,(2k+1)!} & \text{if } p \text{ and } q \text{ are odd,}\end{cases}$$

where $\rho$ is the correlation coefficient and $t = \lfloor\min(p,q)/2\rfloor$.
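A direct implementation can be cross-checked against the elementary special cases derived earlier, $\operatorname{E}[XY] = \rho$, $\operatorname{E}[X^2Y^2] = 1+2\rho^2$ and $\operatorname{E}[X^4] = 3$; the sketch below reads the upper limit as $t = \lfloor\min(p,q)/2\rfloor$, which is an interpretation of the quoted $t$:

```python
import math

def bvn_moment(p, q, rho):
    """E[X^p Y^q] for standard bivariate normal variables with correlation
    rho, following the formula quoted above with t = floor(min(p, q) / 2)."""
    if (p + q) % 2 == 1:
        return 0.0
    t = min(p, q) // 2
    pref = math.factorial(p) * math.factorial(q) / 2.0 ** ((p + q) // 2)
    if p % 2 == 0:   # p, q both even
        return pref * sum(
            (2 * rho) ** (2 * k)
            / (math.factorial(p // 2 - k) * math.factorial(q // 2 - k)
               * math.factorial(2 * k))
            for k in range(t + 1))
    return pref * sum(   # p, q both odd
        (2 * rho) ** (2 * k + 1)
        / (math.factorial((p - 1) // 2 - k) * math.factorial((q - 1) // 2 - k)
           * math.factorial(2 * k + 1))
        for k in range(t + 1))

rho = 0.3
print(bvn_moment(1, 1, rho),   # ρ
      bvn_moment(2, 2, rho),   # 1 + 2ρ²
      bvn_moment(4, 0, rho))   # E[X⁴] = 3
```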


Correlated non-central normal distributions

The distribution of the product of non-central correlated normal samples was derived by Cui et al.[11] and takes the form of an infinite series.

These product distributions are somewhat comparable to the Wishart distribution. The latter is the joint distribution of the four elements (actually only three independent elements) of a sample covariance matrix. If $x_t, y_t$ are samples from a bivariate time series then $W = \sum_{t=1}^{K}\binom{x_t}{y_t}\binom{x_t}{y_t}^T$ is a Wishart matrix with $K$ degrees of freedom. The product distributions above are the unconditional distribution of the aggregate of $K > 1$ samples of $W_{2,1}$.

Independent complex-valued central-normal distributions

Let $u_1, v_1, u_2, v_2$ be independent samples from a Normal(0,1) distribution. Setting $z_1 = u_1 + iv_1$ and $z_2 = u_2 + iv_2$, then $z_1, z_2$ are independent zero-mean complex normal samples with circular symmetry. Their complex variances are $\operatorname{Var}|z_i| = 2$.

The density functions of

r i | z i | = ( u i 2 + v i 2 ) 1 2 , i = 1 , 2 {\displaystyle r_{i}\equiv |z_{i}|=(u_{i}^{2}+v_{i}^{2})^{\frac {1}{2}},\;\;i=1,2} are Rayleigh distributions defined as:
f r ( r i ) = r i e − r i 2 / 2  of mean  π 2  and variance  4 − π 2 {\displaystyle f_{r}(r_{i})=r_{i}e^{-r_{i}^{2}/2}{\text{ of mean }}{\sqrt {\tfrac {\pi }{2}}}{\text{ and variance }}{\frac {4-\pi }{2}}}

The variable y i r i 2 {\displaystyle y_{i}\equiv r_{i}^{2}} is clearly Chi-squared with two degrees of freedom and has PDF

f y i ( y i ) = 1 2 e y i / 2  of mean value  2 {\displaystyle f_{y_{i}}(y_{i})={\tfrac {1}{2}}e^{-y_{i}/2}{\text{ of mean value }}2}

Wells et al.[13] show that the density function of s | z 1 z 2 | {\displaystyle s\equiv |z_{1}z_{2}|} is

f s ( s ) = s K 0 ( s ) , s 0 {\displaystyle f_{s}(s)=sK_{0}(s),\;\;s\geq 0}

and the cumulative distribution function of s {\displaystyle s} is

P ( a ) = Pr [ s a ] = s = 0 a s K 0 ( s ) d s = 1 a K 1 ( a ) {\displaystyle P(a)=\Pr[s\leq a]=\int _{s=0}^{a}sK_{0}(s)ds=1-aK_{1}(a)}
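The closed-form CDF can be checked against direct numerical integration of the density s K₀(s). A short sketch using SciPy's modified Bessel functions (an illustrative check, not part of the source; the value of a is arbitrary):

```python
import numpy as np
from scipy.special import k0, k1
from scipy.integrate import quad

# Verify that integrating the density s*K0(s) from 0 to a reproduces
# the closed-form CDF 1 - a*K1(a).
a = 2.0
cdf_closed = 1.0 - a * k1(a)
cdf_numeric, _ = quad(lambda s: s * k0(s), 0.0, a)

print(cdf_closed, cdf_numeric)  # both ~0.72 for a = 2
```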

Thus the polar representation of the product of two uncorrelated complex Gaussian samples is

f s , θ ( s , θ ) = f s ( s ) p θ ( θ )  where  p ( θ )  is uniform on  [ 0 , 2 π ] {\displaystyle f_{s,\theta }(s,\theta )=f_{s}(s)p_{\theta }(\theta ){\text{ where }}p(\theta ){\text{ is uniform on }}[0,2\pi ]} .

The first and second moments of this distribution can be found from the integral in Normal Distributions above:

m 1 = ∫ 0 ∞ s 2 K 0 ( s ) d s = 2 Γ 2 ( 3 2 ) = 2 ( π 2 ) 2 = π 2 {\displaystyle m_{1}=\int _{0}^{\infty }s^{2}K_{0}(s)\,ds=2\Gamma ^{2}({\tfrac {3}{2}})=2({\tfrac {\sqrt {\pi }}{2}})^{2}={\frac {\pi }{2}}}
m 2 = ∫ 0 ∞ s 3 K 0 ( s ) d s = 2 2 Γ 2 ( 4 2 ) = 4 {\displaystyle m_{2}=\int _{0}^{\infty }s^{3}K_{0}(s)\,ds=2^{2}\Gamma ^{2}({\tfrac {4}{2}})=4}

Thus its variance is Var ( s ) = m 2 m 1 2 = 4 π 2 4 {\displaystyle \operatorname {Var} (s)=m_{2}-m_{1}^{2}=4-{\frac {\pi ^{2}}{4}}} .

Further, the density of z s 2 = | r 1 r 2 | 2 = | r 1 | 2 | r 2 | 2 = y 1 y 2 {\displaystyle z\equiv s^{2}={|r_{1}r_{2}|}^{2}={|r_{1}|}^{2}{|r_{2}|}^{2}=y_{1}y_{2}} corresponds to the product of two independent Chi-square samples y i {\displaystyle y_{i}} each with two DoF. Writing these as scaled Gamma distributions f y ( y i ) = 1 θ Γ ( 1 ) e y i / θ  with  θ = 2 {\displaystyle f_{y}(y_{i})={\tfrac {1}{\theta \Gamma (1)}}e^{-y_{i}/\theta }{\text{ with }}\theta =2} then, from the Gamma products below, the density of the product is

f Z ( z ) = 1 2 K 0 ( z )  with expectation  E ( z ) = 4 {\displaystyle f_{Z}(z)={\tfrac {1}{2}}K_{0}({\sqrt {z}}){\text{ with expectation }}\operatorname {E} (z)=4}
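This density can be verified numerically: it should integrate to one and return the stated expectation. A brief sketch (illustrative only, not from the source):

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

# f_Z(z) = (1/2) K0(sqrt(z)): density of the product of two independent
# chi-squared samples, each with two degrees of freedom.
f = lambda z: 0.5 * k0(np.sqrt(z))

total, _ = quad(f, 0.0, np.inf)                   # normalization, should be 1
mean, _ = quad(lambda z: z * f(z), 0.0, np.inf)   # should equal E(z) = 4

print(total, mean)
```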

Independent complex-valued noncentral normal distributions

The product of non-central independent complex Gaussians is described by O’Donoughue and Moura[14] and forms a double infinite series of modified Bessel functions of the first and second kinds.

Gamma distributions

The product of two independent Gamma samples, z = x 1 x 2 {\displaystyle z=x_{1}x_{2}} , defining Γ ( x ; k i , θ i ) = x k i 1 e x / θ i Γ ( k i ) θ i k i {\displaystyle \Gamma (x;k_{i},\theta _{i})={\frac {x^{k_{i}-1}e^{-x/\theta _{i}}}{\Gamma (k_{i})\theta _{i}^{k_{i}}}}} , follows[15]

p Z ( z ) = 2 Γ ( k 1 ) Γ ( k 2 ) z k 1 + k 2 2 1 ( θ 1 θ 2 ) k 1 + k 2 2 K k 1 k 2 ( 2 z θ 1 θ 2 ) = 2 Γ ( k 1 ) Γ ( k 2 ) y k 1 + k 2 2 1 θ 1 θ 2 K k 1 k 2 ( 2 y )  where  y = z θ 1 θ 2 {\displaystyle {\begin{aligned}p_{Z}(z)&={\frac {2}{\Gamma (k_{1})\Gamma (k_{2})}}{\frac {z^{{\frac {k_{1}+k_{2}}{2}}-1}}{(\theta _{1}\theta _{2})^{\frac {k_{1}+k_{2}}{2}}}}K_{k_{1}-k_{2}}\left(2{\sqrt {\frac {z}{\theta _{1}\theta _{2}}}}\right)\\\\&={\frac {2}{\Gamma (k_{1})\Gamma (k_{2})}}{\frac {y^{{\frac {k_{1}+k_{2}}{2}}-1}}{\theta _{1}\theta _{2}}}K_{k_{1}-k_{2}}\left(2{\sqrt {y}}\right){\text{ where }}y={\frac {z}{\theta _{1}\theta _{2}}}\\\end{aligned}}}
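As a numerical sanity check (an illustrative sketch; the shape and scale values below are arbitrary), the density should integrate to one and yield the product mean E[Z] = k₁θ₁ · k₂θ₂, since Z is a product of independent Gamma variables:

```python
import numpy as np
from scipy.special import gamma, kv
from scipy.integrate import quad

# Gamma-product density p_Z(z) with arbitrary illustrative parameters.
k1, k2, th1, th2 = 2.0, 3.0, 1.5, 0.8

def p_z(z):
    s = (k1 + k2) / 2.0
    c = 2.0 / (gamma(k1) * gamma(k2) * (th1 * th2) ** s)
    return c * z ** (s - 1.0) * kv(k1 - k2, 2.0 * np.sqrt(z / (th1 * th2)))

total, _ = quad(p_z, 0.0, np.inf)                   # normalization, should be 1
mean, _ = quad(lambda z: z * p_z(z), 0.0, np.inf)   # E[Z] = k1*th1*k2*th2 = 7.2

print(total, mean)
```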

Beta distributions

Nagar et al.[16] define a correlated bivariate beta distribution

f ( x , y ) = x a 1 y b 1 ( 1 x ) b + c 1 ( 1 y ) a + c 1 B ( a , b , c ) ( 1 x y ) a + b + c , 0 < x , y < 1 {\displaystyle f(x,y)={\frac {x^{a-1}y^{b-1}(1-x)^{b+c-1}(1-y)^{a+c-1}}{B(a,b,c)(1-xy)^{a+b+c}}},\;\;\;0<x,y<1}

where

B ( a , b , c ) = Γ ( a ) Γ ( b ) Γ ( c ) Γ ( a + b + c ) {\displaystyle B(a,b,c)={\frac {\Gamma (a)\Gamma (b)\Gamma (c)}{\Gamma (a+b+c)}}}

Then the pdf of Z = XY is given by

f Z ( z ) = B ( a + c , b + c ) z a 1 ( 1 z ) c 1 B ( a , b , c ) 2 F 1 ( a + c , a + c ; a + b + 2 c ; 1 z ) , 0 < z < 1 {\displaystyle f_{Z}(z)={\frac {B(a+c,b+c)z^{a-1}(1-z)^{c-1}}{B(a,b,c)}}{_{2}F_{1}}(a+c,a+c;a+b+2c;1-z),\;\;\;0<z<1}

where 2 F 1 {\displaystyle {_{2}F_{1}}} is the Gauss hypergeometric function defined by the Euler integral

2 F 1 ( a , b , c , z ) = Γ ( c ) Γ ( a ) Γ ( c a ) 0 1 v a 1 ( 1 v ) c a 1 ( 1 v z ) b d v {\displaystyle {_{2}F_{1}}(a,b,c,z)={\frac {\Gamma (c)}{\Gamma (a)\Gamma (c-a)}}\int _{0}^{1}v^{a-1}(1-v)^{c-a-1}(1-vz)^{-b}\,dv}
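Since 2F1 is available in SciPy, the pdf of Z can be checked to integrate to one over (0, 1); a sketch with arbitrary illustrative parameter values (not taken from the cited paper):

```python
import numpy as np
from scipy.special import gamma as G, hyp2f1
from scipy.integrate import quad

a, b, c = 2.0, 3.0, 1.5  # arbitrary illustrative values

# Three-parameter normalizer B(a, b, c) = Gamma(a)Gamma(b)Gamma(c)/Gamma(a+b+c),
# and B(a+c, b+c) in the numerator is the ordinary two-argument beta function.
B3 = G(a) * G(b) * G(c) / G(a + b + c)

def f_z(z):
    coef = (G(a + c) * G(b + c) / G(a + b + 2 * c)) / B3
    return (coef * z ** (a - 1) * (1 - z) ** (c - 1)
            * hyp2f1(a + c, a + c, a + b + 2 * c, 1 - z))

total, _ = quad(f_z, 0.0, 1.0)
print(total)  # should be ~1
```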

Note that multivariate distributions are not generally unique, apart from the Gaussian case, and there may be alternatives.

Uniform and gamma distributions

The distribution of the product of a random variable having a uniform distribution on (0,1) with a random variable having a gamma distribution with shape parameter equal to 2, is an exponential distribution.[17] A more general case of this concerns the distribution of the product of a random variable having a beta distribution with a random variable having a gamma distribution: for some cases where the parameters of the two component distributions are related in a certain way, the result is again a gamma distribution but with a changed shape parameter.[17]
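The uniform-times-gamma result is easy to confirm by simulation; the following sketch (illustrative only; seed and sample size arbitrary) runs a Kolmogorov–Smirnov comparison against the standard exponential distribution:

```python
import numpy as np
from scipy import stats

# U(0,1) times Gamma(shape=2, scale=1) should follow Exp(1).
rng = np.random.default_rng(42)
n = 200_000
z = rng.uniform(size=n) * rng.gamma(shape=2.0, scale=1.0, size=n)

# KS statistic against the standard exponential; a small value (on the
# order of 1/sqrt(n)) supports the claimed distribution.
d, p = stats.kstest(z, "expon")
print(d, p)
```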

The K-distribution is an example of a non-standard distribution that can be defined as a product distribution (where both components have a gamma distribution).

Gamma and Pareto distributions

The product of n Gamma and m Pareto independent samples was derived by Nadarajah.[18]

Notes

  1. ^ Springer, Melvin Dale (1979). The Algebra of Random Variables. Wiley. ISBN 978-0-471-01406-5. Retrieved 24 September 2012.
  2. ^ Rohatgi, V. K. (1976). An Introduction to Probability Theory and Mathematical Statistics. Wiley Series in Probability and Statistics. New York: Wiley. doi:10.1002/9781118165676. ISBN 978-0-19-853185-2.
  3. ^ Grimmett, G. R.; Stirzaker, D.R. (2001). Probability and Random Processes. Oxford: Oxford University Press. ISBN 978-0-19-857222-0. Retrieved 4 October 2015.
  4. ^ Goodman, Leo A. (1960). "On the Exact Variance of Products". Journal of the American Statistical Association. 55 (292): 708–713. doi:10.2307/2281592. JSTOR 2281592.
  5. ^ Sarwate, Dilip (March 9, 2013). "Variance of product of multiple random variables". Stack Exchange.
  6. ^ "How to find characteristic function of product of random variables". Stack Exchange. January 3, 2013.
  7. ^ heropup (1 February 2014). "product distribution of two uniform distribution, what about 3 or more". Stack Exchange.
  8. ^ Gradshteyn, I S; Ryzhik, I M (1980). Tables of Integrals, Series and Products. Academic Press. Section 6.561.
  9. ^ Nadarajah, Saralees; Pogány, Tibor (2015). "On the distribution of the product of correlated normal random variables". Comptes Rendus de l'Académie des Sciences, Série I. 354 (2): 201–204. doi:10.1016/j.crma.2015.10.019.
  10. ^ "Digital Library of Mathematical Functions", Eq. (13.18.9). NIST: National Institute of Standards and Technology.
  11. ^ a b Cui, Guolong (2016). "Exact Distribution for the Product of Two Correlated Gaussian Random Variables". IEEE Signal Processing Letters. 23 (11): 1662–1666. Bibcode:2016ISPL...23.1662C. doi:10.1109/LSP.2016.2614539. S2CID 15721509.
  12. ^ Kan, Raymond (2008). "From moments of sum to moments of product". Journal of Multivariate Analysis. 99 (3): 542–554. doi:10.1016/j.jmva.2007.01.013.
  13. ^ Wells, R T; Anderson, R L; Cell, J W (1962). "The Distribution of the Product of Two Central or Non-Central Chi-Square Variates". The Annals of Mathematical Statistics. 33 (3): 1016–1020. doi:10.1214/aoms/1177704469.
  14. ^ O’Donoughue, N; Moura, J M F (March 2012). "On the Product of Independent Complex Gaussians". IEEE Transactions on Signal Processing. 60 (3): 1050–1063. Bibcode:2012ITSP...60.1050O. doi:10.1109/TSP.2011.2177264. S2CID 1069298.
  15. ^ Wolfies (August 2017). "PDF of the product of two independent Gamma random variables". stackexchange.
  16. ^ Nagar, D K; Orozco-Castañeda, J M; Gupta, A K (2009). "Product and quotient of correlated beta variables". Applied Mathematics Letters. 22: 105–109. doi:10.1016/j.aml.2008.02.014.
  17. ^ a b Johnson, Norman L.; Kotz, Samuel; Balakrishnan, N. (1995). Continuous Univariate Distributions Volume 2, Second edition. Wiley. p. 306. ISBN 978-0-471-58494-0. Retrieved 24 September 2012.
  18. ^ Nadarajah, Saralees (June 2011). "Exact distribution of the product of n gamma and m Pareto random variables". Journal of Computational and Applied Mathematics. 235 (15): 4496–4512. doi:10.1016/j.cam.2011.04.018.
