Popoviciu's inequality on variances

In probability theory, Popoviciu's inequality, named after Tiberiu Popoviciu, is an upper bound on the variance σ² of any bounded probability distribution. Let M and m be upper and lower bounds on the values of a random variable with a particular probability distribution. Then Popoviciu's inequality states:[1]

\[ \sigma^2 \leq \frac{1}{4}(M - m)^2. \]

Equality holds precisely when half of the probability is concentrated at each of the two bounds.
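As a concrete illustration, the sketch below (a minimal numerical check in Python, with arbitrarily chosen bounds m = 2 and M = 10) compares Monte Carlo estimates of the variance with the Popoviciu bound; the symmetric two-point distribution attains it, while a uniform distribution stays well below it.

```python
import numpy as np

rng = np.random.default_rng(0)
m, M = 2.0, 10.0                    # arbitrary bounds chosen for illustration
bound = (M - m) ** 2 / 4            # Popoviciu's upper bound

# Symmetric two-point distribution: half of the mass at each bound.
two_point = rng.choice([m, M], size=1_000_000)
# A distribution supported strictly inside the bounds, for comparison.
uniform = rng.uniform(m, M, size=1_000_000)

for name, sample in (("two-point", two_point), ("uniform", uniform)):
    print(f"{name:9s}: variance = {sample.var():.4f}  (bound = {bound:.4f})")
# The two-point sample attains the bound up to sampling error ((M - m)^2/4 = 16);
# the uniform sample stays well below it, near (M - m)^2/12.
```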

Sharma et al. have sharpened Popoviciu's inequality:[2]

\[ \sigma^2 + \left( \frac{\text{third central moment}}{2\sigma^2} \right)^2 \leq \frac{1}{4}(M - m)^2. \]
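This sharper bound can be checked the same way. The following sketch (again only an illustration, using an arbitrarily chosen asymmetric two-point distribution with weight p = 0.3) estimates the variance and third central moment from a large sample and compares the left-hand side with (M − m)²/4.

```python
import numpy as np

rng = np.random.default_rng(1)
m, M, p = 0.0, 6.0, 0.3              # arbitrary bounds and weight for illustration

# Asymmetric two-point distribution: mass p at m and mass 1 - p at M.
sample = rng.choice([m, M], size=2_000_000, p=[p, 1 - p])

mu = sample.mean()
var = sample.var()                   # sigma^2
mu3 = np.mean((sample - mu) ** 3)    # third central moment

lhs = var + (mu3 / (2 * var)) ** 2   # left-hand side of the sharpened bound
rhs = (M - m) ** 2 / 4               # Popoviciu's bound
print(f"sigma^2 = {var:.4f}, sharpened LHS = {lhs:.4f}, (M - m)^2/4 = {rhs:.4f}")
# Here sigma^2 alone (about 7.56) is well below the bound (9), while the
# sharpened left-hand side comes out at the bound, showing it is tight.
```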

If one additionally assumes knowledge of the expectation, then the stronger Bhatia–Davis inequality holds

\[ \sigma^2 \leq (M - \mu)(\mu - m), \]

where μ is the expectation of the random variable.[3]
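The following sketch (an illustration with an arbitrarily chosen Beta(2, 5) distribution rescaled to [m, M]) shows the three quantities side by side: the variance, the Bhatia–Davis bound computed from the mean, and the weaker Popoviciu bound.

```python
import numpy as np

rng = np.random.default_rng(2)
m, M = 0.0, 1.0                      # bounds; Beta(2, 5) is an arbitrary example
sample = m + (M - m) * rng.beta(2, 5, size=1_000_000)

mu = sample.mean()
var = sample.var()
bhatia_davis = (M - mu) * (mu - m)   # bound that uses the known mean
popoviciu = (M - m) ** 2 / 4         # bound that uses the range only

print(f"sigma^2 = {var:.4f} <= (M - mu)(mu - m) = {bhatia_davis:.4f} "
      f"<= (M - m)^2/4 = {popoviciu:.4f}")
```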

In the case of a sample of n observations from a bounded probability distribution, the von Szőkefalvi Nagy inequality[4] gives a lower bound on the variance of the sample in terms of its range, with M and m here denoting the largest and smallest observations:

\[ \sigma^2 \geq \frac{(M - m)^2}{2n}. \]
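This finite-sample bound is easy to check directly; the sketch below (an illustration using an arbitrary sample of n = 20 values) compares the variance of the sample, computed with divisor n, against the bound obtained from its range.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
sample = rng.uniform(2.0, 10.0, size=n)      # arbitrary sample of n observations

var = sample.var()                           # variance with divisor n
sample_range = sample.max() - sample.min()
nagy_bound = sample_range ** 2 / (2 * n)

print(f"variance = {var:.4f} >= range^2/(2n) = {nagy_bound:.4f}")
```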

Proof via the Bhatia–Davis inequality

Let A be a random variable with mean μ, variance σ², and Pr(m ≤ A ≤ M) = 1. Then, since m ≤ A ≤ M,

\[ 0 \leq \mathbb{E}[(M - A)(A - m)] = -\mathbb{E}[A^2] - mM + (m + M)\mu. \]

Thus,

\[ \sigma^2 = \mathbb{E}[A^2] - \mu^2 \leq -mM + (m + M)\mu - \mu^2 = (M - \mu)(\mu - m). \]
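The two algebraic identities used so far, the expansion of (M − A)(A − m) and the factorization of the resulting upper bound, can be verified symbolically; the following sketch (a check of the algebra only, not a substitute for the argument above) uses sympy.

```python
import sympy as sp

A, m, M, mu = sp.symbols('A m M mu', real=True)

# (M - A)(A - m) = -A^2 + (m + M)A - mM; taking expectations term by term
# (with E[A] = mu) gives E[(M - A)(A - m)] = -E[A^2] - mM + (m + M)mu.
assert sp.expand((M - A) * (A - m) - (-A**2 + (m + M) * A - m * M)) == 0

# The upper bound obtained for sigma^2 factors as (M - mu)(mu - m).
assert sp.expand(-m * M + (m + M) * mu - mu**2 - (M - mu) * (mu - m)) == 0
print("identities verified")
```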

Now, applying the inequality of arithmetic and geometric means, ab ≤ ((a + b)/2)², with a = M − μ and b = μ − m, yields the desired result:

\[ \sigma^2 \leq (M - \mu)(\mu - m) \leq \frac{(M - m)^2}{4}. \]
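The final step can be checked the same way: the gap in the arithmetic-geometric mean inequality is a perfect square, and the two factors sum to M − m. A short symbolic sketch (again only a check of the algebra):

```python
import sympy as sp

M, m, mu = sp.symbols('M m mu', real=True)
a, b = M - mu, mu - m                # the two factors; note a + b = M - m

# The AM-GM gap ((a + b)/2)^2 - ab equals ((a - b)/2)^2, a square, hence >= 0.
assert sp.expand(((a + b) / 2)**2 - a * b - ((a - b) / 2)**2) == 0

# And (a + b)/2 = (M - m)/2, so (M - mu)(mu - m) <= (M - m)^2 / 4.
assert sp.expand((a + b) / 2 - (M - m) / 2) == 0
print("AM-GM step verified")
```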

References

  1. ^ Popoviciu, T. (1935). "Sur les équations algébriques ayant toutes leurs racines réelles". Mathematica (Cluj). 9: 129–145.
  2. ^ Sharma, R.; Gupta, M.; Kapoor, G. (2010). "Some better bounds on the variance with applications". Journal of Mathematical Inequalities. 4 (3): 355–363. doi:10.7153/jmi-04-32.
  3. ^ Bhatia, Rajendra; Davis, Chandler (April 2000). "A Better Bound on the Variance". American Mathematical Monthly. 107 (4). Mathematical Association of America: 353–357. doi:10.2307/2589180. ISSN 0002-9890. JSTOR 2589180.
  4. ^ Nagy, Julius (1918). "Über algebraische Gleichungen mit lauter reellen Wurzeln". Jahresbericht der Deutschen Mathematiker-Vereinigung. 27: 37–43.

