U-statistic

Class of statistics in estimation theory

In statistical theory, a U-statistic is a class of statistics defined as the average of a given function applied to all tuples of a fixed size drawn from the observations. The letter "U" stands for unbiased. In elementary statistics, U-statistics arise naturally in producing minimum-variance unbiased estimators.

The theory of U-statistics allows a minimum-variance unbiased estimator to be derived from each unbiased estimator of an estimable parameter (alternatively, statistical functional) for large classes of probability distributions.[1][2] An estimable parameter is a measurable function of the population's cumulative probability distribution: For example, for every probability distribution, the population median is an estimable parameter. The theory of U-statistics applies to general classes of probability distributions.

History

Many statistics originally derived for particular parametric families have been recognized as U-statistics for general distributions. In non-parametric statistics, the theory of U-statistics is used to establish, for statistical procedures such as estimators and tests, results concerning their asymptotic normality and their variance in finite samples.[3] The theory has been used to study more general statistics as well as stochastic processes, such as random graphs.[4][5][6]

Suppose that a problem involves independent and identically-distributed random variables and that estimation of a certain parameter is required. Suppose that a simple unbiased estimate can be constructed based on only a few observations: this defines the basic estimator based on a given number of observations. For example, a single observation is itself an unbiased estimate of the mean and a pair of observations can be used to derive an unbiased estimate of the variance. The U-statistic based on this estimator is defined as the average (across all combinatorial selections of the given size from the full set of observations) of the basic estimator applied to the sub-samples.
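As a concrete illustration of this construction, consider the two-observation kernel $(x_{1}-x_{2})^{2}/2$, which is unbiased for the variance; averaging it over all pairs of observations gives the corresponding U-statistic, which coincides with the usual sample variance with divisor $n-1$. The following Python sketch (function names and data are illustrative, not from the literature) checks this numerically.

```python
# Illustrative sketch: build a U-statistic from a basic unbiased estimator.
from itertools import combinations
from statistics import variance

def pair_variance_u_statistic(xs):
    # Average the two-observation kernel (x1 - x2)^2 / 2 over all pairs.
    pairs = list(combinations(xs, 2))
    return sum((a - b) ** 2 / 2 for a, b in pairs) / len(pairs)

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(pair_variance_u_statistic(xs))  # agrees with the line below
print(variance(xs))                   # the usual sample variance with divisor n - 1
```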

Pranab K. Sen (1992) provides a review of the paper by Wassily Hoeffding (1948), which introduced U-statistics and set out the theory relating to them, and in doing so Sen outlines the importance U-statistics have in statistical theory. Sen says,[7] “The impact of Hoeffding (1948) is overwhelming at the present time and is very likely to continue in the years to come.” Note that the theory of U-statistics is not limited to[8] the case of independent and identically-distributed random variables or to scalar random variables.[9]

Definition

The term U-statistic, due to Hoeffding (1948), is defined as follows.

Let $K$ be either the real or complex numbers, and let $f\colon (K^{d})^{r}\to K$ be a $K$-valued function of $r$ $d$-dimensional variables. For each $n\geq r$ the associated U-statistic $f_{n}\colon (K^{d})^{n}\to K$ is defined to be the average of the values $f(x_{i_{1}},\dotsc ,x_{i_{r}})$ over the set $I_{r,n}$ of $r$-tuples of indices from $\{1,2,\dotsc ,n\}$ with distinct entries. Formally,

$$f_{n}(x_{1},\dotsc ,x_{n})=\frac{1}{\prod_{i=0}^{r-1}(n-i)}\sum_{(i_{1},\dotsc ,i_{r})\in I_{r,n}}f(x_{i_{1}},\dotsc ,x_{i_{r}}).$$

In particular, if $f$ is symmetric the above simplifies to

$$f_{n}(x_{1},\dotsc ,x_{n})=\frac{1}{\binom{n}{r}}\sum_{(i_{1},\dotsc ,i_{r})\in J_{r,n}}f(x_{i_{1}},\dotsc ,x_{i_{r}}),$$

where $J_{r,n}$ denotes the subset of $I_{r,n}$ consisting of increasing tuples.

Each U-statistic $f_{n}$ is necessarily a symmetric function.
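A minimal Python sketch of the two forms above, assuming the kernel $f$ is an ordinary callable on $r$ scalar arguments (the function names are illustrative only): the general form averages $f$ over all ordered $r$-tuples of distinct observations, while the symmetric form averages over the $\binom{n}{r}$ increasing tuples.

```python
# Sketch of the U-statistic definition; names are illustrative only.
from itertools import combinations, permutations

def u_statistic(xs, f, r):
    # General form: average f over all ordered r-tuples of distinct indices
    # (there are n(n-1)...(n-r+1) of them).
    tuples = list(permutations(xs, r))
    return sum(f(*t) for t in tuples) / len(tuples)

def u_statistic_symmetric(xs, f, r):
    # Symmetric kernel: averaging over the C(n, r) increasing tuples suffices.
    tuples = list(combinations(xs, r))
    return sum(f(*t) for t in tuples) / len(tuples)

xs = [3.0, 1.0, 4.0, 1.0, 5.0]
f = lambda a, b: abs(a - b)             # a symmetric kernel of r = 2 arguments
print(u_statistic(xs, f, 2))            # the two forms give the same value
print(u_statistic_symmetric(xs, f, 2))  # because f is symmetric
```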

U-statistics are very natural in statistical work, particularly in Hoeffding's context of independent and identically distributed random variables, or more generally for exchangeable sequences, such as in simple random sampling from a finite population, where the defining property is termed ‘inheritance on the average’.

Fisher's k-statistics and Tukey's polykays are examples of homogeneous polynomial U-statistics (Fisher, 1929; Tukey, 1950).

For a simple random sample $\varphi$ of size $n$ taken from a population of size $N$, the U-statistic has the property that the average of $f_{n}(x_{\varphi })$ over all such samples is exactly equal to the population value $f_{N}(x)$.
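The following Python sketch (a toy check under illustrative names and data, not from the literature) verifies this ‘inheritance on the average’ property numerically: the average of the symmetric U-statistic over all $\binom{N}{n}$ simple random samples equals its value on the whole population.

```python
# Toy numerical check of inheritance on the average.
from itertools import combinations

def u_stat(xs, kernel, r):
    # Symmetric form: average the kernel over all r-element subsets of xs.
    subsets = list(combinations(xs, r))
    return sum(kernel(*s) for s in subsets) / len(subsets)

population = [1.0, 2.0, 4.0, 7.0, 11.0]       # toy population, N = 5
kernel = lambda a, b: (a - b) ** 2 / 2        # pair kernel for the variance
n = 3                                         # sample size

samples = list(combinations(population, n))   # all simple random samples of size n
avg_over_samples = sum(u_stat(s, kernel, 2) for s in samples) / len(samples)
population_value = u_stat(population, kernel, 2)

# The two values agree (up to floating-point rounding).
print(avg_over_samples, population_value)
```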

Examples

Some examples: If $f(x)=x$, the U-statistic $f_{n}(x)={\bar {x}}_{n}=(x_{1}+\cdots +x_{n})/n$ is the sample mean.

If $f(x_{1},x_{2})=|x_{1}-x_{2}|$, the U-statistic is the mean pairwise deviation $f_{n}(x_{1},\ldots ,x_{n})=\frac{2}{n(n-1)}\sum_{i>j}|x_{i}-x_{j}|$, defined for $n\geq 2$.

If $f(x_{1},x_{2})=(x_{1}-x_{2})^{2}/2$, the U-statistic is the sample variance $f_{n}(x)=\sum (x_{i}-{\bar {x}}_{n})^{2}/(n-1)$ with divisor $n-1$, defined for $n\geq 2$.

The third $k$-statistic $k_{3,n}(x)=\frac{n}{(n-1)(n-2)}\sum (x_{i}-{\bar {x}}_{n})^{3}$, the sample skewness defined for $n\geq 3$, is a U-statistic.

The following case highlights an important point. If $f(x_{1},x_{2},x_{3})$ is the median of three values, $f_{n}(x_{1},\ldots ,x_{n})$ is not the median of $n$ values. However, it is a minimum-variance unbiased estimate of the expected value of the median of three values, not of the population median. Similar estimates play a central role where the parameters of a family of probability distributions are being estimated by probability weighted moments or L-moments.
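A short Python sketch (illustrative names and data) of this distinction: with the median-of-three kernel, the U-statistic generally differs from the sample median of the same observations.

```python
# Median-of-three kernel versus the sample median.
from itertools import combinations
from statistics import median

def median3_u_statistic(xs):
    # Average the median of each 3-element subset of the data.
    triples = list(combinations(xs, 3))
    return sum(sorted(t)[1] for t in triples) / len(triples)

xs = [0.0, 1.0, 2.0, 7.0, 100.0]
print(median3_u_statistic(xs))   # prints 3.2, an estimate of E[median of three draws]
print(median(xs))                # prints 2.0, the sample median of the same data
```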

Notes

  1. ^ Cox & Hinkley (1974), p. 200, p. 258
  2. ^ Hoeffding (1948), between Eqs. (4.3) and (4.4)
  3. ^ Sen (1992)
  4. ^ Page 508 in Koroljuk, V. S.; Borovskich, Yu. V. (1994). Theory of U-statistics. Mathematics and its Applications. Vol. 273 (Translated by P. V. Malyshev and D. V. Malyshev from the 1989 Russian original ed.). Dordrecht: Kluwer Academic Publishers Group. pp. x+552. ISBN 0-7923-2608-3. MR 1472486.
  5. ^ Pages 381–382 in Borovskikh, Yu. V. (1996). U-statistics in Banach spaces. Utrecht: VSP. pp. xii+420. ISBN 90-6764-200-2. MR 1419498.
  6. ^ Page xii in Kwapień, Stanisƚaw; Woyczyński, Wojbor A. (1992). Random series and stochastic integrals: Single and multiple. Probability and its Applications. Boston, MA: Birkhäuser Boston, Inc. pp. xvi+360. ISBN 0-8176-3572-6. MR 1167198.
  7. ^ Sen (1992) p. 307
  8. ^ Sen (1992), p. 306
  9. ^ Borovskikh's last chapter discusses U-statistics for exchangeable random elements taking values in a vector space (separable Banach space).

References

  • Borovskikh, Yu. V. (1996). U-statistics in Banach spaces. Utrecht: VSP. pp. xii+420. ISBN 90-6764-200-2. MR 1419498.
  • Cox, D. R., Hinkley, D. V. (1974) Theoretical statistics. Chapman and Hall. ISBN 0-412-12420-3
  • Fisher, R. A. (1929) Moments and product moments of sampling distributions. Proceedings of the London Mathematical Society, Series 2, 30: 199–238.
  • Hoeffding, W. (1948) A class of statistics with asymptotically normal distribution. Annals of Mathematical Statistics, 19: 293–325. (Partially reprinted in: Kotz, S., Johnson, N. L. (1992) Breakthroughs in Statistics, Vol I, pp 308–334. Springer-Verlag. ISBN 0-387-94037-5)
  • Koroljuk, V. S.; Borovskich, Yu. V. (1994). Theory of U-statistics. Mathematics and its Applications. Vol. 273 (Translated by P. V. Malyshev and D. V. Malyshev from the 1989 Russian original ed.). Dordrecht: Kluwer Academic Publishers Group. pp. x+552. ISBN 0-7923-2608-3. MR 1472486.
  • Lee, A. J. (1990) U-Statistics: Theory and Practice. Marcel Dekker, New York. pp. 320. ISBN 0-8247-8253-4
  • Sen, P. K. (1992) Introduction to Hoeffding (1948) A Class of Statistics with Asymptotically Normal Distribution. In: Kotz, S., Johnson, N. L. Breakthroughs in Statistics, Vol I, pp 299–307. Springer-Verlag. ISBN 0-387-94037-5.
  • Serfling, Robert J. (1980). Approximation theorems of mathematical statistics. New York: John Wiley and Sons. ISBN 0-471-02403-1.
  • Tukey, J. W. (1950). "Some Sampling Simplified". Journal of the American Statistical Association. 45 (252): 501–519. doi:10.1080/01621459.1950.10501142.
  • Halmos, P. (1946). "The Theory of Unbiased Estimation". Annals of Mathematical Statistics. 17 (1): 34–43. doi:10.1214/aoms/1177731020.