Kendall rank correlation coefficient


In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities. A τ test is a non-parametric hypothesis test for statistical dependence based on the τ coefficient. It is a measure of rank correlation: the similarity of the orderings of the data when ranked by each of the quantities. It is named after Maurice Kendall, who developed it in 1938,[1] though Gustav Fechner had proposed a similar measure in the context of time series in 1897.[2]

Intuitively, the Kendall correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully different for a correlation of −1) rank between the two variables.

Both Kendall's $\tau$ and Spearman's $\rho$ can be formulated as special cases of a more general correlation coefficient. The notions of concordance and discordance underlying Kendall's $\tau$ also appear in other areas of statistics, such as the Rand index in cluster analysis.

Definition

All points in the gray area are concordant and all points in the white area are discordant with respect to the point $(X_1, Y_1)$. With $n = 30$ points, there are a total of $\binom{30}{2} = 435$ possible point pairs. In this example there are 395 concordant point pairs and 40 discordant point pairs, leading to a Kendall rank correlation coefficient of 0.816.

Let $(x_1, y_1), \ldots, (x_n, y_n)$ be a set of observations of the joint random variables X and Y, such that all the values of $x_i$ and $y_i$ are unique (ties are neglected for simplicity). Any pair of observations $(x_i, y_i)$ and $(x_j, y_j)$, where $i < j$, are said to be concordant if the sort order of $(x_i, x_j)$ and $(y_i, y_j)$ agrees: that is, if either both $x_i > x_j$ and $y_i > y_j$ hold or both $x_i < x_j$ and $y_i < y_j$ hold; otherwise they are said to be discordant.

The Kendall τ coefficient is defined as:

$$\tau = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\text{number of pairs}} = 1 - \frac{2\,(\text{number of discordant pairs})}{\binom{n}{2}}.$$ [3]

where $\binom{n}{2} = \frac{n(n-1)}{2}$ is the binomial coefficient for the number of ways to choose two items from n items.

The number of discordant pairs is equal to the number of inversions of the permutation that maps the y-sequence into the same order as the x-sequence.
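A minimal sketch of this definition in Python (the function name and the O(n²) pair enumeration are illustrative only, not a reference implementation):

from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau by direct pair counting, per the definition above.
    Assumes all x values and all y values are distinct (no ties)."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        # The pair (i, j) is concordant when the two sort orders agree.
        if (x[i] - x[j]) * (y[i] - y[j]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [3, 1, 4, 2]))  # 0.0: 3 concordant, 3 discordant pairs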

Properties

The denominator is the total number of pair combinations, so the coefficient must be in the range −1 ≤ τ ≤ 1.

  • If the agreement between the two rankings is perfect (i.e., the two rankings are the same) the coefficient has value 1.
  • If the disagreement between the two rankings is perfect (i.e., one ranking is the reverse of the other) the coefficient has value −1.
  • If X and Y are independent and not constant, then the expectation of the coefficient is zero.
  • An explicit expression for Kendall's rank coefficient is $\tau = \frac{2}{n(n-1)} \sum_{i<j} \operatorname{sgn}(x_i - x_j)\operatorname{sgn}(y_i - y_j)$ (see the sketch below).
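The explicit sgn expression translates directly into vectorized code; a sketch assuming NumPy, with illustrative names:

import numpy as np

def kendall_tau_sgn(x, y):
    """Kendall's tau via the explicit sgn formula (no ties assumed)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    # Outer differences give sgn(x_i - x_j) for every pair; summing the
    # upper triangle restricts to i < j, matching the formula above.
    sx = np.sign(np.subtract.outer(x, x))
    sy = np.sign(np.subtract.outer(y, y))
    iu = np.triu_indices(n, k=1)
    return 2.0 * np.sum(sx[iu] * sy[iu]) / (n * (n - 1))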

Hypothesis test

The Kendall rank coefficient is often used as a test statistic in a statistical hypothesis test to establish whether two variables may be regarded as statistically dependent. This test is non-parametric, as it does not rely on any assumptions on the distributions of X or Y or the distribution of (X,Y).

Under the null hypothesis of independence of X and Y, the sampling distribution of τ has an expected value of zero. The precise distribution cannot be characterized in terms of common distributions, but may be calculated exactly for small samples; for larger samples, it is common to use an approximation to the normal distribution, with mean zero and variance $\frac{2(2n+5)}{9n(n-1)}$.[4]

The following proof is from Valz & McLeod (1990;[5] 1995[6]).

Proof

WLOG, we reorder the data pairs so that $x_1 < x_2 < \cdots < x_n$. By the assumption of independence, the order of $y_1, \ldots, y_n$ is a permutation sampled uniformly at random from $S_n$, the permutation group on $1:n$.

Each permutation has a unique inversion code $l_0 l_1 \cdots l_{n-1}$ such that each $l_i$ is in the range $0:i$. Sampling a permutation uniformly is equivalent to sampling its inversion code uniformly, which is equivalent to sampling each $l_i$ uniformly and independently.

Then we have

$$\begin{aligned}E[\tau_A^2] &= E\left[\left(1 - \frac{4\sum_i l_i}{n(n-1)}\right)^2\right] \\ &= 1 - \frac{8}{n(n-1)}\sum_i E[l_i] + \frac{16}{n^2(n-1)^2}\sum_{ij}E[l_i l_j] \\ &= 1 - \frac{8}{n(n-1)}\sum_i E[l_i] + \frac{16}{n^2(n-1)^2}\left(\sum_{ij}E[l_i]E[l_j] + \sum_i V[l_i]\right) \\ &= 1 - \frac{8}{n(n-1)}\sum_i E[l_i] + \frac{16}{n^2(n-1)^2}\sum_{ij}E[l_i]E[l_j] + \frac{16}{n^2(n-1)^2}\sum_i V[l_i] \\ &= \left(1 - \frac{4\sum_i E[l_i]}{n(n-1)}\right)^2 + \frac{16}{n^2(n-1)^2}\sum_i V[l_i]\end{aligned}$$

The first term is just $E[\tau_A]^2 = 0$. The second term can be calculated by noting that $l_i$ is a uniform random variable on $0:i$, so $E[l_i] = \frac{i}{2}$ and $E[l_i^2] = \frac{0^2 + \cdots + i^2}{i+1} = \frac{i(2i+1)}{6}$, then using the sum of squares formula again.
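A quick numerical check of the resulting variance formula, sampling inversion codes uniformly as in the proof (a sketch; the sample size and repetition count are arbitrary choices):

import random

def sample_tau_under_null(n):
    """Draw tau_A under independence via a uniform inversion code:
    l_i ~ Uniform{0, ..., i}, and sum(l_i) = number of discordant pairs."""
    n_d = sum(random.randint(0, i) for i in range(n))
    return 1 - 4 * n_d / (n * (n - 1))

n, reps = 10, 200_000
samples = [sample_tau_under_null(n) for _ in range(reps)]
var_mc = sum(t * t for t in samples) / reps  # E[tau_A] = 0 under the null
var_exact = 2 * (2 * n + 5) / (9 * n * (n - 1))
print(var_mc, var_exact)  # both close to 0.0617 for n = 10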

Asymptotic normality — In the $n \to \infty$ limit, $z_A = \frac{\tau_A}{\sqrt{\operatorname{Var}[\tau_A]}} = \frac{n_C - n_D}{\sqrt{n(n-1)(2n+5)/18}}$ converges in distribution to the standard normal distribution.

Proof

This follows from a result of Hoeffding (1948) on the class of statistics with asymptotically normal distribution.[7]

Case of standard normal distributions

If $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ are IID samples from the same jointly normal distribution with a known Pearson correlation coefficient $r$, then the expectation of the Kendall rank correlation has a closed-form formula.[8]

Greiner's equality — If $X, Y$ are jointly normal with correlation $r$, then

$$r = \sin\left(\frac{\pi}{2} E[\tau_A]\right)$$

The name is credited to Richard Greiner (1909)[9] by P. A. P. Moran.[10]
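A simulation sketch of Greiner's equality (assuming NumPy and SciPy; the correlation, sample sizes, and seed are arbitrary choices):

import numpy as np
from scipy import stats

# Monte Carlo check of r = sin((pi/2) * E[tau_A]) for bivariate normals.
rng = np.random.default_rng(0)
r = 0.6
cov = [[1.0, r], [r, 1.0]]
taus = []
for _ in range(500):
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=200)
    # Continuous samples have no ties, so tau-b equals tau-a here.
    taus.append(stats.kendalltau(xy[:, 0], xy[:, 1])[0])  # [0] = tau estimate
print(np.sin(np.pi / 2 * np.mean(taus)))  # close to r = 0.6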

Proof[11]

Define the following quantities.

  • $A^+ := \{(\Delta x, \Delta y) : \Delta x \Delta y > 0\}$
  • $\Delta_{i,j} := (x_i - x_j, y_i - y_j)$ is a point in $\mathbb{R}^2$.

In this notation, the number of concordant pairs, $n_C$, is equal to the number of $\Delta_{i,j}$ that fall in the subset $A^+$. That is, $n_C = \sum_{1 \leq i < j \leq n} 1_{\Delta_{i,j} \in A^+}$.

Thus,

$$E[\tau_A] = \frac{4}{n(n-1)} E[n_C] - 1 = \frac{4}{n(n-1)} \sum_{1 \leq i < j \leq n} \Pr(\Delta_{i,j} \in A^+) - 1$$

Since each $(x_i, y_i)$ is an IID sample of the jointly normal distribution, the pairing does not matter, so each term in the summation is exactly the same, and so

$$E[\tau_A] = 2\Pr(\Delta_{1,2} \in A^+) - 1$$
and it remains to calculate the probability. We perform this by repeated affine transforms.

First normalize $X, Y$ by subtracting the mean and dividing by the standard deviation. This does not change $\tau_A$. This gives us

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 & r \\ r & 1 \end{bmatrix}^{1/2} \begin{bmatrix} z \\ w \end{bmatrix}$$
where $(Z, W)$ is sampled from the standard normal distribution on $\mathbb{R}^2$.

Thus,

$$\Delta_{1,2} = \sqrt{2}\begin{bmatrix} 1 & r \\ r & 1 \end{bmatrix}^{1/2} \begin{bmatrix} (z_1 - z_2)/\sqrt{2} \\ (w_1 - w_2)/\sqrt{2} \end{bmatrix}$$
where the vector $\begin{bmatrix} (z_1 - z_2)/\sqrt{2} \\ (w_1 - w_2)/\sqrt{2} \end{bmatrix}$ is still distributed as the standard normal distribution on $\mathbb{R}^2$. It remains to perform some unenlightening, tedious matrix exponentiations and trigonometry, which can be skipped over.

Thus, $\Delta_{1,2} \in A^+$ iff

$$\begin{bmatrix} (z_1 - z_2)/\sqrt{2} \\ (w_1 - w_2)/\sqrt{2} \end{bmatrix} \in \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & r \\ r & 1 \end{bmatrix}^{-1/2} A^+ = \frac{1}{2\sqrt{2}}\begin{bmatrix} \frac{1}{\sqrt{1+r}}+\frac{1}{\sqrt{1-r}} & \frac{1}{\sqrt{1+r}}-\frac{1}{\sqrt{1-r}} \\ \frac{1}{\sqrt{1+r}}-\frac{1}{\sqrt{1-r}} & \frac{1}{\sqrt{1+r}}+\frac{1}{\sqrt{1-r}} \end{bmatrix} A^+$$
where the subset on the right is a “squashed” version of two quadrants. Since the standard normal distribution is rotationally symmetric, we need only calculate the angle spanned by each squashed quadrant.

The first quadrant is the sector bounded by the two rays $(1, 0)$ and $(0, 1)$. It is transformed to the sector bounded by the two rays $\left(\frac{1}{\sqrt{1+r}}+\frac{1}{\sqrt{1-r}},\; \frac{1}{\sqrt{1+r}}-\frac{1}{\sqrt{1-r}}\right)$ and $\left(\frac{1}{\sqrt{1+r}}-\frac{1}{\sqrt{1-r}},\; \frac{1}{\sqrt{1+r}}+\frac{1}{\sqrt{1-r}}\right)$. They respectively make angle $\theta$ with the horizontal and vertical axes, where

$$\theta = \arctan\frac{\frac{1}{\sqrt{1+r}}-\frac{1}{\sqrt{1-r}}}{\frac{1}{\sqrt{1+r}}+\frac{1}{\sqrt{1-r}}}$$

Together, the two transformed quadrants span an angle of $\pi + 4\theta$, so

$$\Pr(\Delta_{1,2} \in A^+) = \frac{\pi + 4\theta}{2\pi}$$
and therefore
$$\sin\left(\frac{\pi}{2} E[\tau_A]\right) = \sin(2\theta) = r$$

Accounting for ties

A pair $\{(x_i, y_i), (x_j, y_j)\}$ is said to be tied if and only if $x_i = x_j$ or $y_i = y_j$; a tied pair is neither concordant nor discordant. When tied pairs arise in the data, the coefficient may be modified in a number of ways to keep it in the range [−1, 1]:

Tau-a

The Tau-a statistic measures the strength of association in cross tabulations where both variables are ordinal. It makes no adjustment for ties. It is defined as:

$$\tau_A = \frac{n_c - n_d}{n_0}$$

where $n_c$, $n_d$ and $n_0$ are defined as in the next section.

Tau-b

The Tau-b statistic, unlike Tau-a, makes adjustments for ties.[12] Values of Tau-b range from −1 (100% negative association, or perfect inversion) to +1 (100% positive association, or perfect agreement). A value of zero indicates the absence of association.

The Kendall Tau-b coefficient is defined as:

$$\tau_B = \frac{n_c - n_d}{\sqrt{(n_0 - n_1)(n_0 - n_2)}}$$

where

$$\begin{aligned}n_0 &= n(n-1)/2 \\ n_1 &= \sum_i t_i(t_i-1)/2 \\ n_2 &= \sum_j u_j(u_j-1)/2 \\ n_c &= \text{Number of concordant pairs} \\ n_d &= \text{Number of discordant pairs} \\ t_i &= \text{Number of tied values in the } i^\text{th} \text{ group of ties for the first quantity} \\ u_j &= \text{Number of tied values in the } j^\text{th} \text{ group of ties for the second quantity}\end{aligned}$$
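A direct sketch of this formula in Python (a plain O(n²) pair loop written for clarity, not speed; names are illustrative):

from collections import Counter
from itertools import combinations

def kendall_tau_b(x, y):
    """Tau-b with tie adjustment, following the formula above."""
    n = len(x)
    n0 = n * (n - 1) // 2
    n1 = sum(t * (t - 1) // 2 for t in Counter(x).values())
    n2 = sum(u * (u - 1) // 2 for u in Counter(y).values())
    nc = nd = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            nc += 1
        elif s < 0:
            nd += 1
        # s == 0 is a tied pair: neither concordant nor discordant.
    return (nc - nd) / ((n0 - n1) * (n0 - n2)) ** 0.5

With no ties, $n_1 = n_2 = 0$ and this reduces to Tau-a.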

A simple algorithm developed in BASIC computes the Tau-b coefficient using an alternative formula.[13]

Be aware that some statistical packages, e.g. SPSS, use alternative formulas for computational efficiency, with double the 'usual' number of concordant and discordant pairs.[14]

Tau-c

Tau-c (also called Stuart-Kendall Tau-c)[15] is more suitable than Tau-b for the analysis of data based on non-square (i.e. rectangular) contingency tables.[15][16] So use Tau-b if the underlying scale of both variables has the same number of possible values (before ranking) and Tau-c if they differ. For instance, one variable might be scored on a 5-point scale (very good, good, average, bad, very bad), whereas the other might be based on a finer 10-point scale.

The Kendall Tau-c coefficient is defined as:[16]

$$\tau_C = \frac{2(n_c - n_d)}{n^2 \frac{(m-1)}{m}} = \tau_A \,\frac{n-1}{n}\,\frac{m}{m-1}$$

where

$$\begin{aligned}n_c &= \text{Number of concordant pairs} \\ n_d &= \text{Number of discordant pairs} \\ r &= \text{Number of rows} \\ c &= \text{Number of columns} \\ m &= \min(r, c)\end{aligned}$$
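A sketch of Tau-c computed from an $r \times c$ contingency table of counts (assuming NumPy; the table in the usage example is hypothetical):

import numpy as np

def stuart_tau_c(table):
    """Stuart-Kendall Tau-c from an r x c contingency table of counts."""
    table = np.asarray(table, dtype=float)
    rows, cols = table.shape
    m = min(rows, cols)
    n = table.sum()
    nc = nd = 0.0
    for i in range(rows):
        for j in range(cols):
            # Observations in cells strictly below and to the right are
            # concordant with cell (i, j); below-left cells are discordant.
            nc += table[i, j] * table[i + 1:, j + 1:].sum()
            nd += table[i, j] * table[i + 1:, :j].sum()
    return 2 * m * (nc - nd) / (n ** 2 * (m - 1))

t = [[10, 5, 2, 1, 0],
     [ 4, 8, 6, 3, 1],
     [ 1, 2, 5, 7, 9]]
print(stuart_tau_c(t))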

Significance tests

When two quantities are statistically dependent, the distribution of $\tau$ is not easily characterizable in terms of known distributions. However, for $\tau_A$ the following statistic, $z_A$, is approximately distributed as a standard normal when the variables are statistically independent:

$$z_A = \frac{n_c - n_d}{\sqrt{\frac{1}{18}v_0}}$$

where $v_0 = n(n-1)(2n+5)$.

Thus, to test whether two variables are statistically dependent, one computes $z_A$, and finds the cumulative probability for a standard normal distribution at $-|z_A|$. For a 2-tailed test, multiply that number by two to obtain the p-value. If the p-value is below a given significance level, one rejects the null hypothesis (at that significance level) that the quantities are statistically independent.
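A sketch of this test using only Python's standard library (assuming no ties; the function name is illustrative, and the normal approximation is only reasonable for moderately large n):

import math
from itertools import combinations

def kendall_z_test(x, y):
    """Two-tailed test of independence via the normal approximation."""
    n = len(x)
    # Without ties every pair is either concordant (+1) or discordant (-1).
    nc_minus_nd = sum(
        1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
        for i, j in combinations(range(n), 2)
    )
    v0 = n * (n - 1) * (2 * n + 5)
    z = nc_minus_nd / math.sqrt(v0 / 18)
    # Two-tailed p-value: twice the standard normal tail beyond |z|.
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p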

Numerous adjustments should be added to $z_A$ when accounting for ties. The following statistic, $z_B$, has the same distribution as the $\tau_B$ statistic, and is again approximately standard normal when the quantities are statistically independent:

$$z_B = \frac{n_c - n_d}{\sqrt{v}}$$

where

$$\begin{aligned}v &= \tfrac{1}{18}v_0 - (v_t + v_u)/18 + (v_1 + v_2) \\ v_0 &= n(n-1)(2n+5) \\ v_t &= \sum_i t_i(t_i-1)(2t_i+5) \\ v_u &= \sum_j u_j(u_j-1)(2u_j+5) \\ v_1 &= \sum_i t_i(t_i-1)\sum_j u_j(u_j-1)/(2n(n-1)) \\ v_2 &= \sum_i t_i(t_i-1)(t_i-2)\sum_j u_j(u_j-1)(u_j-2)/(9n(n-1)(n-2))\end{aligned}$$

This is sometimes referred to as the Mann-Kendall test.[6]
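These corrections translate mechanically into code; a sketch using only the standard library (the function name is illustrative; it assumes $n \ge 3$ and a precomputed tie-aware numerator $n_c - n_d$, as in the Tau-b sketch above):

import math
from collections import Counter

def kendall_z_b(x, y, nc_minus_nd):
    """z_B with the tie corrections above."""
    n = len(x)
    t = Counter(x).values()  # sizes of tie groups in x
    u = Counter(y).values()  # sizes of tie groups in y
    v0 = n * (n - 1) * (2 * n + 5)
    vt = sum(ti * (ti - 1) * (2 * ti + 5) for ti in t)
    vu = sum(uj * (uj - 1) * (2 * uj + 5) for uj in u)
    v1 = (sum(ti * (ti - 1) for ti in t)
          * sum(uj * (uj - 1) for uj in u) / (2 * n * (n - 1)))
    v2 = (sum(ti * (ti - 1) * (ti - 2) for ti in t)
          * sum(uj * (uj - 1) * (uj - 2) for uj in u)
          / (9 * n * (n - 1) * (n - 2)))
    v = (v0 - vt - vu) / 18 + v1 + v2
    return nc_minus_nd / math.sqrt(v)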

Algorithms

The direct computation of the numerator $n_c - n_d$ involves two nested iterations, as characterized by the following pseudocode:

numer := 0
for i := 2..N do
    for j := 1..(i − 1) do
        numer := numer + sign(x[i] − x[j]) × sign(y[i] − y[j])
return numer

Although quick to implement, this algorithm is $O(n^2)$ in complexity and becomes very slow on large samples. A more sophisticated algorithm[17] built upon the Merge Sort algorithm can be used to compute the numerator in $O(n \log n)$ time.

Begin by sorting your data points by the first quantity, $x$, and secondarily (among ties in $x$) by the second quantity, $y$. With this initial ordering, $y$ is not sorted, and the core of the algorithm consists of computing how many steps a Bubble Sort would take to sort this initial $y$. An enhanced Merge Sort algorithm, with $O(n \log n)$ complexity, can be applied to compute the number of swaps, $S(y)$, that would be required by a Bubble Sort to sort $y_i$. Then the numerator for $\tau$ is computed as:

$$n_c - n_d = n_0 - n_1 - n_2 + n_3 - 2S(y),$$

where $n_3$ is computed like $n_1$ and $n_2$, but with respect to the joint ties in $x$ and $y$.

A Merge Sort partitions the data to be sorted, $y$, into two roughly equal halves, $y_\mathrm{left}$ and $y_\mathrm{right}$, then sorts each half recursively, and then merges the two sorted halves into a fully sorted vector. The number of Bubble Sort swaps is equal to:

$$S(y) = S(y_\mathrm{left}) + S(y_\mathrm{right}) + M(Y_\mathrm{left}, Y_\mathrm{right})$$

where $Y_\mathrm{left}$ and $Y_\mathrm{right}$ are the sorted versions of $y_\mathrm{left}$ and $y_\mathrm{right}$, and $M(\cdot,\cdot)$ characterizes the Bubble Sort swap-equivalent for a merge operation. $M(\cdot,\cdot)$ is computed as depicted in the following pseudocode:

function M(L[1..n], R[1..m]) is
    i := 1
    j := 1
    nSwaps := 0
    while i ≤ n and j ≤ m do
        if R[j] < L[i] then
            // R[j] must jump over the n − i + 1 remaining elements of L
            nSwaps := nSwaps + n − i + 1
            j := j + 1
        else
            i := i + 1
    return nSwaps

A side effect of the above steps is that you end up with both a sorted version of $x$ and a sorted version of $y$. With these, the factors $t_i$ and $u_j$ used to compute $\tau_B$ are easily obtained in a single linear-time pass through the sorted arrays.
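A runnable sketch of this merge-sort approach in Python (assuming no ties, so the numerator reduces to $n_0 - 2S(y)$; names are illustrative):

def count_swaps(a):
    """Merge sort returning (sorted list, bubble-sort swap count S)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, s_left = count_swaps(a[:mid])
    right, s_right = count_swaps(a[mid:])
    merged, swaps = [], s_left + s_right
    i = j = 0
    while i < len(left) and j < len(right):
        if right[j] < left[i]:
            # The merge step M: right[j] jumps over the remaining left items.
            swaps += len(left) - i
            merged.append(right[j])
            j += 1
        else:
            merged.append(left[i])
            i += 1
    merged += left[i:] + right[j:]
    return merged, swaps

def kendall_numerator(x, y):
    """n_c - n_d in O(n log n), assuming no ties in x or y."""
    n = len(x)
    y_by_x = [yi for _, yi in sorted(zip(x, y))]  # order y by the x ranking
    _, s = count_swaps(y_by_x)
    return n * (n - 1) // 2 - 2 * s  # n_0 - 2 S(y) when n_1 = n_2 = n_3 = 0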

Software implementations

  • R implements the test for $\tau_B$ as cor.test(x, y, method = "kendall") in its "stats" package (cor(x, y, method = "kendall") also works, but does not return the p-value). All three versions of the coefficient are available in the "DescTools" package along with confidence intervals: KendallTauA(x,y,conf.level=0.95) for $\tau_A$, KendallTauB(x,y,conf.level=0.95) for $\tau_B$, and StuartTauC(x,y,conf.level=0.95) for $\tau_C$.
  • For Python, the SciPy library implements the computation of $\tau_B$ in scipy.stats.kendalltau (see the snippet below).
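For example, a minimal SciPy call (the returned object unpacks as a (statistic, p-value) pair):

from scipy import stats

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
# Returns the tau-b coefficient and a two-sided p-value.
tau, p_value = stats.kendalltau(x, y)
print(tau, p_value)  # tau = 0.6 here (8 concordant, 2 discordant pairs)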


References

  1. ^ Kendall, M. (1938). "A New Measure of Rank Correlation". Biometrika. 30 (1–2): 81–89. doi:10.1093/biomet/30.1-2.81. JSTOR 2332226.
  2. ^ Kruskal, W. H. (1958). "Ordinal Measures of Association". Journal of the American Statistical Association. 53 (284): 814–861. doi:10.2307/2281954. JSTOR 2281954. MR 0100941.
  3. ^ Nelsen, R.B. (2001) [1994], "Kendall tau metric", Encyclopedia of Mathematics, EMS Press
  4. ^ Prokhorov, A.V. (2001) [1994], "Kendall coefficient of rank correlation", Encyclopedia of Mathematics, EMS Press
  5. ^ Valz, Paul D.; McLeod, A. Ian (February 1990). "A Simplified Derivation of the Variance of Kendall's Rank Correlation Coefficient". The American Statistician. 44 (1): 39–40. doi:10.1080/00031305.1990.10475691. ISSN 0003-1305.
  6. ^ a b Valz, Paul D.; McLeod, A. Ian; Thompson, Mary E. (February 1995). "Cumulant Generating Function and Tail Probability Approximations for Kendall's Score with Tied Rankings". The Annals of Statistics. 23 (1): 144–160. doi:10.1214/aos/1176324460. ISSN 0090-5364.
  7. ^ Hoeffding, Wassily (1992), Kotz, Samuel; Johnson, Norman L. (eds.), "A Class of Statistics with Asymptotically Normal Distribution", Breakthroughs in Statistics: Foundations and Basic Theory, Springer Series in Statistics, New York, NY: Springer, pp. 308–334, doi:10.1007/978-1-4612-0919-5_20, ISBN 978-1-4612-0919-5, retrieved 2024-01-19
  8. ^ Kendall, M. G. (1949). "Rank and Product-Moment Correlation". Biometrika. 36 (1/2): 177–193. doi:10.2307/2332540. ISSN 0006-3444.
  9. ^ Richard Greiner, (1909), Ueber das Fehlersystem der Kollektiv-maßlehre, Zeitschrift für Mathematik und Physik, Band 57, B. G. Teubner, Leipzig, pages 121-158, 225-260, 337-373.
  10. ^ Moran, P. A. P. (1948). "Rank Correlation and Product-Moment Correlation". Biometrika. 35 (1/2): 203–206. doi:10.2307/2332641. ISSN 0006-3444.
  11. ^ Berger, Daniel (2016). "A Proof of Greiner's Equality". SSRN Electronic Journal. doi:10.2139/ssrn.2830471. ISSN 1556-5068.
  12. ^ Agresti, A. (2010). Analysis of Ordinal Categorical Data (Second ed.). New York: John Wiley & Sons. ISBN 978-0-470-08289-8.
  13. ^ Alfred Brophy (1986). "An algorithm and program for calculation of Kendall's rank correlation coefficient" (PDF). Behavior Research Methods, Instruments, & Computers. 18: 45–46. doi:10.3758/BF03200993. S2CID 62601552.
  14. ^ IBM (2016). IBM SPSS Statistics 24 Algorithms. IBM. p. 168. Retrieved 31 August 2017.
  15. ^ a b Berry, K. J.; Johnston, J. E.; Zahran, S.; Mielke, P. W. (2009). "Stuart's tau measure of effect size for ordinal variables: Some methodological considerations". Behavior Research Methods. 41 (4): 1144–1148. doi:10.3758/brm.41.4.1144. PMID 19897822.
  16. ^ a b Stuart, A. (1953). "The Estimation and Comparison of Strengths of Association in Contingency Tables". Biometrika. 40 (1–2): 105–110. doi:10.2307/2333101. JSTOR 2333101.
  17. ^ Knight, W. (1966). "A Computer Method for Calculating Kendall's Tau with Ungrouped Data". Journal of the American Statistical Association. 61 (314): 436–439. doi:10.2307/2282833. JSTOR 2282833.

Further reading

  • Abdi, H. (2007). "Kendall rank correlation" (PDF). In Salkind, N.J. (ed.). Encyclopedia of Measurement and Statistics. Thousand Oaks (CA): Sage.
  • Daniel, Wayne W. (1990). "Kendall's tau". Applied Nonparametric Statistics (2nd ed.). Boston: PWS-Kent. pp. 365–377. ISBN 978-0-534-91976-4.
  • Kendall, Maurice; Gibbons, Jean Dickinson (1990) [First published 1948]. Rank Correlation Methods. Charles Griffin Book Series (5th ed.). Oxford: Oxford University Press. ISBN 978-0195208375.
  • Bonett, Douglas G.; Wright, Thomas A. (2000). "Sample size requirements for estimating Pearson, Kendall, and Spearman correlations". Psychometrika. 65 (1): 23–28. doi:10.1007/BF02294183. S2CID 120558581.

External links

  • Tied rank calculation
  • Software for computing Kendall's tau on very large datasets
  • Online software: computes Kendall's tau rank correlation