Dickey–Fuller test


In statistics, the Dickey–Fuller test tests the null hypothesis that a unit root is present in an autoregressive (AR) time series model. The alternative hypothesis is different depending on which version of the test is used, but is usually stationarity or trend-stationarity. The test is named after the statisticians David Dickey and Wayne Fuller, who developed it in 1979.[1]

Explanation

A simple AR model is

$$y_t = \rho y_{t-1} + u_t$$

where $y_t$ is the variable of interest, $t$ is the time index, $\rho$ is a coefficient, and $u_t$ is the error term (assumed to be white noise). A unit root is present if $\rho = 1$. The model would be non-stationary in this case.
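As a quick illustration of this non-stationarity, the following minimal Python sketch simulates the AR(1) model above for a stationary coefficient and for $\rho = 1$; the sample size, seed, and coefficient values are arbitrary choices made for the example, not anything prescribed by the test.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, n=1000):
    """Simulate y_t = rho * y_{t-1} + u_t with standard normal errors u_t."""
    u = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = u[0]
    for t in range(1, n):
        y[t] = rho * y[t - 1] + u[t]
    return y

# With rho < 1 the series keeps reverting to zero; with rho = 1 (a unit root)
# it wanders without a fixed mean and its dispersion grows with the sample.
stationary = simulate_ar1(0.5)
random_walk = simulate_ar1(1.0)
print("sample standard deviation, rho = 0.5:", round(stationary.std(), 2))
print("sample standard deviation, rho = 1.0:", round(random_walk.std(), 2))
```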

The regression model can be written as

$$\Delta y_t = (\rho - 1) y_{t-1} + u_t = \delta y_{t-1} + u_t$$

where $\Delta$ is the first difference operator and $\delta \equiv \rho - 1$. This model can be estimated, and testing for a unit root is equivalent to testing $\delta = 0$. Because the regressor $y_{t-1}$ is non-stationary under the unit-root null, the test statistic does not follow the standard Student's t-distribution, so the usual critical values cannot be used. Instead, the statistic has its own distribution, tabulated in the Dickey–Fuller table.
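A minimal sketch of this regression in Python, using ordinary least squares from statsmodels on a simulated random walk (so the null is true by construction); the data and seed are illustrative assumptions, and the resulting t-ratio would still have to be judged against Dickey–Fuller critical values rather than Student's t.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulate a pure random walk (rho = 1), so the unit-root null holds.
u = rng.standard_normal(250)
y = np.cumsum(u)

# Regress the first difference on the lagged level:
# delta_y_t = delta * y_{t-1} + u_t (no constant, version 1 of the test).
dy = np.diff(y)
y_lag = y[:-1]
fit = sm.OLS(dy, y_lag).fit()

# The t-ratio on delta is the Dickey-Fuller statistic; it must be compared
# with the Dickey-Fuller table, not the usual t critical values.
print("estimated delta:", fit.params[0])
print("Dickey-Fuller statistic (t-ratio on delta):", fit.tvalues[0])
```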

There are three main versions of the test:

1. Test for a unit root:

$$\Delta y_t = \delta y_{t-1} + u_t$$

2. Test for a unit root with constant:

$$\Delta y_t = a_0 + \delta y_{t-1} + u_t$$

3. Test for a unit root with constant and deterministic time trend:

$$\Delta y_t = a_0 + a_1 t + \delta y_{t-1} + u_t$$
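The three specifications correspond to the regression options of the adfuller function in statsmodels; the sketch below is one illustrative way to run them with no augmentation lags (maxlag=0), which reduces the augmented test to the plain Dickey–Fuller test. The simulated series is an assumption made for the example.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(500))  # random walk: a unit root is present

# maxlag=0 with autolag=None gives the plain (non-augmented) Dickey-Fuller test.
specs = [("n", "1. no constant"),        # older statsmodels versions use "nc"
         ("c", "2. constant"),
         ("ct", "3. constant and trend")]
for regression, label in specs:
    stat, pvalue, *rest = adfuller(y, maxlag=0, regression=regression, autolag=None)
    print(f"{label:22s} DF statistic = {stat:7.3f}, p-value = {pvalue:.3f}")
```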

Each version of the test has its own critical values, which depend on the size of the sample. In each case, the null hypothesis is that there is a unit root, $\delta = 0$. The tests have low statistical power in that they often cannot distinguish between true unit-root processes ($\delta = 0$) and near-unit-root processes ($\delta$ close to zero). This is called the "near observational equivalence" problem.
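This loss of power can be illustrated with a small Monte Carlo sketch: the data-generating process below is stationary but close to a unit root ($\rho = 0.98$), yet the test rejects the unit-root null only a modest fraction of the time. The sample size, number of replications, and coefficient are arbitrary illustrative choices.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
n_obs, n_reps, rho = 200, 500, 0.98   # near unit root: delta = rho - 1 = -0.02
rejections = 0

for _ in range(n_reps):
    u = rng.standard_normal(n_obs)
    y = np.empty(n_obs)
    y[0] = u[0]
    for t in range(1, n_obs):
        y[t] = rho * y[t - 1] + u[t]
    # Plain Dickey-Fuller test with a constant (maxlag=0, no augmentation).
    pvalue = adfuller(y, maxlag=0, regression="c", autolag=None)[1]
    rejections += pvalue < 0.05

# The null (unit root) is false here, so this rejection rate is the empirical
# power of the test; for near-unit-root data it is typically well below one.
print("rejection rate at the 5% level:", rejections / n_reps)
```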

The intuition behind the test is as follows. If the series $y$ is stationary (or trend-stationary), then it has a tendency to return to a constant (or deterministically trending) mean. Therefore, large values will tend to be followed by smaller values (negative changes), and small values by larger values (positive changes). Accordingly, the level of the series will be a significant predictor of next period's change, and will have a negative coefficient. If, on the other hand, the series is integrated, then positive changes and negative changes will occur with probabilities that do not depend on the current level of the series; in a random walk, where you are now does not affect which way you will go next.

It is notable that

$$\Delta y_t = a_0 + u_t$$

may be rewritten as

$$y_t = y_0 + \sum_{i=1}^{t} u_i + a_0 t$$

with a deterministic trend coming from $a_0 t$ and a stochastic intercept term coming from $y_0 + \sum_{i=1}^{t} u_i$, resulting in what is referred to as a stochastic trend.[2]
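The rewriting follows by iterating the drift equation forward from a starting value $y_0$ (assumed here to be the observed initial observation); a short LaTeX sketch of the recursion:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Iterating $\Delta y_t = a_0 + u_t$ forward from the starting value $y_0$:
\[
\begin{aligned}
y_1 &= y_0 + a_0 + u_1,\\
y_2 &= y_1 + a_0 + u_2 = y_0 + 2a_0 + u_1 + u_2,\\
    &\;\;\vdots\\
y_t &= y_0 + a_0 t + \sum_{i=1}^{t} u_i.
\end{aligned}
\]
\end{document}
```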

There is also an extension of the Dickey–Fuller (DF) test called the augmented Dickey–Fuller test (ADF), which adds lagged differences of the series as additional regressors to remove higher-order autocorrelation in the errors, and then tests using the same procedure.
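As a sketch of the augmented version, the example below applies adfuller with its default automatic lag selection to a simulated AR(2) series containing a unit root plus a stationary autoregressive component, so the lagged-difference terms actually matter; the coefficients and seed are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)

# AR(2): y_t = 1.3 y_{t-1} - 0.3 y_{t-2} + u_t has autoregressive roots 1 and
# 0.3, i.e. a unit root combined with a stationary AR component.
n = 500
u = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 1.3 * y[t - 1] - 0.3 * y[t - 2] + u[t]

# autolag="AIC" (the default) picks how many lagged differences to include.
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="c", autolag="AIC")
print("ADF statistic:", round(stat, 3), " p-value:", round(pvalue, 3), " lags used:", usedlag)
```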

Dealing with uncertainty about including the intercept and deterministic time trend terms

Which of the three main versions of the test should be used is not a minor issue. The decision is important for the size of the unit root test (the probability of rejecting the null hypothesis of a unit root when there is one) and the power of the unit root test (the probability of rejecting the null hypothesis of a unit root when there is not one). Inappropriate exclusion of the intercept or deterministic time trend term biases the coefficient estimate for $\delta$, so the actual size of the unit root test does not match the reported one. If the time trend term is inappropriately excluded while the $a_0$ term is estimated, then the power of the unit root test can be substantially reduced, as a trend may be captured through the random walk with drift model.[3] On the other hand, inappropriate inclusion of the intercept or time trend term reduces the power of the unit root test, and sometimes that reduction can be substantial.

Use of prior knowledge about whether the intercept and deterministic time trend should be included is of course ideal but not always possible. When such prior knowledge is unavailable, various testing strategies (series of ordered tests) have been suggested, e.g. by Dolado, Jenkinson, and Sosvilla-Rivero (1990)[4] and by Enders (2004), often with the ADF extension to remove autocorrelation. Elder and Kennedy (2001) present a simple testing strategy that avoids the double and triple testing for the unit root that can occur with other strategies, and discuss how to use prior knowledge about the existence or not of long-run growth (or shrinkage) in y.[5] Hacker and Hatemi-J (2010) provide simulation results on these matters,[6] including simulations covering the Enders (2004) and Elder and Kennedy (2001) unit-root testing strategies. Hacker (2010) presents simulation results indicating that using an information criterion such as the Schwarz information criterion may be useful in determining unit root and trend status within a Dickey–Fuller framework.[7]
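A rough sketch of the information-criterion idea is given below: each candidate specification (with or without an imposed unit root, with or without drift or trend) is fit by least squares and compared by a Gaussian BIC. This is only an illustration of the general approach, not the exact procedure of Hacker (2010), and the simulated drifting random walk is an assumption made for the example.

```python
import numpy as np
import statsmodels.api as sm

def gaussian_bic(resid, n_params, n_obs):
    # BIC up to an additive constant: n * log(RSS / n) + k * log(n).
    rss = float(resid @ resid)
    return n_obs * np.log(rss / n_obs) + n_params * np.log(n_obs)

rng = np.random.default_rng(5)
y = np.cumsum(0.05 + rng.standard_normal(300))   # random walk with drift

ylev, ylag = y[1:], y[:-1]
dy = ylev - ylag
n = len(dy)
trend = np.arange(1, n + 1, dtype=float)

candidates = {
    # specifications imposing a unit root (rho fixed at 1)
    "random walk":                 (dy, None, 0),
    "random walk with drift":      (dy, np.ones((n, 1)), 1),
    # specifications estimating rho freely
    "AR(1) with constant":         (ylev, sm.add_constant(ylag), 2),
    "AR(1) with constant + trend": (ylev, sm.add_constant(np.column_stack([trend, ylag])), 3),
}

# The specification with the smallest BIC is selected; whether it imposes a
# unit root then indicates the unit-root and trend status of the series.
for name, (endog, exog, k) in candidates.items():
    resid = endog if exog is None else sm.OLS(endog, exog).fit().resid
    print(f"{name:30s} BIC = {gaussian_bic(resid, k, n):9.2f}")
```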

References

  1. ^ Dickey, D. A.; Fuller, W. A. (1979). "Distribution of the Estimators for Autoregressive Time Series with a Unit Root". Journal of the American Statistical Association. 74 (366): 427–431. doi:10.1080/01621459.1979.10482531. JSTOR 2286348.
  2. ^ Enders, W. (2004). Applied Econometric Time Series (Second ed.). Hoboken: John Wiley & Sons. ISBN 978-0-471-23065-6.
  3. ^ Campbell, J. Y.; Perron, P. (1991). "Pitfalls and Opportunities: What Macroeconomists Should Know about Unit Roots" (PDF). NBER Macroeconomics Annual. 6 (1): 141–201. doi:10.2307/3585053. JSTOR 3585053.
  4. ^ Dolado, J. J.; Jenkinson, T.; Sosvilla-Rivero, S. (1990). "Cointegration and Unit Roots". Journal of Economic Surveys. 4 (3): 249–273. doi:10.1111/j.1467-6419.1990.tb00088.x. hdl:10016/3321.
  5. ^ Elder, J.; Kennedy, P. E. (2001). "Testing for Unit Roots: What Should Students Be Taught?". Journal of Economic Education. 32 (2): 137–146. CiteSeerX 10.1.1.140.8811. doi:10.1080/00220480109595179. S2CID 18656808.
  6. ^ Hacker, R. S.; Hatemi-J, A. (2010). "The Properties of Procedures Dealing with Uncertainty about Intercept and Deterministic Trend in Unit Root Testing". CESIS Electronic Working Paper Series, Paper No. 214. Centre of Excellence for Science and Innovation Studies, The Royal Institute of Technology, Stockholm, Sweden.
  7. ^ Hacker, Scott (2010-02-11). "The Effectiveness of Information Criteria in Determining Unit Root and Trend Status". Working Paper Series in Economics and Institutions of Innovation. 213. Stockholm, Sweden: Royal Institute of Technology, CESIS - Centre of Excellence for Science and Innovation Studies.

Further reading

  • Enders, Walter (2010). Applied Econometric Time Series (Third ed.). New York: Wiley. pp. 206–215. ISBN 978-0470-50539-7.
  • Hatanaka, Michio (1996). Time-Series-Based Econometrics: Unit Roots and Cointegration. New York: Oxford University Press. pp. 48–49. ISBN 978-0-19-877353-5.

External links

  • Statistical tables for unit-root tests – Dickey–Fuller table
  • How to do a Dickey-Fuller Test Using Excel