What is the Durbin-Watson test?

The Durbin-Watson test, introduced by J. Durbin and G. S. Watson in 1950 (see Reference 1), is used to test for autocorrelation in time series data. More precisely, the test assumes the following underlying model: our response y is a linear combination of the features in \mathbf{X}, i.e. y = \mathbf{X}\beta + \epsilon, and the errors come from a stationary Markov process

\begin{aligned} \epsilon_t = \rho \epsilon_{t-1} + u_t, \quad t = \dots, -1, 0, 1, \dots, \end{aligned}

where |\rho| < 1 and u_t \sim \mathcal{N}(0, \sigma^2), with u_t independent of \epsilon_{t-1}, \epsilon_{t-2}, \dots and u_{t-1}, u_{t-2}, \dots. (Sometimes this is worded as the errors coming from an AR(1) process.)
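To make the setup concrete, here is a minimal simulation sketch in Python (not from the original post; the dimensions, \rho, \sigma and \beta below are arbitrary illustrative choices) that generates data from this model:

    import numpy as np

    rng = np.random.default_rng(0)

    T, p = 200, 3          # number of time points and number of features
    rho, sigma = 0.7, 1.0  # AR(1) coefficient and innovation standard deviation

    X = rng.normal(size=(T, p))        # design matrix
    beta = np.array([1.0, -2.0, 0.5])  # arbitrary "true" coefficients

    # AR(1) errors: eps_t = rho * eps_{t-1} + u_t, with u_t ~ N(0, sigma^2)
    u = rng.normal(scale=sigma, size=T)
    eps = np.zeros(T)
    eps[0] = u[0] / np.sqrt(1 - rho**2)  # start from the stationary distribution
    for t in range(1, T):
        eps[t] = rho * eps[t - 1] + u[t]

    y = X @ beta + eps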

The null hypothesis is H_0: \; \rho = 0. The alternative hypothesis is typically one-sided, and more often than not it is H_A: \; \rho > 0, because positive autocorrelation is much more common than negative autocorrelation in time series data.

Assume that we have data (x_t, y_t) for times t = 1, \dots, T. (Here, x_t refers to the tth row of the design matrix \mathbf{X}.) The Durbin-Watson statistic is defined as

\begin{aligned} d = \dfrac{\sum_{t=2}^T (e_t - e_{t-1})^2}{\sum_{t=1}^T e_t^2}, \end{aligned}

where e_t denotes the tth residual obtained from linear regression (i.e. run linear regression of y on \mathbf{X}, get predictions \hat{y}, then e_t = y_t - \hat{y}_t).
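As a sketch of the computation (continuing from the simulated X and y above; the helper function name is my own), we can fit ordinary least squares with numpy and plug the residuals into the formula:

    def durbin_watson_stat(e):
        """d = sum_{t=2}^T (e_t - e_{t-1})^2 / sum_{t=1}^T e_t^2."""
        return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

    # OLS fit and residuals: e = y - X @ beta_hat
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_hat

    print(durbin_watson_stat(e))

If statsmodels is installed, statsmodels.stats.stattools.durbin_watson(e) computes the same statistic.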

It can be proven fairly easily that we must have 0 \leq d \leq 4. If the null hypothesis is false and there is positive autocorrelation in the data (i.e. \rho > 0), then consecutive residuals tend to be close to each other, so we would expect the numerator of d to be small. Hence, we would reject the null hypothesis if d is close to 0. Conversely, if there is negative autocorrelation in the data, we would expect d to be large (close to 4).

If there is no autocorrelation, d \approx 2. The heuristic explanation for this is that when you expand the numerator, the cross-terms are roughly zero in expectation and so don’t contribute to the numerator.
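To make the heuristic explicit (a sketch of the algebra, using the notation above), expanding the numerator gives

\begin{aligned} \sum_{t=2}^T (e_t - e_{t-1})^2 = \sum_{t=2}^T e_t^2 - 2 \sum_{t=2}^T e_t e_{t-1} + \sum_{t=2}^T e_{t-1}^2 \approx 2 \sum_{t=1}^T e_t^2, \end{aligned}

since the cross-term \sum_{t=2}^T e_t e_{t-1} is roughly zero when there is no autocorrelation, and each of the two squared sums is approximately the full sum \sum_{t=1}^T e_t^2. Dividing by the denominator then gives d \approx 2.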

The cutoffs for the Durbin-Watson test (i.e. thresholds for rejection) depend on the design matrix \mathbf{X}, as well as on the level of significance, the number of observations and the number of features. To avoid the dependence on \mathbf{X}, upper and lower bounds were established for these cutoffs instead (see Reference 2).
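As a rough sketch of how the bounds are used for the one-sided alternative H_A: \rho > 0 (the d_L and d_U values below are placeholders; the real ones must be looked up in the significance tables of Reference 2 for the relevant number of observations, number of features and significance level):

    # PLACEHOLDER bounds -- look up the actual d_L, d_U in the significance tables
    d_L, d_U = 1.65, 1.69

    d = durbin_watson_stat(e)  # statistic computed as in the sketch above

    if d < d_L:
        print("Reject H0: evidence of positive autocorrelation")
    elif d > d_U:
        print("Do not reject H0")
    else:
        print("Inconclusive: d lies between the lower and upper bounds")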

References:

  1. Durbin, J., and Watson, G. S. (1950). Testing for Serial Correlation in Least Squares Regression: I. Biometrika.
  2. Durbin-Watson Significance Tables.
