# General formula for the asymptotic covariance of the OLS estimator

This post derives the general formula for the covariance of the ordinary least squares (OLS) estimator.

Imagine we are in the regression setup with design matrix ${\bf X} \in \mathbb{R}^{n \times p}$ and response ${\bf y} \in \mathbb{R}^n$. Let ${\bf x}_i \in \mathbb{R}^p$ and $y_i \in \mathbb{R}$ denote the $i$th row of $\bf X$ and the $i$th element of $\bf y$ respectively. We can always make the following decomposition:

$y_i = \mathbb{E} [y_i \mid {\bf x}_i] + \varepsilon_i$,

where $\mathbb{E}[\varepsilon_i \mid {\bf x}_i] = 0$ and $\varepsilon_i$ is uncorrelated with any function of ${\bf x}_i$. (This is Theorem 3.1.1 of Reference 1.)

The population regression function approximates $y_i$ as ${\bf x}_i^T \beta$, where $\beta$ solves the minimization problem

$\beta = \text{argmin}_b \: \mathbb{E} \left[ (y_i - {\bf x}_i^T b)^2 \right].$

It can be shown that

$\beta = \mathbb{E}[{\bf x}_i {\bf x}_i^T]^{-1} \mathbb{E} [{\bf x}_i y_i].$

The ordinary least squares (OLS) estimator is a sample version of this and is given by

\begin{aligned} \hat\beta = ({\bf X}^T {\bf X})^{-1} {\bf X}^T {\bf y} = \left( \sum_i {\bf x}_i {\bf x}_i^T \right)^{-1} \left( \sum_i {\bf x}_i y_i \right). \end{aligned}
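
For concreteness, here is a minimal R sketch of this formula on simulated data (the data-generating process and all variable names below are made up purely for illustration):

set.seed(1)
n <- 200
x <- rnorm(n)
X <- cbind(1, x)                                  # design matrix with intercept
y <- 1 + 2 * x + rnorm(n, sd = abs(x))            # heteroskedastic errors
beta_hat <- solve(crossprod(X), crossprod(X, y))  # (X^T X)^{-1} X^T y
drop(beta_hat)                                    # OLS estimate
coef(lm(y ~ x))                                   # lm() gives the same estimates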

We are often interested in estimating the covariance matrix of $\hat\beta$ as it is needed to construct standard errors for $\hat\beta$. Defining $\varepsilon_i = y_i - {\bf x}_i^T \beta$ as the $i$th residual from the population regression (this coincides with the $\varepsilon_i$ above only when $\mathbb{E}[y_i \mid {\bf x}_i]$ is linear), we can rewrite the above as

\begin{aligned} \hat{\beta} &= \beta + \left( \sum_i {\bf x}_i {\bf x}_i^T \right)^{-1} \left( \sum_i {\bf x}_i \varepsilon_i \right), \\ \sqrt{n}(\hat{\beta} - \beta) &= n \left( \sum_i {\bf x}_i {\bf x}_i^T \right)^{-1} \cdot \dfrac{1}{\sqrt{n}} \left( \sum_i {\bf x}_i \varepsilon_i \right). \end{aligned}

By Slutsky’s Theorem, the quantity above has the same asymptotic distribution as $\mathbb{E}[{\bf x}_i {\bf x}_i^T]^{-1} \cdot \frac{1}{\sqrt{n}} \left( \sum_i {\bf x}_i \varepsilon_i \right)$. Since $\mathbb{E}[{\bf x}_i \varepsilon_i] = {\bf 0}$*, the Central Limit Theorem tells us that $\frac{1}{\sqrt{n}} \left( \sum_i {\bf x}_i \varepsilon_i \right)$ is asymptotically normally distributed with mean zero and covariance $\mathbb{E}[{\bf x}_i {\bf x}_i^T \varepsilon_i^2]$ (the matrix of fourth moments). Thus,

\begin{aligned} \sqrt{n}(\hat{\beta} - \beta) \stackrel{d}{\rightarrow} \mathcal{N} \left( {\bf 0}, \mathbb{E}[{\bf x}_i {\bf x}_i^T]^{-1} \mathbb{E}[{\bf x}_i {\bf x}_i^T \varepsilon_i^2] \mathbb{E}[{\bf x}_i {\bf x}_i^T]^{-1} \right). \end{aligned}

We can use the diagonal elements of $\mathbb{E}[{\bf x}_i {\bf x}_i^T]^{-1} \mathbb{E}[{\bf x}_i {\bf x}_i^T \varepsilon_i^2] \mathbb{E}[{\bf x}_i {\bf x}_i^T]^{-1}$ to construct standard errors of $\hat\beta$. The standard errors computed in this way are called heteroskedasticity-consistent standard errors (or White standard errors, or Eicker-White standard errors). They are “robust” in the sense that they use few assumptions on the data and the model: only those needed to make the Central Limit Theorem go through.
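
As a rough sketch, the sandwich formula can be computed by hand in R, reusing X, y and beta_hat from the illustrative simulation above (here we estimate $\text{Cov}(\hat\beta)$ directly, which absorbs the factor of $n$; the sandwich package's vcovHC() computes variants of the same quantity):

eps <- drop(y - X %*% beta_hat)           # residuals
bread <- solve(crossprod(X))              # (X^T X)^{-1}
meat <- crossprod(X * eps)                # sum_i x_i x_i^T eps_i^2
vcov_robust <- bread %*% meat %*% bread   # sandwich estimate of Cov(beta_hat)
sqrt(diag(vcov_robust))                   # heteroskedasticity-consistent SEs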

*Note: We do NOT need to assume that $\mathbb{E} [y_i \mid {\bf x_i}]$ is linear in order to conclude that $\mathbb{E}[{\bf x}_i \varepsilon_i] = {\bf 0}$. All we need are the relations $\beta = \mathbb{E}[{\bf x}_i {\bf x}_i^T]^{-1} \mathbb{E} [{\bf x}_i y_i]$ and $\varepsilon_i = y_i - {\bf x}_i^T \beta$. The derivation is as follows:

\begin{aligned} \mathbb{E}[{\bf x}_i \varepsilon_i] &= \mathbb{E}[{\bf x}_i (y_i - {\bf x}_i^T \beta)] \\ &= \mathbb{E}[{\bf x}_i y_i] - \mathbb{E}[ {\bf x}_i {\bf x}_i^T] \beta \\ &= \mathbb{E}[{\bf x}_i y_i] - \mathbb{E}[ {\bf x}_i {\bf x}_i^T] \mathbb{E}[{\bf x}_i {\bf x}_i^T]^{-1} \mathbb{E} [{\bf x}_i y_i] \\ &= {\bf 0}. \end{aligned}

Special case of homoskedastic errors

If we assume that the errors are homoskedastic, i.e.

$\mathbb{E}[\varepsilon_i^2 \mid {\bf x}_i] = \sigma^2$ for all $i$

for some constant $\sigma^2$, then $\mathbb{E}[{\bf x}_i {\bf x}_i^T \varepsilon_i^2] = \mathbb{E}\left[ {\bf x}_i {\bf x}_i^T \, \mathbb{E}[\varepsilon_i^2 \mid {\bf x}_i] \right] = \sigma^2 \mathbb{E}[{\bf x}_i {\bf x}_i^T]$, and the asymptotic covariance simplifies a little:

\begin{aligned} \sqrt{n}(\hat{\beta} - \beta) \stackrel{d}{\rightarrow} \mathcal{N} \left( {\bf 0}, \sigma^2 \mathbb{E}[{\bf x}_i {\bf x}_i^T]^{-1} \right). \end{aligned}
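
Continuing the illustrative sketch from above, the homoskedastic formula amounts to scaling $({\bf X}^T {\bf X})^{-1}$ by an estimate of $\sigma^2$; these match the (non-robust) standard errors that summary() would report for the corresponding lm() fit:

sigma2_hat <- sum(eps^2) / (n - ncol(X))            # estimate of sigma^2
vcov_classical <- sigma2_hat * solve(crossprod(X))  # sigma^2 (X^T X)^{-1}
sqrt(diag(vcov_classical))                          # classical standard errors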

References:

1. Angrist, J. D., and Pischke, J.-S. (2009). Mostly harmless econometrics (Section 3.1.3).

# Generating correlation matrix for AR(1) model

Assume that we are in the time series data setting, where we have data at equally-spaced times $1, 2, \dots$ which we denote by random variables $X_1, X_2, \dots$. The AR(1) model, commonly used in econometrics, assumes that the correlation between $X_i$ and $X_j$ is $\text{Cor}(X_i, X_j) = \rho^{|i-j|}$, where $\rho$ is some parameter that usually has to be estimated.

If we were writing out the full correlation matrix for $n$ consecutive data points $X_1, \dots, X_n$, it would look something like this:

$\begin{pmatrix} 1 & \rho & \rho^2 & \dots & \rho^{n-1} \\ \rho & 1 & \rho & \dots & \rho^{n-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^{n-1} & \rho^{n-2} & \rho^{n-3} &\dots & 1 \end{pmatrix}$

(Side note: This is an example of a correlation matrix which has Toeplitz structure.)

Given $\rho$, how can we generate this matrix quickly in R? The function below is my (current) best attempt:

ar1_cor <- function(n, rho) {
  exponent <- abs(matrix(1:n - 1, nrow = n, ncol = n, byrow = TRUE) -
                    (1:n - 1))
  rho^exponent
}


In the function above, n is the number of rows in the desired correlation matrix (which is the same as the number of columns), and rho is the $\rho$ parameter. The function makes use of the fact that when subtracting a vector from a matrix, R automatically recycles the vector to have the same number of elements as the matrix, and it does so in a column-wise fashion.

Here is an example of how the function can be used:

ar1_cor(4, 0.9)
#       [,1] [,2] [,3]  [,4]
# [1,] 1.000 0.90 0.81 0.729
# [2,] 0.900 1.00 0.90 0.810
# [3,] 0.810 0.90 1.00 0.900
# [4,] 0.729 0.81 0.90 1.000


Such a function might be useful when trying to generate data that has such a correlation structure. For example, it could be passed as the Sigma parameter for MASS::mvrnorm(), which generates samples from a multivariate normal distribution.
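
For instance, here is a minimal sketch of that use (the dimensions and value of $\rho$ are arbitrary; since all variances are 1, the correlation matrix doubles as the covariance matrix):

library(MASS)

set.seed(1)
Sigma <- ar1_cor(5, 0.9)
samples <- mvrnorm(n = 100, mu = rep(0, 5), Sigma = Sigma)
round(cor(samples), 2)  # sample correlations should be close to Sigma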

Can you think of other ways to generate this matrix?

# Properties of covariance matrices

This post lists some properties of covariance matrices that I often forget.

A covariance matrix $\Sigma \in \mathbb{R}^{n \times n}$ is simply a matrix for which there exists some random vector $X \in \mathbb{R}^n$ with $\Sigma_{ij} = \text{Cov}(X_i, X_j)$ for all $i$ and $j$.

Properties:

1. $\Sigma$ is symmetric since $\text{Cov}(X_i, X_j) = \text{Cov}(X_j, X_i)$.
2. $\Sigma$ is positive semi-definite (PSD):
\begin{aligned} u^T \Sigma u &= \sum_{i, j=1}^n u_i \Sigma_{ij}u_j \\ &= \sum_{i, j=1}^n \text{Cov}(u_i X_i, u_j X_j) \\ &= \text{Cov}\left( \sum_i u_i X_i, \sum_j u_j X_j \right) \geq 0. \end{aligned}
3. Because $\Sigma$ is PSD, all of its eigenvalues are non-negative. (If $\Sigma$ were positive definite, then its eigenvalues would be positive.)
4. Since $\Sigma$ is real and symmetric, all of its eigenvalues are real, and there exists a real orthogonal matrix $Q$ such that $D = Q^T \Sigma Q$ is a diagonal matrix. (The entries along the diagonal of $D$ are $\Sigma$'s eigenvalues.)
5. Since $\Sigma$'s eigenvalues are all non-negative, $\Sigma$ has a symmetric square root $\Sigma^{1/2} = QD^{1/2}Q^T$, which satisfies $\Sigma^{1/2} \Sigma^{1/2} = \Sigma$. (If $\Sigma$ is positive definite, then it has an inverse square root as well: $\Sigma^{-1/2} = QD^{-1/2}Q^T$.)

Note that the above apply to sample covariance matrices as well.
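
As a quick numerical illustration of properties 3-5, here is a minimal R sketch (the data are simulated just so we have a covariance matrix to work with):

set.seed(1)
X <- matrix(rnorm(200), ncol = 4)
Sigma <- cov(X)                               # sample covariance matrix

eig <- eigen(Sigma, symmetric = TRUE)
Q <- eig$vectors                              # orthogonal: t(Q) %*% Q = I
eig$values                                    # real and non-negative

Sigma_sqrt <- Q %*% diag(sqrt(eig$values)) %*% t(Q)  # symmetric square root
max(abs(Sigma_sqrt %*% Sigma_sqrt - Sigma))          # should be ~0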

References:

1. Wikipedia. Covariance matrix.
2. Statistical Odds & Ends. Properties of real symmetric matrices.

# What do we mean by isotropic/anisotropic covariance?

Update (2019-10-24): I totally messed up the definitions the first time I posted this! I’ve fixed it now. Many thanks to commenter szcfweiya for pointing this out!

Let $\{ X_t \}_{t \in \mathbb{I}}$ be a stochastic process, where $\mathbb{I}$ is the index set for the stochastic process. (Most often we have $\mathbb{I} = [0, \infty)$ to index time or $\mathbb{I} = \mathbb{R}^d$ to index space). The stochastic process has an associated covariance function $K: \mathbb{I} \times \mathbb{I} \mapsto \mathbb{R}$ such that for any $s, t \in \mathbb{I}$, $\text{Cov}(X_s, X_t) = K(s, t)$.

In general, a covariance function must satisfy two properties:

1. It is symmetric, i.e. $K(x, x') = K(x', x)$ for all $x, x' \in \mathbb{I}$, and
2. It is positive semi-definite, i.e. for all $n \in \mathbb{N}$, $x_1, x_2, \dots, x_n \in \mathbb{I}$, $a_1, \dots, a_n \in \mathbb{R}$, \begin{aligned} \sum_{i=1}^n \sum_{j=1}^n a_i K(x_i, x_j) a_j \geq 0 \end{aligned}.

A covariance is isotropic if $K(x, x')$ depends only on the distance $\| x - x' \|$. A covariance is said to be anisotropic if it is not isotropic. That is, $K(x, x')$ cannot be written as a function of the distance $\| x - x' \|$ alone: it may depend on the direction of $x - x'$ or on the locations $x$ and $x'$ themselves.

Clearly the class of isotropic covariances is much smaller than that of anisotropic covariances. To model anisotropic covariances, one usually has to make an assumption on how $K(x, x')$ depends on $x$ and $x'$.

In my little googling around, anisotropic covariance modeling seems to be popular in geostatistics and more broadly, spatial statistics. One popular example of anisotropic covariance is called geometric anisotropy. In that setting, the index set of the stochastic process is $\mathbb{R}^n$ (typically with $n = 1$ or $n = 2$). Isotropic covariance in this setting would have the form $K(x, x') = \rho (d (x, x'))$, where $\rho$ is some function and $d$ is the Euclidean metric. (See this earlier post for some examples. Not all kernels there are isotropic but it should be obvious which are.) Geometric anisotropy refers to a covariance of the form $K(x, x') = \rho (d' (x, x'))$, where $d'$ is some other distance metric. Some examples (from Reference 1) are the Mahalanobis distance and the Minkowski distance.

A special case of using the Mahalanobis distance is where $d'(x, x') = \sqrt{(x-x')^T D(x-x')}$ for some diagonal matrix $D$. This corresponds to giving each axis a different scale before computing the Euclidean distance.
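
To make this concrete, here is a minimal R sketch of a geometrically anisotropic squared exponential kernel with a diagonal scaling matrix (the kernel choice, function name and parameter values are all just illustrative assumptions):

# squared exponential kernel on the scaled distance sqrt((x - x')^T D (x - x')),
# where D = diag(d_scales); d_scales = c(1, 1, ...) recovers the isotropic case
aniso_se_kernel <- function(x1, x2, d_scales, sigma = 1, l = 1) {
  dist2 <- sum(d_scales * (x1 - x2)^2)
  sigma^2 * exp(-dist2 / (2 * l^2))
}

# here the second coordinate is "stretched" by a factor of 4 before measuring distance
aniso_se_kernel(c(0, 0), c(1, 1), d_scales = c(1, 4))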

References:

1. Haskard, K. A. (2007) An anisotropic Matérn spatial covariance model: REML estimation and properties.

# Sampling paths from a Gaussian process

Gaussian processes are a widely employed statistical tool because of their flexibility and computational tractability. (For instance, one recent area where Gaussian processes are used is in machine learning for hyperparameter optimization.)

A stochastic process $\{ X_t \}_{t \in \mathbb{I}}$ is a Gaussian process if (and only if) any finite subcollection of random variables $(X_{t_1}, \dots, X_{t_n})$ has a multivariate Gaussian distribution. Here, $\mathbb{I}$ is the index set for the Gaussian process; most often we have $\mathbb{I} = [0, \infty)$ (to index time) or $\mathbb{I} = \mathbb{R}^d$ (to index space).

The stochastic nature of Gaussian processes also allows them to be thought of as distributions over functions. One draw from a Gaussian process corresponds to choosing a function $f: \mathbb{I} \mapsto \mathbb{R}$ according to some probability distribution over these functions.

Gaussian processes are defined by their mean and covariance functions. The covariance (or kernel) function $K: \mathbb{I} \times \mathbb{I} \mapsto \mathbb{R}$ is what characterizes the shapes of the functions which are drawn from the Gaussian process. In this post, we will demonstrate how the choice of covariance function affects the shape of functions it produces. For simplicity, we will assume $\mathbb{I} = \mathbb{R}$.

(Click on this link to see all code for this post in one script. For more technical details on the covariance functions, see this previous post.)

Overall set-up

Let’s say we have a zero-centered Gaussian process denoted by $GP(m(\cdot), K(\cdot, \cdot))$, and that $f$ is a function drawn from this Gaussian process. For a vector $(x_1, \dots, x_n)$, the function values $(f(x_1), \dots, f(x_n))$ must have a multivariate Gaussian distribution with mean $(m(x_1), \dots, m(x_n))$ and covariance matrix $\Sigma$ with entries $\Sigma_{ij} = K(x_i, x_j)$. We make use of this property to draw this function: we select a fine grid of x-coordinates, use mvrnorm() from the MASS package to draw the function values at these points, then connect them with straight lines.

Assume that we have already written an R function kernel_fn for the kernel. The first function below generates a covariance matrix from this kernel, while the second takes N draws from this kernel (using the first function as a subroutine):

library(MASS)

# generate covariance matrix for points in x using given kernel function
cov_matrix <- function(x, kernel_fn, ...) {
  outer(x, x, function(a, b) kernel_fn(a, b, ...))
}

# given x coordinates, take N draws from kernel function at those points
draw_samples <- function(x, N, seed = 1, kernel_fn, ...) {
  Y <- matrix(NA, nrow = length(x), ncol = N)
  set.seed(seed)
  for (n in 1:N) {
    K <- cov_matrix(x, kernel_fn, ...)
    Y[, n] <- mvrnorm(1, mu = rep(0, times = length(x)), Sigma = K)
  }
  Y
}


The ... argument for the draw_samples() function allows us to pass arguments into the kernel function kernel_fn.

We will use the following parameters for the rest of the post:

x <- seq(0, 2, length.out = 201)  # x-coordinates
N <- 3  # no. of draws
col_list <- c("red", "blue", "black")  # for line colors


Squared exponential (SE) kernel

The squared exponential (SE) kernel, also known as the radial basis function (RBF) kernel or the Gaussian kernel, has the form

\begin{aligned} K(x, x') = \sigma^2 \exp \left[ -\frac{\| x - x' \|^2}{2l^2} \right], \end{aligned}
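
Here is a minimal sketch of this kernel in R, with sample paths drawn using the helper functions defined above (the parameter values and the base-graphics plotting choices are arbitrary):

se_kernel <- function(x, y, sigma = 1, l = 1) {
  sigma^2 * exp(-(x - y)^2 / (2 * l^2))
}

Y <- draw_samples(x, N, kernel_fn = se_kernel, l = 0.2)

plot(range(x), range(Y), xlab = "x", ylab = "y", type = "n",
     main = "SE kernel, l = 0.2")
for (n in 1:N) {
  lines(x, Y[, n], col = col_list[n], lwd = 1.5)
}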

# Toeplitz covariance structure

When someone says that their model has Toeplitz covariance (or correlation) structure, what do they mean?

In linear algebra, a Toeplitz matrix is also known as a diagonal-constant matrix: i.e. “each descending diagonal from left to right is constant”:

$A = \begin{bmatrix} a_0 & a_{-1} & \dots & \dots & a_{-(n-1)} \\ a_1 & a_0 & a_{-1} &\ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & a_1 & a_0 & a_{-1} \\ a_{n-1} & \dots & \dots & a_1 & a_0 \end{bmatrix}$.

A model is said to have Toeplitz covariance (correlation resp.) structure if the covariance  (correlation resp.) matrix is a Toeplitz matrix. Here are 2 places where we see such structures pop up:

• We have time series data at equally-spaced times $1, 2, \dots$, denoted by $X_1, X_2, \dots$. This model has Toeplitz covariance (correlation resp.) structure if for any $n$, the covariance (correlation resp.) matrix of $X_1, X_2, \dots, X_n$ is Toeplitz. The AR(1) model, commonly used in econometrics, is an example of this, since it has $\text{Cor}(X_i, X_j) = \rho^{|i-j|}$ for some $\rho$ (see the code sketch after this list).
• We have a continuous-time stochastic process $\{ X_t \}$ which is weakly stationary, i.e. for any 2 times $t$ and $s$, $\text{Cov}(X_t, X_s)$ depends only on $t - s$. In this setting, for any equally-spaced times $t_1, \dots, t_n$, the covariance matrix of $X_{t_1}, \dots, X_{t_n}$ will be Toeplitz.
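
As a minimal sketch, base R's stats::toeplitz() builds a symmetric Toeplitz matrix from its first row, so the AR(1) correlation matrix from the first bullet can be generated directly (the values of n and rho are arbitrary):

n <- 4
rho <- 0.9
toeplitz(rho^(0:(n - 1)))  # same matrix as ar1_cor(4, 0.9) from the earlier post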

Why work with Toeplitz covariance structure? Other than the fact that they arise naturally in certain situations (like the two above), operations with Toeplitz matrices are fast: for example, an $n \times n$ Toeplitz linear system can typically be solved in $O(n^2)$ time (e.g. via the Levinson recursion), compared with the $O(n^3)$ needed for a general system.

References:

1. Toeplitz matrix, Wikipedia.
2. Guidelines for Selecting the Covariance Structure in Mixed Model Analysis, Chuck Kincaid.
3. Toeplitz Covariance Matrix Estimation for Adaptive Beamforming and Ultrasound Imaging, Michael Pettersson.