Strategies for proving that an estimator is consistent

When starting out in theoretical statistics, it’s often difficult to know even how to get started on the problem at hand. It helps to have a few general strategies in mind. In this post, I list some strategies you can use to prove that an estimator \hat{\theta}_n is consistent for a parameter \theta, i.e. that \hat{\theta}_n \stackrel{P}{\rightarrow} \theta.

Note:

  1. This list is not exhaustive. If you have a strategy for proving consistency that is not in the list, feel free to share it!
  2. The arguments here are not necessarily rigorous, so you need to check the conditions under which they apply.

Now for the strategies:

  1. Use the (weak) law of large numbers. This is especially applicable when the estimator is a sample mean of i.i.d. random variables (see the worked example after this list).
  2. Use Chebyshev’s inequality: if \hat\theta_n is unbiased, then for any \epsilon > 0, \mathbb{P} \{ |\hat\theta_n - \theta | > \epsilon \} \leq \dfrac{Var(\hat\theta_n)}{\epsilon^2}. Thus, the estimator will be consistent if Var(\hat\theta_n) \rightarrow 0.
  3. In strategy 2, we actually only need \hat\theta_n to be asymptotically unbiased (i.e. \mathbb{E}[\hat\theta_n] \rightarrow \theta). This is because Markov’s inequality bounds \mathbb{P} \{ |\hat\theta_n - \theta | > \epsilon \} by \dfrac{\mathbb{E}[(\hat\theta_n - \theta)^2]}{\epsilon^2}, and the mean squared error decomposes as Var(\hat\theta_n) + (\mathbb{E}[\hat\theta_n] - \theta)^2, which tends to 0 when both the variance and the bias do.
  4. If your estimator is the maximum likelihood estimator (MLE), then it is consistent under suitable regularity conditions; for simple models, a direct argument is often available (see the sketch after this list).
  5. If your estimator is an M-estimator or a Z-estimator, try to use the argmax consistency theorem.
  6. Try to use the continuous mapping theorem: if X_n \stackrel{P}{\rightarrow} X, then g(X_n) \stackrel{P}{\rightarrow} g(X) for any continuous function g. Combined with the law of large numbers, this often gives quick consistency proofs (as in the MLE sketch below).
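
To make strategies 1–3 concrete, here is a minimal worked example (a standard textbook argument, assuming i.i.d. data with finite variance). Let X_1, \dots, X_n be i.i.d. with mean \mu and variance \sigma^2 < \infty, and take \hat\theta_n = \bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i. Then \mathbb{E}[\bar{X}_n] = \mu and Var(\bar{X}_n) = \frac{\sigma^2}{n}, so by Chebyshev’s inequality, for any \epsilon > 0,

\mathbb{P} \{ |\bar{X}_n - \mu| > \epsilon \} \leq \dfrac{Var(\bar{X}_n)}{\epsilon^2} = \dfrac{\sigma^2}{n \epsilon^2} \rightarrow 0,

and hence \bar{X}_n \stackrel{P}{\rightarrow} \mu. (The weak law of large numbers gives the same conclusion without even assuming a finite variance.)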
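
Strategies 4 and 6 can also work together. As a sketch for one simple model (my example, not a general recipe): suppose X_1, \dots, X_n are i.i.d. Exponential(\lambda) with rate \lambda > 0, so the MLE is \hat\lambda_n = \frac{1}{\bar{X}_n}. By the weak law of large numbers, \bar{X}_n \stackrel{P}{\rightarrow} \mathbb{E}[X_1] = \frac{1}{\lambda}, and since g(x) = \frac{1}{x} is continuous at \frac{1}{\lambda}, the continuous mapping theorem gives \hat\lambda_n = g(\bar{X}_n) \stackrel{P}{\rightarrow} \lambda. No appeal to the general MLE regularity conditions is needed here.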
