When starting out in theoretical statistics, it's often difficult to know how to even get started on the problem at hand. It helps to have a few general strategies in mind. In this post, I list some strategies you can use to prove that an estimator $\hat\theta_n$ is consistent for a parameter $\theta$, i.e. that $\hat\theta_n \stackrel{p}{\rightarrow} \theta$ as $n \rightarrow \infty$.
- This list is not exhaustive: if you have a strategy for proving consistency that is not on the list, feel free to share it!
- The arguments here are not necessarily rigorous, so you need to check the conditions under which they apply.
Now for the strategies:
- Use the (weak) law of large numbers: if $X_1, X_2, \dots$ are i.i.d. with finite mean $\mu$, then $\frac{1}{n}\sum_{i=1}^n X_i \stackrel{p}{\rightarrow} \mu$. This is especially applicable when your estimator is an i.i.d. sum or average.
- Use Chebyshev's inequality: if $\hat\theta_n$ is unbiased, then for any $\varepsilon > 0$, $\mathbb{P}\left( |\hat\theta_n - \theta| \geq \varepsilon \right) \leq \frac{\text{Var}(\hat\theta_n)}{\varepsilon^2}$. Thus, the estimator will be consistent if $\text{Var}(\hat\theta_n) \rightarrow 0$ as $n \rightarrow \infty$.
- Actually, in strategy 2 we only need $\hat\theta_n$ to be asymptotically unbiased (i.e. $\mathbb{E}[\hat\theta_n] \rightarrow \theta$ as $n \rightarrow \infty$) and the result would still hold.
- If your estimator is the maximum likelihood estimator (MLE), then it is consistent under some regularity conditions (see here).
- If your estimator is an M-estimator or a Z-estimator, try to use the argmax consistency theorem (e.g. see slides 7 and 9 here).
- Try to use the continuous mapping theorem: if $\hat\theta_n \stackrel{p}{\rightarrow} \theta$, then $g(\hat\theta_n) \stackrel{p}{\rightarrow} g(\theta)$ for any continuous function $g$.
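To make the law-of-large-numbers strategy concrete, here is a small simulation sketch (the choice of an Exponential(1) population is purely illustrative): the sample mean of i.i.d. draws should settle near the true mean $\mu = 1$ as $n$ grows.

```python
import random

def sample_mean(n, seed=0):
    """Mean of n i.i.d. Exponential(1) draws; the true mean is 1."""
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0) for _ in range(n)) / n

# By the weak law of large numbers, the sample mean converges in
# probability to E[X] = 1 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, sample_mean(n))
```

Of course, a simulation is not a proof, but it is a useful sanity check that you are trying to prove a true statement.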
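The Chebyshev strategy can also be checked numerically. In this sketch (Uniform(0, 1) is an arbitrary illustrative choice), the sample mean has variance $\frac{1/12}{n}$, so the Chebyshev bound $\frac{1/12}{n \varepsilon^2}$ tends to 0, and so does the empirical tail frequency:

```python
import random

def tail_freq(n, eps, reps=2000, seed=1):
    """Monte Carlo estimate of P(|xbar_n - 0.5| >= eps), where
    xbar_n is the mean of n i.i.d. Uniform(0, 1) draws."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xbar = sum(rng.random() for _ in range(n)) / n
        if abs(xbar - 0.5) >= eps:
            hits += 1
    return hits / reps

# Chebyshev: P(|xbar - 0.5| >= eps) <= Var(xbar) / eps^2
#                                    = (1/12) / (n * eps^2) -> 0,
# so the sample mean is consistent for the true mean 0.5.
eps = 0.05
for n in (10, 100, 1000):
    bound = 1 / (12 * n * eps**2)
    print(n, tail_freq(n, eps), min(bound, 1.0))
```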
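For the MLE strategy, a model where the regularity conditions hold is the exponential distribution: the MLE of the rate from an i.i.d. Exponential($\lambda$) sample is $\hat\lambda_n = 1/\bar{x}_n$. A quick illustrative simulation (the true rate 2.0 is an arbitrary choice):

```python
import random

def exp_rate_mle(n, rate=2.0, seed=2):
    """MLE 1/xbar of the rate, from n i.i.d. Exponential(rate) draws."""
    rng = random.Random(seed)
    xbar = sum(rng.expovariate(rate) for _ in range(n)) / n
    return 1.0 / xbar

# The exponential family satisfies the usual regularity conditions,
# so the MLE is consistent for the true rate (here 2.0).
for n in (100, 10_000, 1_000_000):
    print(n, exp_rate_mle(n))
```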
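A classic example for the M-estimation strategy is the sample median, which minimizes $\theta \mapsto \sum_i |x_i - \theta|$; argmin consistency gives convergence to the population median whenever it is unique. A simulation sketch (standard normal data, population median 0, chosen for illustration):

```python
import random
import statistics

def sample_median(n, seed=3):
    """Median of n i.i.d. N(0, 1) draws; the population median is 0."""
    rng = random.Random(seed)
    return statistics.median(rng.gauss(0.0, 1.0) for _ in range(n))

# The median is an M-estimator: it minimizes sum_i |x_i - theta|.
# Argmin consistency gives sample_median ->p population median = 0.
for n in (101, 10_001, 1_000_001):
    print(n, sample_median(n))
```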
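Finally, a sketch of the continuous mapping theorem in action (again with an illustrative Exponential(1) population): since $\bar{x}_n \stackrel{p}{\rightarrow} 1$ and $\exp$ is continuous, $\exp(\bar{x}_n) \stackrel{p}{\rightarrow} e$.

```python
import math
import random

def exp_of_mean(n, seed=4):
    """exp of the mean of n i.i.d. Exponential(1) draws."""
    rng = random.Random(seed)
    xbar = sum(rng.expovariate(1.0) for _ in range(n)) / n
    return math.exp(xbar)

# xbar ->p 1 by the WLLN, and exp is continuous, so the continuous
# mapping theorem gives exp(xbar) ->p exp(1) = e.
for n in (100, 10_000, 1_000_000):
    print(n, exp_of_mean(n))
```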