Strategies for proving convergence in distribution

When starting out in theoretical statistics, it’s often difficult to know how to even get started on the problem at hand. It helps to have a few general strategies in mind. One thing you are often asked to do is to prove that an estimator $\hat{\theta}_n$ (appropriately scaled) converges in distribution, and you are often asked to determine that limiting distribution. In this post, I list some strategies you can use for that.

Note:

1. This list is not exhaustive. If you have a strategy for proving convergence in distribution that is not in the list, feel free to share it!
2. The arguments here are not necessarily rigorous, so you need to check the conditions under which they apply.

Now for the strategies:

1. Use the central limit theorem (CLT), or some version of it. This is especially applicable for i.i.d. sums, but there are versions of the CLT that don’t require identical distribution (and there are some that don’t even require complete independence!).
2. Use the delta method.
3. If your estimator is the maximum likelihood estimator (MLE), then it is asymptotically normal under some regularity conditions (see here).
4. If your estimator is an M-estimator or a Z-estimator, then it is asymptotically normal under some regularity conditions (see here).
5. Work directly with the definition of convergence in distribution: show that the CDFs converge pointwise to the limiting CDF (at all points of continuity for the limiting CDF).
6. Use Lévy’s continuity theorem: If the characteristic functions of the random variable sequence converge pointwise to the characteristic function of the limiting random variable, then the random variable sequence converges in distribution to the limiting random variable. (A similar theorem applies for moment generating functions, see here.)
7. Use the Portmanteau theorem, which gives several equivalent characterizations of convergence in distribution. For instance, it suffices to show that $\mathbb{E}[f(X_n)] \rightarrow \mathbb{E}[f(X)]$ for every bounded continuous function $f$.
8. Slutsky’s theorem will often come in handy. Often the quantity whose convergence in distribution we need to prove is a ratio $A_n / B_n$ with randomness in both the numerator and the denominator. Slutsky’s theorem allows us to deal with the two sources of randomness separately. If we can show something like $A_n / m_n \stackrel{d}{\rightarrow} P$ and $m_n / B_n \stackrel{P}{\rightarrow} c$ for some constant $c$ and deterministic sequence $\{ m_n \}$, then Slutsky’s theorem concludes that $A_n / B_n \stackrel{d}{\rightarrow} cP$. (When $m_n = m$ for all $n$ and $c \neq 0$, showing $m_n / B_n \stackrel{P}{\rightarrow} c$ is the same as showing $B_n \stackrel{P}{\rightarrow} m/c$.)
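To make strategy 8 concrete, here is a minimal simulation sketch of the Slutsky + CLT combination behind the usual t-type statistic. The setup (Exp(1) data and the particular sample sizes) is my own illustrative choice, not something from the list above: for i.i.d. Exp(1) data the mean and standard deviation are both 1, so $A_n = \sqrt{n}(\bar{X}_n - 1) \stackrel{d}{\rightarrow} N(0,1)$ by the CLT, $B_n = $ sample standard deviation $\stackrel{P}{\rightarrow} 1$ by the law of large numbers, and Slutsky gives $A_n / B_n \stackrel{d}{\rightarrow} N(0,1)$.

```python
import numpy as np

# Illustrative sketch of strategy 8 (Slutsky + CLT); the Exp(1) setup
# and sample sizes are assumptions for the demo, not from the post.
rng = np.random.default_rng(0)
n, reps = 1_000, 4_000
x = rng.exponential(scale=1.0, size=(reps, n))

a = np.sqrt(n) * (x.mean(axis=1) - 1.0)  # numerator: CLT gives N(0, 1)
b = x.std(axis=1, ddof=1)                # denominator: consistent for sigma = 1
t = a / b                                # Slutsky: converges in distribution to N(0, 1)

# Empirical quantiles of t should sit near the standard normal
# quantiles (approximately -1.28, 0, 1.28 at the 10%, 50%, 90% levels).
print(np.quantile(t, [0.1, 0.5, 0.9]))
```

Note that the simulation only illustrates the limit; it is not a proof, but it is a cheap sanity check before you invest time in the formal argument.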