*Welch’s t-test* is probably the most commonly used hypothesis test for testing whether two populations have the same mean. Welch’s *t*-test is generally preferred over Student’s two-sample *t*-test: while both assume that the populations of the two groups are normal, Student’s *t*-test assumes that the two populations have the same variance, while Welch’s *t*-test does not make any assumption on the variances.

Assume we have $n_1$ samples from group 1 and $n_2$ samples from group 2. For $i = 1, 2$, let $\bar{X}_i$ and $s_i^2$ denote the sample mean and sample variance of group $i$ respectively. *Welch’s t-statistic* is defined by

$$t = \dfrac{\bar{X}_1 - \bar{X}_2}{\sqrt{s_1^2 / n_1 + s_2^2 / n_2}}.$$

Under the null hypothesis, $t$ is approximately distributed as the *t*-distribution with degrees of freedom

$$\nu \approx \dfrac{\left( s_1^2/n_1 + s_2^2/n_2 \right)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}.$$

The equation above is known as the *Welch-Satterthwaite equation*.
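As a quick numerical check (a sketch assuming NumPy and SciPy are available), we can compute Welch’s t-statistic and the Welch-Satterthwaite degrees of freedom by hand and verify that they reproduce the result of `scipy.stats.ttest_ind` with `equal_var=False`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x1 = rng.normal(loc=0.0, scale=1.0, size=20)  # group 1: smaller variance
x2 = rng.normal(loc=0.5, scale=2.0, size=30)  # group 2: larger variance

n1, n2 = len(x1), len(x2)
v1, v2 = x1.var(ddof=1), x2.var(ddof=1)  # sample variances

# Welch's t-statistic
t = (x1.mean() - x2.mean()) / np.sqrt(v1 / n1 + v2 / n2)

# Welch-Satterthwaite degrees of freedom
df = (v1 / n1 + v2 / n2) ** 2 / (
    (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
)

# Two-sided p-value from the t-distribution with df degrees of freedom
p = 2 * stats.t.sf(abs(t), df)

# scipy's Welch test (equal_var=False) should agree
res = stats.ttest_ind(x1, x2, equal_var=False)
print(t, df, p)
```

Note that the Welch degrees of freedom always lie between $\min(n_1 - 1, n_2 - 1)$ and $n_1 + n_2 - 2$, so the test is never more conservative than a one-sample test on the smaller group.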

**Welch-Satterthwaite equation**

Where does the Welch-Satterthwaite equation come from? The commonly cited reference for this is F. E. Satterthwaite’s 1946 paper (Reference 1). Satterthwaite tackles a more general problem: let $s_1^2, \dots, s_k^2$ be independent sample variances, with $s_i^2$ having $r_i$ degrees of freedom. Consider the combined estimate

$$\hat{s}^2 = a_1 s_1^2 + \dots + a_k s_k^2,$$

where $a_1, \dots, a_k$ are constants. (Back then, this was called a *complex estimate of variance*.) The exact distribution of $\hat{s}^2$ is too difficult to compute (no simple closed form? I’m not sure), so *we approximate it with a chi-squared distribution that has the same variance as $\hat{s}^2$*. The question is,

**what is the degrees of freedom for this approximating chi-squared distribution?** The paper states that the number of degrees of freedom is

$$r_{\hat{s}} = \dfrac{\left( \sum_{i=1}^k a_i \mathbb{E}[s_i^2] \right)^2}{\sum_{i=1}^k (a_i \mathbb{E}[s_i^2])^2 / r_i}.$$

In practice we don’t know the value of the expectations, so we replace them with the observed values:

$$r_{\hat{s}} \approx \dfrac{\left( \sum_{i=1}^k a_i s_i^2 \right)^2}{\sum_{i=1}^k (a_i s_i^2)^2 / r_i}.$$

The degrees of freedom for Welch’s *t*-test is the formula above with $k = 2$, $a_i = 1/n_i$, and $r_i = n_i - 1$.
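To make the specialization concrete, here is a small sketch of the general formula (the function name `satterthwaite_df` is my own), checked against the two-sample Welch formula written out directly:

```python
import numpy as np

def satterthwaite_df(a, s2, r):
    """Effective degrees of freedom for the combined estimate sum_i a_i * s2_i,
    with the expectations E[s_i^2] replaced by the observed sample variances."""
    a, s2, r = map(np.asarray, (a, s2, r))
    return np.sum(a * s2) ** 2 / np.sum((a * s2) ** 2 / r)

# Specializing to Welch's t-test: k = 2, a_i = 1/n_i, r_i = n_i - 1
n1, n2 = 20, 30
v1, v2 = 1.3, 4.1  # sample variances (made-up numbers for illustration)
df = satterthwaite_df([1 / n1, 1 / n2], [v1, v2], [n1 - 1, n2 - 1])

# Matches the Welch-Satterthwaite formula written out directly
df_direct = (v1 / n1 + v2 / n2) ** 2 / (
    (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
)
print(df, df_direct)
```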

**Heuristic argument**

Satterthwaite’s 1946 paper is the commonly cited reference, but that paper actually contains just the formulas and not the heuristic argument. For that, we have to go to his 1941 paper (Reference 2).

Consider first the simple case where we have just two independent sample variances $s_1^2$ and $s_2^2$, with $r_1$ and $r_2$ degrees of freedom respectively. Let’s compute the degrees of freedom for the chi-squared distribution that approximates $s_1^2 + s_2^2$.

For $i = 1, 2$, let $\theta_i = \mathbb{E}[s_i^2]$. From our set-up, we have $\dfrac{r_i s_i^2}{\theta_i} \sim \chi_{r_i}^2$, thus

$$\text{Var}(s_i^2) = \left( \dfrac{\theta_i}{r_i} \right)^2 \cdot 2 r_i = \dfrac{2\theta_i^2}{r_i}, \qquad \text{Var}(s_1^2 + s_2^2) = \dfrac{2\theta_1^2}{r_1} + \dfrac{2\theta_2^2}{r_2},$$

where the last equality holds because $s_1^2$ and $s_2^2$ are independent. On the other hand, to approximate $s_1^2 + s_2^2$ by a chi-squared distribution with $r$ degrees of freedom means that $\dfrac{r (s_1^2 + s_2^2)}{\theta_1 + \theta_2} \sim \chi_r^2$ approximately. Under this approximation,

$$\text{Var}(s_1^2 + s_2^2) = \left( \dfrac{\theta_1 + \theta_2}{r} \right)^2 \cdot 2r = \dfrac{2(\theta_1 + \theta_2)^2}{r}.$$

*For this approximation to be good, we want the variance obtained under the chi-squared approximation to be the same as the true variance.* Hence, we have

$$\dfrac{2(\theta_1 + \theta_2)^2}{r} = \dfrac{2\theta_1^2}{r_1} + \dfrac{2\theta_2^2}{r_2}, \quad \text{i.e.} \quad r = \dfrac{(\theta_1 + \theta_2)^2}{\theta_1^2 / r_1 + \theta_2^2 / r_2}.$$
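The variance-matching step can be sanity-checked by simulation (a sketch; the values of $\theta_i$ and $r_i$ are arbitrary): draw $s_i^2$ so that $r_i s_i^2 / \theta_i \sim \chi_{r_i}^2$, then compare the empirical variance of $s_1^2 + s_2^2$ against the true variance and the variance implied by the chi-squared approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
r1, r2 = 5, 12
theta1, theta2 = 2.0, 3.0

# Draw s_i^2 = (theta_i / r_i) * chi2(r_i), so that r_i s_i^2 / theta_i ~ chi2(r_i)
s1 = theta1 / r1 * rng.chisquare(r1, size=1_000_000)
s2 = theta2 / r2 * rng.chisquare(r2, size=1_000_000)

# True variance of the sum (by independence)
true_var = 2 * theta1**2 / r1 + 2 * theta2**2 / r2

# Degrees of freedom from the variance-matching argument
r = (theta1 + theta2) ** 2 / (theta1**2 / r1 + theta2**2 / r2)

# Variance implied by the chi-squared approximation with r degrees of freedom
approx_var = 2 * (theta1 + theta2) ** 2 / r

print(np.var(s1 + s2), true_var, approx_var)
```

By construction `approx_var` equals `true_var` exactly; the simulation only confirms that the variance formula for the sum is right.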

The argument above is perfectly general: we can rerun it to get the effective degrees of freedom for $\hat{s}^2 = a_1 s_1^2 + \dots + a_k s_k^2$.

References:

- Satterthwaite, F. E. (1946). An approximate distribution of estimates of variance components. *Biometrics Bulletin*, 2(6), 110-114.
- Satterthwaite, F. E. (1941). Synthesis of variance. *Psychometrika*, 6(5), 309-316.