# Inverse transform sampling for truncated distributions

Assume that you have a random number generator which gives you $U_1, U_2, \dots \stackrel{iid}{\sim} U[0,1]$ (i.e. uniform distribution on the interval $[0,1]$) and a cumulative distribution function $F$. Assume for simplicity that $F$ is strictly increasing, so that $F^{-1}$ is well-defined as a functional inverse. Inverse transform sampling allows us to use this set-up to draw samples from $F$: If $U \sim U[0,1]$ then $F^{-1}(U) \sim F$. Thus, we can take $F^{-1}(U_1),F^{-1}(U_2), \dots$ to be our sample.
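As an illustration (my choice of distribution, not anything specific to the argument above), here is a short Python sketch using the exponential distribution, whose inverse CDF has a closed form: $F(x) = 1 - e^{-\lambda x}$, so $F^{-1}(u) = -\log(1-u)/\lambda$.

```python
import math
import random

def sample_exponential(rate, n, seed=0):
    """Draw n samples from Exp(rate) via inverse transform sampling.

    The exponential CDF is F(x) = 1 - exp(-rate * x), so the inverse
    CDF is F^{-1}(u) = -log(1 - u) / rate. Feeding it U[0,1] draws
    gives Exp(rate) draws.
    """
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

samples = sample_exponential(rate=2.0, n=100_000)
print(sum(samples) / len(samples))  # close to the true mean 1/rate = 0.5
```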

(If $F$ is not strictly increasing, we can take $F^{-1}(u) = \inf \{ x | F(x) \geq u\}$.)
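The generalized inverse matters in practice for discrete distributions, where $F$ is a step function. A small sketch (the three-point distribution is a made-up example): `bisect_left` on the cumulative probabilities computes exactly $\inf \{ x \mid F(x) \geq u \}$.

```python
import bisect
import random

# A discrete distribution on three points, with cumulative probabilities.
xs = [0, 1, 2]
probs = [0.2, 0.5, 0.3]
cum = [0.2, 0.7, 1.0]

def quantile(u):
    """Generalized inverse F^{-1}(u) = inf{x : F(x) >= u}.

    bisect_left returns the first index i with cum[i] >= u, which is
    exactly the smallest support point whose CDF value reaches u.
    """
    return xs[bisect.bisect_left(cum, u)]

rng = random.Random(0)
draws = [quantile(rng.random()) for _ in range(100_000)]
print(draws.count(1) / len(draws))  # close to probs[1] = 0.5
```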

Did you know that we can modify this slightly to draw samples $X \sim F$, conditional on $X \geq t$, where $t \in \mathbb{R}$? Instead of taking $F^{-1}(U)$ as the sample, take $F^{-1}[F(t) + U (1 - F(t))]$ instead.
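The recipe translates directly into code. A sketch, again using the exponential distribution purely as an example (by memorylessness, the truncated mean should be $t + 1/\lambda$, which gives a convenient sanity check):

```python
import math
import random

def sample_truncated(F, F_inv, t, n, seed=0):
    """Sample X ~ F conditional on X >= t.

    Instead of F_inv(U), take F_inv(F(t) + U * (1 - F(t))): the uniform
    draw is rescaled into the interval [F(t), 1], so the result always
    lands at or above t.
    """
    rng = random.Random(seed)
    Ft = F(t)
    return [F_inv(Ft + rng.random() * (1.0 - Ft)) for _ in range(n)]

rate = 2.0
F = lambda x: 1.0 - math.exp(-rate * x)
F_inv = lambda u: -math.log(1.0 - u) / rate

xs = sample_truncated(F, F_inv, t=1.0, n=100_000)
print(min(xs))                 # never below the truncation point t = 1.0
print(sum(xs) / len(xs))       # close to t + 1/rate = 1.5 by memorylessness
```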

Here is the proof: Let $G$ be the CDF of $X \sim F$ given $X \geq t$. Then $G(x) = 0$ if $x < t$, and for $x \geq t$, $G(x) = \dfrac{F(x) - F(t)}{1 - F(t)}$.

Now, for any $x$,

$$\begin{aligned} \mathbb{P}(F^{-1}[F(t) + U (1 - F(t))] \leq x) &= \mathbb{P}(F(t) + U (1 - F(t)) \leq F(x)) \\ &= \mathbb{P}\left( U \leq \frac{F(x) - F(t)}{1 - F(t)} \right) \\ &= \mathbb{P}\left( U \leq G(x) \right) \\ &= \mathbb{P}(G^{-1}(U) \leq x), \end{aligned}$$

which means that $F^{-1}[F(t) + U (1 - F(t))]$ has the same distribution as $G^{-1}(U)$, i.e. $G$ (by an application of inverse transform sampling).
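The identity in the proof can be checked empirically: the empirical CDF of the truncated draws should match $G(x) = (F(x) - F(t))/(1 - F(t))$ at any test point $x \geq t$. A sketch, with the exponential CDF and a test point of my own choosing:

```python
import math
import random

# Empirical check: draws of F_inv(F(t) + U * (1 - F(t))) should have
# CDF G(x) = (F(x) - F(t)) / (1 - F(t)) for x >= t.
rate, t, n = 2.0, 1.0, 200_000
rng = random.Random(1)
F = lambda x: 1.0 - math.exp(-rate * x)
F_inv = lambda u: -math.log(1.0 - u) / rate

Ft = F(t)
draws = [F_inv(Ft + rng.random() * (1.0 - Ft)) for _ in range(n)]

x = 1.5  # arbitrary test point with x >= t
empirical = sum(d <= x for d in draws) / n
theoretical = (F(x) - Ft) / (1.0 - Ft)
print(abs(empirical - theoretical))  # small: the two CDFs agree
```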

## 4 thoughts on “Inverse transform sampling for truncated distributions”

1. Hi! I’m also a second-year PhD student at Stanford, working mainly on physics and information theory. While the proof of inverse transform sampling is nice, I’m not sure I’ve ever had to draw samples from a CDF conditioned on the relevant r.v. being greater than some value – could you give an example of where this is useful?