# Lagrange dual, weak duality and strong duality

Consider an optimization problem in standard form: \begin{aligned} \text{minimize} \quad& f_0(x) \\ \text{subject to} \quad& f_i(x) \leq 0, \quad i = 1, \dots, m, \\ & h_i(x) = 0, \quad i = 1, \dots, p, \end{aligned}

with the variable $x \in \mathbb{R}^n$. Let $\mathcal{D}$ be the domain for $x$, i.e. the intersection of the domains of the $f_i$'s and the $h_i$'s. Let $p^\star$ denote the optimal value of the problem.

The Lagrange dual function is the function $g: \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$ defined as the minimum value of the Lagrangian over $x$: \begin{aligned} g(\lambda, \nu) &= \inf_{x \in \mathcal{D}} L(x, \lambda, \nu) \\ &= \inf_{x \in \mathcal{D}} \left\{ f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \right\}. \end{aligned}
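As a concrete illustration (a toy example of my own, not from the text above): consider minimizing $f_0(x) = x^2$ subject to $f_1(x) = 1 - x \leq 0$. The Lagrangian is $L(x, \lambda) = x^2 + \lambda(1 - x)$, which is minimized over $x$ at $x = \lambda/2$, giving the closed form $g(\lambda) = \lambda - \lambda^2/4$. The sketch below checks this against a brute-force grid minimization of the Lagrangian:

```python
# Toy 1-D problem (illustrative): minimize x^2 subject to 1 - x <= 0.
# Lagrangian: L(x, lambda) = x^2 + lambda * (1 - x).

def dual_function(lam, lo=-5.0, hi=5.0, steps=10000):
    """Approximate g(lambda) = inf_x L(x, lambda) by grid search over [lo, hi]."""
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return min(x * x + lam * (1.0 - x) for x in xs)

def dual_closed_form(lam):
    """Exact value: the minimizer x* = lambda/2 gives g(lambda) = lambda - lambda^2/4."""
    return lam - lam * lam / 4.0
```

For instance, `dual_function(2.0)` and `dual_closed_form(2.0)` both evaluate to $1$, since the Lagrangian $x^2 + 2(1-x) = (x-1)^2 + 1$ is minimized at $x = 1$.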

The Lagrange dual problem is the optimization problem \begin{aligned} \text{maximize} \quad& g(\lambda, \nu) \\ \text{subject to} \quad& \lambda \geq 0, \end{aligned}

where the inequality above is understood to be element-wise. Let $d^\star$ denote the optimal value of this problem.

The Lagrange dual problem is always a convex optimization problem, because the objective being maximized is concave (see this previous post for a proof) and the constraint is convex. One reason the dual problem is of critical importance is that it provides a lower bound on $p^\star$ (it is the best lower bound that can be obtained from the Lagrange dual function) and in some cases, gives the value of $p^\star$.
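To make the lower-bound property concrete, here is a hedged sketch on a toy problem of my own (not from the text): minimize $x^2$ subject to $1 - x \leq 0$, whose optimal value is $p^\star = 1$, attained at $x = 1$. The dual function works out by hand to $g(\lambda) = \lambda - \lambda^2/4$, and maximizing it over $\lambda \geq 0$ yields the best lower bound, which in this case equals $p^\star$:

```python
# Toy problem (illustrative): minimize x^2 s.t. 1 - x <= 0, so p* = 1 at x = 1.
# Dual function, derived by hand: g(lambda) = lambda - lambda^2/4.

def g(lam):
    return lam - lam * lam / 4.0

# Solve the dual problem: maximize g(lambda) over lambda >= 0 by grid search.
lambdas = [i * 0.001 for i in range(10001)]   # grid over [0, 10]
d_star = max(g(lam) for lam in lambdas)       # best lower bound on p*

p_star = 1.0
# Each g(lambda) with lambda >= 0 is a lower bound on p*; here the best one
# (attained at lambda = 2) actually equals p*.
```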

## Weak duality

Weak duality refers to the fact that we always have $d^\star \leq p^\star.$

This inequality holds all the time, even when the original problem is not convex. The proof is straightforward. If $\tilde{x}$ is feasible for the original problem, then for any $\lambda \geq 0$ and any $\nu$ we have \begin{aligned} f_0(\tilde{x}) &\geq f_0(\tilde{x}) + \sum_{i=1}^m \lambda_i f_i(\tilde{x}) + \sum_{i=1}^p \nu_i h_i(\tilde{x}) \\ &\geq \inf_{x \in \mathcal{D}} \left\{ f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \right\} \\ &= g(\lambda, \nu). \end{aligned}

Since the above holds for any $\lambda \geq 0$ and any $\nu$, maximizing over these variables gives us $f_0(\tilde{x}) \geq d^\star$. Since this holds for all feasible $\tilde{x}$, minimizing over all feasible $\tilde{x}$ gives us $p^\star \geq d^\star$.
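The two inequalities in the proof can be spot-checked numerically. Below is a sketch on a toy problem of my own (minimize $x^2$ subject to $f_1(x) = 1 - x \leq 0$): for a feasible point $\tilde{x}$ and several values of $\lambda \geq 0$, the chain $f_0(\tilde{x}) \geq L(\tilde{x}, \lambda) \geq g(\lambda)$ always holds:

```python
# Illustrative check of the weak-duality chain on a toy problem:
# minimize x^2 subject to f1(x) = 1 - x <= 0.

def f0(x): return x * x
def f1(x): return 1.0 - x

def lagrangian(x, lam):
    return f0(x) + lam * f1(x)

def g(lam, lo=-5.0, hi=5.0, steps=10000):
    """Grid approximation of g(lambda) = inf_x L(x, lambda)."""
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return min(lagrangian(x, lam) for x in xs)

x_tilde = 1.5  # feasible, since f1(1.5) = -0.5 <= 0
for lam in [0.0, 0.5, 2.0, 7.0]:
    # Step 1: lam * f1(x_tilde) <= 0, so f0(x_tilde) >= L(x_tilde, lam).
    # Step 2: the infimum over x can only be smaller, so L(x_tilde, lam) >= g(lam).
    assert f0(x_tilde) >= lagrangian(x_tilde, lam) >= g(lam)
```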

The difference $p^\star - d^\star$ is known as the optimal duality gap.

## Strong duality

Strong duality refers to the property that $d^\star = p^\star$. This does not always hold, but it can hold for important classes of problems. Conditions which guarantee strong duality are known as constraint qualifications.

A convex optimization problem is a problem of the form \begin{aligned} \text{minimize} \quad& f_0(x) \\ \text{subject to} \quad& f_i(x) \leq 0, \quad i = 1, \dots, m, \\ & Ax = b, \end{aligned}

with $f_0, \dots, f_m$ being convex functions. Convex optimization problems don’t always have strong duality, but there are some simple constraint qualifications under which strong duality holds. The most commonly cited one is Slater’s condition, which seems fairly easy to satisfy.

Slater’s condition states that the problem admits a strictly feasible point, that is, a point $x$ in the relative interior of $\mathcal{D}$ (the domain of the problem) such that \begin{aligned} f_i(x) < 0, \quad i = 1, \dots, m, \quad \text{and} \quad Ax = b. \end{aligned}
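As a hedged illustration (again a toy problem of my own, not from the text): for minimizing $x^2$ subject to $1 - x \leq 0$, with no equality constraints, the point $x = 2$ is strictly feasible since $f_1(2) = -1 < 0$. Slater's condition therefore holds, and since the problem is convex, strong duality is guaranteed; we can also confirm $p^\star = d^\star = 1$ directly:

```python
# Toy convex problem (illustrative): minimize x^2 s.t. f1(x) = 1 - x <= 0.
# With no equality constraints, Slater's condition only requires f1(x) < 0
# at some point of the domain.

def f1(x): return 1.0 - x

slater_point = 2.0
assert f1(slater_point) < 0   # strictly feasible, so Slater's condition holds

# Slater's theorem then guarantees strong duality. We verify it directly:
p_star = 1.0                           # optimum of x^2 over x >= 1, at x = 1
d_star = max(lam - lam * lam / 4.0     # g(lambda) = lambda - lambda^2/4, by hand
             for lam in [i * 0.001 for i in range(10001)])
assert abs(p_star - d_star) < 1e-9     # zero duality gap
```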

Slater’s theorem says that strong duality holds if Slater’s condition holds and the problem is convex. A proof of this can be found in Section 5.3.2 of Reference 1; see Section 2.1.3 of the same reference for more details on the concept of relative interior.

References:

1. Boyd, S., and Vandenberghe, L. (2004). Convex Optimization. Chapter 5.