Consider an optimization problem in standard form:

$$\begin{aligned}
\text{minimize} \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \leq 0, \quad i = 1, \dots, m, \\
& h_i(x) = 0, \quad i = 1, \dots, p,
\end{aligned}$$

with the variable $x \in \mathbb{R}^n$. Let $\mathcal{D}$ be the domain for $x$, i.e. the intersection of the domains of the $f_i$'s and the $h_i$'s. Let $p^\star$ denote the optimal value of the problem.
The Lagrange dual function is the function $g : \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$ defined as the minimum value of the Lagrangian over $x \in \mathcal{D}$:

$$g(\lambda, \nu) = \inf_{x \in \mathcal{D}} L(x, \lambda, \nu) = \inf_{x \in \mathcal{D}} \left( f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \right).$$
The Lagrange dual problem is the optimization problem

$$\begin{aligned}
\text{maximize} \quad & g(\lambda, \nu) \\
\text{subject to} \quad & \lambda \succeq 0,
\end{aligned}$$

where the inequality $\lambda \succeq 0$ above is understood to be element-wise. Let $d^\star$ denote the optimal value of this problem.
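As a concrete illustration (a toy problem of my own choosing, not from the derivation above), consider the least-norm problem of minimizing $\|x\|^2$ subject to $Ax = b$. Since the only constraints are equalities, the Lagrangian is $L(x, \nu) = x^T x + \nu^T(Ax - b)$; setting its gradient to zero gives the minimizer $x = -A^T\nu/2$ and hence a closed-form dual function, which we can check numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))   # 3 equality constraints, 6 variables
b = rng.standard_normal(3)

# Primal: minimize ||x||^2 subject to Ax = b.
# The least-norm solution is x* = A^T (A A^T)^{-1} b.
x_star = A.T @ np.linalg.solve(A @ A.T, b)
p_star = x_star @ x_star

# Lagrangian: L(x, nu) = x^T x + nu^T (Ax - b).  Minimizing over x
# gives x = -A^T nu / 2, hence the dual function in closed form:
def g(nu):
    return -0.25 * np.sum((A.T @ nu) ** 2) - b @ nu

# Weak duality: g(nu) <= p* for every nu.
for _ in range(5):
    nu = rng.standard_normal(3)
    assert g(nu) <= p_star + 1e-9

# The dual is maximized at nu* = -2 (A A^T)^{-1} b, where it equals p*
# (strong duality holds for this problem).
nu_star = -2 * np.linalg.solve(A @ A.T, b)
print(abs(g(nu_star) - p_star))   # essentially zero
```

Maximizing $g$ analytically gives $\nu^\star = -2(AA^T)^{-1}b$ with $g(\nu^\star) = b^T(AA^T)^{-1}b = p^\star$, so the bound is tight here.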
The Lagrange dual problem is always a convex optimization problem, because the objective being maximized is concave (see this previous post for a proof) and the constraint $\lambda \succeq 0$ is convex. One reason the dual problem is of critical importance is that it provides a lower bound on $p^\star$ (it is the best lower bound that can be obtained from the Lagrange dual function) and in some cases, gives the value of $p^\star$.
Weak duality refers to the fact that we always have

$$d^\star \leq p^\star.$$
This inequality always holds, even when the original problem is not convex. The proof is straightforward. If $\tilde{x}$ is feasible for the original problem, then for any $\lambda \succeq 0$ and any $\nu$ we have

$$g(\lambda, \nu) \leq L(\tilde{x}, \lambda, \nu) = f_0(\tilde{x}) + \sum_{i=1}^m \lambda_i f_i(\tilde{x}) + \sum_{i=1}^p \nu_i h_i(\tilde{x}) \leq f_0(\tilde{x}),$$

since each $\lambda_i f_i(\tilde{x}) \leq 0$ and each $h_i(\tilde{x}) = 0$. Since the above holds for any $\lambda \succeq 0$ and any $\nu$, maximizing over these variables gives us $d^\star \leq f_0(\tilde{x})$. Since this holds for all feasible $\tilde{x}$, minimizing over all feasible $\tilde{x}$ gives us $d^\star \leq p^\star$.
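The argument above can be checked numerically on a toy scalar problem (my own example, not from the post): minimize $x^2$ subject to $x \geq 1$, written in standard form with $f_1(x) = 1 - x \leq 0$. Here $p^\star = 1$, and the dual function works out to $g(\lambda) = \lambda - \lambda^2/4$:

```python
import numpy as np

# Primal: minimize x^2 subject to 1 - x <= 0, so p* = 1, attained at x = 1.
p_star = 1.0

# Dual function: g(lam) = inf_x [x^2 + lam * (1 - x)].
# The infimum is attained at x = lam/2, giving g(lam) = lam - lam^2/4.
def g(lam):
    return lam - lam**2 / 4

lams = np.linspace(0, 10, 1001)
vals = g(lams)

# Weak duality: every g(lam) with lam >= 0 lower-bounds p*.
assert np.all(vals <= p_star + 1e-12)

# Maximizing over lam recovers p* for this problem (strong duality):
# the maximum is at lam = 2, where g(2) = 1.
print(vals.max())   # ≈ 1.0
```

Every dual feasible $\lambda$ certifies a lower bound on $p^\star$, and here the best certificate, $\lambda = 2$, closes the gap entirely.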
The difference $p^\star - d^\star$ is known as the optimal duality gap.
Strong duality refers to the property that $d^\star = p^\star$. This does not always hold, but it can hold for important classes of problems. Conditions which guarantee strong duality are known as constraint qualifications.
A convex optimization problem is a problem of the form

$$\begin{aligned}
\text{minimize} \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \leq 0, \quad i = 1, \dots, m, \\
& Ax = b,
\end{aligned}$$

with $f_0, \dots, f_m$ being convex functions. Convex optimization problems don't always have strong duality, but there are some simple constraint qualifications under which strong duality holds. The most commonly cited one is Slater's condition, which is fairly easy to satisfy in practice.
Slater's condition states that the problem admits a strictly feasible point, that is, a point $x$ in the relative interior of $\mathcal{D}$ (the domain of the problem) such that

$$f_i(x) < 0, \quad i = 1, \dots, m, \qquad Ax = b.$$
Slater's theorem says that strong duality holds if Slater's condition holds and the problem is convex. A proof of this can be found in Section 5.3.2 of Reference 1; see Section 2.1.3 of the same reference for more details on the concept of relative interior.
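To see that convexity alone is not enough, a classic example discussed in Reference 1 is to minimize $e^{-x}$ subject to $x^2/y \leq 0$ over the domain $y > 0$. The problem is convex, but the only feasible points have $x = 0$, so no strictly feasible point exists and Slater's condition fails; a sketch of the resulting duality gap (the numerical probe of the infimum is my own):

```python
import numpy as np

# Convex primal (classic example): minimize exp(-x) subject to
# x^2 / y <= 0, with domain y > 0.  Feasibility forces x = 0, so p* = 1.
p_star = 1.0

def f0(x):     return np.exp(-x)
def f1(x, y):  return x**2 / y

# Slater's condition fails: x^2 / y < 0 has no solution with y > 0.
# The dual function is g(lam) = inf_{x, y>0} [exp(-x) + lam * x^2 / y] = 0
# for every lam >= 0 (take x = t, y = t^3, t -> infinity), so d* = 0 < p*.
for lam in [0.0, 1.0, 10.0]:
    t = 1e6
    val = f0(t) + lam * f1(t, t**3)   # Lagrangian along the escape path
    print(lam, val)                    # approaches 0 from above
```

So $d^\star = 0$ while $p^\star = 1$: weak duality still holds, but the gap is strictly positive, which is exactly the situation Slater's condition rules out.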
- Boyd, S., and Vandenberghe, L. (2004). Convex Optimization. Chapter 5.