Consider the linear program of the form

where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$. The Lagrangian for this problem is

where the domain of $L$ is restricted to $x \ge 0$ (elements of $x$ must be non-negative).

In this previous post, we introduced the saddle point theorem for convex optimization. Here is that same saddle point theorem, but for LPs:

**Theorem.** If $(x^*, y^*)$ is a saddle point for the Lagrangian $L$, then $x^*$ solves the primal problem. Conversely, if $x^*$ is a finite solution to the primal problem, then there is a $y^*$ such that $(x^*, y^*)$ is a saddle point for $L$.

The problem of finding a saddle point of the LP Lagrangian can be written in either of two (equivalent) formulations:

**Primal-Dual Hybrid Gradient (PDHG) method**

The saddle point formulation of the LP allows us to use primal-dual methods to solve the problem. One such method is the *Primal-Dual Hybrid Gradient (PDHG) method*, introduced by Chambolle & Pock (2011) (Reference 1). PDHG solves the more general problem

where $X$ and $Y$ are finite-dimensional real vector spaces, $K: X \to Y$ is a continuous linear operator with induced norm $\|K\| = \max \{ \|Kx\| : x \in X,\ \|x\| \le 1 \}$, and $G: X \to [0, +\infty]$, $F: Y \to [0, +\infty]$ are proper, convex, lower-semicontinuous functions.

The PDHG algorithm simply loops over the following 3 steps until convergence:

- (Dual step) $y^{k+1} = \mathrm{prox}_{\sigma F^*}(y^k + \sigma K \bar{x}^k)$.
- (Primal step) $x^{k+1} = \mathrm{prox}_{\tau G}(x^k - \tau K^* y^{k+1})$.
- (Over-relaxation) $\bar{x}^{k+1} = x^{k+1} + \theta (x^{k+1} - x^k)$.

In the above, $\sigma, \tau > 0$ and $\theta \in [0, 1]$ are hyperparameters to be chosen by the user. (**Note:** Reference 1 doesn’t have proximal operators but has resolvent operators instead. See this post for why we can rewrite the algorithm with prox operators.) It is also possible to switch the order of the dual and primal steps, which gives rise to the following 2-step loop until convergence (see Reference 2):

- $x^{k+1} = \mathrm{prox}_{\tau G}(x^k - \tau K^* y^k)$.
- $y^{k+1} = \mathrm{prox}_{\sigma F^*}(y^k + \sigma K (2x^{k+1} - x^k))$.
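To make the three steps concrete, here is a minimal sketch of the PDHG loop in code. The toy instance is an assumption for illustration (not from this post): $K = I$, $F(z) = \tfrac{1}{2}\|z - b\|^2$ and $G(x) = \lambda \|x\|_1$, chosen because the solution is known in closed form (soft-thresholding of $b$).

```python
import numpy as np

def pdhg(K, prox_sF_star, prox_tG, x0, y0, tau, sigma, theta=1.0, n_iter=1000):
    """Sketch of the Chambolle-Pock PDHG loop for min_x F(Kx) + G(x)."""
    x, x_bar, y = x0.copy(), x0.copy(), y0.copy()
    for _ in range(n_iter):
        y = prox_sF_star(y + sigma * (K @ x_bar))   # dual step
        x_new = prox_tG(x - tau * (K.T @ y))        # primal step
        x_bar = x_new + theta * (x_new - x)         # over-relaxation
        x = x_new
    return x

# Toy instance (assumed for illustration): K = I, F(z) = 0.5*||z - b||^2,
# G(x) = lam*||x||_1, so the minimizer is soft-thresholding of b.
b = np.array([2.0, -0.3])
lam, tau, sigma = 0.5, 0.5, 0.5   # step sizes satisfy tau * sigma * ||K||^2 < 1
prox_sF_star = lambda v: (v - sigma * b) / (1 + sigma)                   # prox of sigma*F*
prox_tG = lambda v: np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)  # soft threshold
x = pdhg(np.eye(2), prox_sF_star, prox_tG, np.zeros(2), np.zeros(2), tau, sigma)
print(np.round(x, 3))  # close to soft-threshold(b, lam)
```

Here `prox_sF_star` was worked out by hand from $F^*(y) = y^T b + \tfrac{1}{2}\|y\|^2$; in a real implementation each prox would be supplied for the problem at hand.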

**PDHG for LPs**

Recall the LP problem is equivalent to

To map this problem onto the PDHG problem, we need , , , , . With this, the PDHG algorithm becomes

- .
- .

Note that for $f(x) = c^T x + I_C(x)$ with $C$ a closed convex set, we have $\mathrm{prox}_{\tau f}(x) = \Pi_C(x - \tau c)$. (See this post for the proof.) Hence, we can replace the prox operators above with Euclidean projection operators:

- .
- .
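Since the post's displayed mapping is not shown here, the sketch below assumes one standard formulation (an assumption for illustration): the standard-form LP $\min\, c^T x$ s.t. $Ax = b$, $x \ge 0$, whose primal step reduces to a translation followed by projection onto the non-negative orthant, and whose dual step is an unconstrained gradient step.

```python
import numpy as np

def pdhg_lp(c, A, b, tau, sigma, n_iter=5000):
    """PDHG sketch for the (assumed) standard-form LP min c^T x, Ax = b, x >= 0."""
    x = np.zeros(len(c))
    y = np.zeros(len(b))
    for _ in range(n_iter):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)  # translate, project onto x >= 0
        y = y + sigma * (b - A @ (2 * x_new - x))         # dual step with extrapolated x
        x = x_new
    return x, y

# Tiny assumed instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = pdhg_lp(c, A, b, tau=0.4, sigma=0.4)  # tau * sigma * ||A||^2 < 1
print(np.round(x, 3), np.round(c @ x, 3))
```

For this instance the optimum is $x^* = (1, 0)$ with value $1$, which the iterates approach.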

References:

- Chambolle, A., and Pock, T. (2011). A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging.
- Pock, T., and Chambolle, A. (2011). Diagonal preconditioning for first order primal-dual algorithms in convex optimization.

In this post, we derive the value of the proximal operator for the function $f(x) = c^T x + I_C(x)$, where $C \subseteq \mathbb{R}^n$ is a closed convex set and $c \in \mathbb{R}^n$.

where $\Pi_C(x - \lambda c)$ is the Euclidean projection of $x - \lambda c$ onto $C$. That is, the proximal operator is a translation followed by a Euclidean projection.
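A quick numeric sketch of the translation-then-projection description, assuming $C$ is the non-negative orthant (an assumed choice for illustration): the candidate prox point should beat any other feasible point on the objective defining the prox.

```python
import numpy as np

# Assumed setting: f(x) = c^T x + I_C(x) with C the non-negative orthant,
# so prox_{lam*f}(x) = proj_C(x - lam*c): translate by lam*c, then project.
def prox(x, c, lam):
    return np.maximum(x - lam * c, 0.0)

x = np.array([1.0, -0.5, 2.0])
c = np.array([0.3, -1.0, 4.0])
lam = 0.7
z = prox(x, c, lam)

# Objective defining the prox: lam*f(u) + 0.5*||u - x||^2 over u in C.
obj = lambda u: lam * (c @ u) + 0.5 * np.sum((u - x) ** 2)

rng = np.random.default_rng(0)
candidates = np.abs(rng.normal(size=(1000, 3)))  # random feasible points in C
assert all(obj(z) <= obj(u) + 1e-12 for u in candidates)
print(z)
```

The assertion checks (non-rigorously, by sampling) that the formula really minimizes the prox objective over $C$.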


**Theorem (Strong duality of LPs).** Consider a primal LP and its dual. There are 4 possibilities:

1. Both primal and dual have no feasible solutions.

2. The primal is infeasible and the dual is unbounded.

3. The dual is infeasible and the primal is unbounded.

4. Both primal and dual have feasible solutions and their optimal values are equal.

Examples of each possibility can be found in Table 4.2 of Reference 1. A proof of this theorem can be found in multiple places (e.g. Reference 2).
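Possibility 4 can be checked numerically on a tiny hand-constructed pair (the instance and the claimed optimal points below are assumptions for illustration): weak duality means that feasible primal and dual points with equal objective values certify each other's optimality.

```python
import numpy as np

# Assumed tiny pair: primal  min c^T x  s.t. Ax >= b, x >= 0,
# and its dual  max b^T y  s.t. A^T y <= c, y >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x_star = np.array([1.0, 0.0])   # claimed primal optimal point
y_star = np.array([1.0])        # claimed dual optimal point

assert np.all(A @ x_star >= b - 1e-12) and np.all(x_star >= 0)    # primal feasible
assert np.all(A.T @ y_star <= c + 1e-12) and np.all(y_star >= 0)  # dual feasible
print(c @ x_star, b @ y_star)  # equal values certify optimality of both
```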

References:

- Duality in Linear Programming.
- Williamson, D. P., and Kircher, K. (2014). Lecture 8: Strong duality.

where $f_0, f_1, \dots, f_m$ are convex functions from $\mathbb{R}^n$ to $\mathbb{R}$ and $h_1, \dots, h_p$ are affine functions from $\mathbb{R}^n$ to $\mathbb{R}$. The Lagrangian for this problem can be written as

where the domain of the dual variables is $\{(\lambda, \nu) : \lambda \ge 0\}$. (Dual variables associated with inequalities must be non-negative, while there are no restrictions on dual variables associated with equalities.)

A point $(x^*, \lambda^*, \nu^*)$ is called a *saddle point* if for all $x$, $\lambda \ge 0$, and $\nu$ we have

Another way of saying this: if we fix $x = x^*$, then the Lagrangian is maximized at $(\lambda^*, \nu^*)$ (red line), and if we fix $(\lambda, \nu) = (\lambda^*, \nu^*)$, then the Lagrangian is minimized at $x^*$ (blue line).

**Saddle point theorem**

The *saddle point theorem* links optimality in the primal problem with the existence of a saddle point for the Lagrangian:

**Theorem.** If $(x^*, \lambda^*, \nu^*)$ is a saddle point for the Lagrangian $L$, then $x^*$ solves the primal problem. Conversely, if $x^*$ is a solution to the primal problem at which Slater’s condition holds, then there is $(\lambda^*, \nu^*)$ such that $(x^*, \lambda^*, \nu^*)$ is a saddle point for $L$.

A proof of this theorem can be found in both References 1 and 2. The first statement of the theorem provides motivation for an optimization algorithm: try to find a saddle point for the Lagrangian, because once we do, we have also found an optimal point for the primal problem.

**Minimax and maximin**

For any Lagrangian $L$ we have

that is, the minimax is always greater than or equal to the maximin. However, if $L$ has a saddle point, then

i.e. the minimax is less than or equal to the maximin. Thus, if has a saddle point, the minimax and maximin are equal:

In particular, note that the LHS of the above is equal to the maximum value of the dual problem, while the RHS is equal to the minimum value of the primal problem (see e.g. this). Thus, if has a saddle point, then we have strong duality.

(This is closely related to von Neumann’s minimax theorem, which gives a sufficient condition for the minimax to be equal to the maximin.)
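A small numeric illustration of the minimax/maximin relationship, using the toy saddle function $L(x, y) = x^2 - y^2$ (an assumed example, not from the post), which has a saddle point at $(0, 0)$:

```python
import numpy as np

# Evaluate L(x, y) = x^2 - y^2 on a grid; it has a saddle point at (0, 0),
# so the minimax and maximin should (approximately) coincide there.
xs = np.linspace(-1, 1, 201)
ys = np.linspace(-1, 1, 201)
L = xs[:, None] ** 2 - ys[None, :] ** 2  # L[i, j] = L(xs[i], ys[j])

minimax = L.max(axis=1).min()  # min over x of max over y
maximin = L.min(axis=0).max()  # max over y of min over x
print(minimax, maximin)        # both approximately 0
```

The grid check also exhibits the general inequality: `maximin <= minimax` always holds, with equality here because a saddle point exists.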

References:

- Burke, J. V. Convex Optimization, Saddle Point Theory, and Lagrangian Duality.
- Tangkijwanichakul, T. (2020). Saddle Point Theorem.

Consider the primal LP

The Lagrangian is

where and there are no restrictions on . The Lagrange dual function is

The dual problem is to maximize the dual function subject to the constraints on the dual variables, i.e.

or equivalently

**Special case 1: Standard form LP**

Consider the primal LP in standard form:

To match notation with the previous section, we need , , , , where is the identity matrix. With these substitutions, the dual problem becomes

If we write where is the dual variable corresponding to the inequality , then the above becomes

The new variable can be viewed as a slack variable, and so the dual LP is equivalent to

**Special case 2: Standard form LP (maximization)**

Consider the primal LP in standard form but where we want to maximize the objective function instead of minimizing it:

This is equivalent to minimizing the negative of the objective function subject to the same constraints:

Thus by replacing by in the previous section, the dual LP in this case is

or equivalently

(This is the version that you’ll see most often on other websites, and is the one currently on the Wikipedia page.)
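The standard-form duality above can be sanity-checked numerically with an off-the-shelf solver. The tiny instance below is an assumption for illustration; the primal is $\min\, c^T x$ s.t. $Ax = b$, $x \ge 0$, and the dual is $\max\, b^T y$ s.t. $A^T y \le c$ with $y$ free.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed instance: min 2*x1 + 3*x2  s.t.  x1 + x2 = 1, x >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Primal: min c^T x, Ax = b, x >= 0.
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 2)
# Dual: max b^T y, A^T y <= c, y free (linprog minimizes, so negate b).
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 1)

print(primal.fun, -dual.fun)  # equal by strong duality
```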


**Theorem (Moreau Decomposition).** For all $x$,

$$x = \mathrm{prox}_f(x) + \mathrm{prox}_{f^*}(x),$$

where $\mathrm{prox}_f$ is the proximal operator of $f$ and $f^*$ is the convex conjugate of $f$.

Here is the proof: Let . Then,

The second equivalence is a result involving convex conjugates and subdifferentials (see this post for statement and proof) while the 4th equivalence is a property of the proximal operator (see the note at the end of this post).
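A quick numeric check of the theorem for one specific $f$ (an assumed example): take $f$ to be the indicator of the non-negative orthant, so $\mathrm{prox}_f$ is projection onto it, and $f^*$ is the indicator of the non-positive orthant, so $\mathrm{prox}_{f^*}$ projects onto that.

```python
import numpy as np

# f = indicator of {x >= 0}: prox_f(x) = max(x, 0).
# f* = indicator of {x <= 0}: prox_{f*}(x) = min(x, 0).
x = np.array([1.5, -2.0, 0.3, -0.1])
prox_f = np.maximum(x, 0.0)
prox_fstar = np.minimum(x, 0.0)

# Moreau decomposition: x = prox_f(x) + prox_{f*}(x).
assert np.allclose(prox_f + prox_fstar, x)
print(prox_f, prox_fstar)
```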

**Extended Moreau Decomposition**

The extended version of Moreau’s Decomposition Theorem involves a scaling factor $\lambda > 0$. It states that for all $x$,

$$x = \mathrm{prox}_{\lambda f}(x) + \lambda\, \mathrm{prox}_{\lambda^{-1} f^*}(x / \lambda).$$

*Proof:* Applying Moreau decomposition to the function $\lambda f$ gives

Using the definitions of the proximal operator and the convex conjugate,

References:

- Gu, Q. (2016). “SYS 6003: Optimization. Lecture 25.”

A *relation* $F$ on $\mathbb{R}^n$ is a subset of $\mathbb{R}^n \times \mathbb{R}^n$. We use the notation $F(x)$ to mean the set $\{y : (x, y) \in F\}$. You can think of $F$ as an operator that maps vectors $x$ to sets $F(x)$. (Along this line of thinking, functions are a special kind of relation where every vector is mapped to a set consisting of exactly one element.)

The *inverse relation* is defined as $F^{-1} = \{(y, x) : (x, y) \in F\}$.

For a relation $F$ and some parameter $\lambda > 0$, the *resolvent* of $F$ is defined as the relation $R = (I + \lambda F)^{-1}$.

In other words,

**Connection with the proximal operator**

Let $f$ be some convex function. Recall that the *proximal operator* of $f$ is defined by

$$\mathrm{prox}_{\lambda f}(x) = \operatorname*{argmin}_z \left\{ \lambda f(z) + \frac{1}{2} \|z - x\|_2^2 \right\}.$$

It turns out that *the resolvent of the subdifferential operator $\partial f$ is the proximal operator*, i.e. $(I + \lambda \partial f)^{-1} = \mathrm{prox}_{\lambda f}$. Here is the proof: Let $z = (I + \lambda \partial f)^{-1}(x)$ for some $x$. By definition of the resolvent,

(*Note:* The chain of reasoning above shows that $z = \mathrm{prox}_{\lambda f}(x)$ if and only if $x - z \in \lambda \partial f(z)$, which is useful to know.)
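A quick numeric check with $f(z) = |z|$ applied elementwise (an assumed example): its prox is soft-thresholding, and we verify the defining inclusion of the resolvent, $x \in z + \lambda\, \partial f(z)$, in each coordinate.

```python
import numpy as np

# For f(z) = |z|, prox_{lam*f} is soft-thresholding, which should be the
# resolvent (I + lam*df)^{-1}: check x - z is in lam * (subdifferential of |.| at z).
lam = 0.6
x = np.array([2.0, -0.3, 0.6, -1.1])
z = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # prox_{lam*|.|}(x)

for xi, zi in zip(x, z):
    if zi > 0:
        assert np.isclose(xi - zi, lam)    # subgradient of |.| is +1
    elif zi < 0:
        assert np.isclose(xi - zi, -lam)   # subgradient of |.| is -1
    else:
        assert abs(xi) <= lam + 1e-12      # subgradient is the interval [-1, 1]
print(z)
```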

References:

- Pilanci, M. (2022). “Monotone Operators.”

Let $f: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be some function. The *convex conjugate* of $f$ (also known as the *Fenchel conjugate*) is defined by

$$f^*(y) = \sup_x \left\{ y^T x - f(x) \right\}.$$

*Fenchel’s inequality* is the following statement:

**Theorem (Fenchel’s inequality).** For any $x$ and $y$, $f(x) + f^*(y) \ge y^T x$.

The proof of Fenchel’s inequality follows directly from the definition of the convex conjugate:

A direct application of Fenchel’s inequality shows that the conjugate of the conjugate, denoted $f^{**}$, always satisfies $f^{**}(x) \le f(x)$ for all $x$.

To see this: Fenchel’s inequality says $y^T x - f^*(y) \le f(x)$ for all $x$ and $y$. Taking a supremum over all $y$, the inequality becomes

It turns out that if $f$ is closed and convex, then we actually have $f^{**} = f$. The proof is a bit cumbersome and so I’m not providing it in this post. (For a proof, see Reference 1 or Slides 5.13-5.14 of Reference 2.)
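The biconjugate property can be illustrated numerically with a grid approximation (an assumed example, $f(x) = x^2$, which is closed and convex): compute $f^*$ by a sup over a grid, then $f^{**}$ the same way, and check that $f^{**}$ recovers $f$ up to discretization error.

```python
import numpy as np

# f(x) = x^2 on a grid; f*(y) = sup_x { y*x - f(x) }, f**(x) = sup_y { x*y - f*(y) }.
# The y-grid is wide enough to contain the maximizer y = 2x for all grid x.
xs = np.linspace(-2, 2, 401)
ys = np.linspace(-4, 4, 801)
f = xs ** 2
f_star = (ys[:, None] * xs[None, :] - f[None, :]).max(axis=1)         # f*(y), approx
f_bistar = (xs[:, None] * ys[None, :] - f_star[None, :]).max(axis=1)  # f**(x), approx

# f** should match f up to grid error (here f*(y) = y^2/4 and f** = f exactly).
print(np.max(np.abs(f_bistar - f)))
```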

**Subdifferentials and optimality**

A vector $g$ is a *subgradient* of $f$ at a point $x$ if

$$f(z) \ge f(x) + g^T (z - x)$$

for all $z$.

The *subdifferential* of $f$ at a point $x$, denoted $\partial f(x)$, is the set of all the subgradients of $f$ at the point $x$.

Convex conjugates appear in the following theorem concerning subdifferentials:

**Theorem.** If $f$ is closed and convex, then for any $x$ and $y$,

$$y \in \partial f(x) \iff x \in \partial f^*(y) \iff f(x) + f^*(y) = y^T x.$$

*Proof (adapted from Slide 5.15 of Reference 2 and Reference 3):* First we show that $y \in \partial f(x) \implies x \in \partial f^*(y)$. Using the definitions of subgradients and convex conjugates,

Thus, for any $z$,

which implies that $x \in \partial f^*(y)$.

To get the reverse implication $x \in \partial f^*(y) \implies y \in \partial f(x)$, apply the same logic above with $f^*$ in place of $f$ and use the fact that $f^{**} = f$ for closed convex $f$.

Next, we show that $y \in \partial f(x)$ implies $f(x) + f^*(y) = y^T x$. Using the definition of subgradients,

Taking a supremum over all $z$, the inequality becomes $f^*(y) \le y^T x - f(x)$. Combining this with Fenchel’s inequality gives us the equality $f(x) + f^*(y) = y^T x$.

Conversely, if $f(x) + f^*(y) = y^T x$, we have

References:

- StackExchange. “How to prove the conjugate of the conjugate function is itself?”
- Vandenberghe, L. (2022). “5. Conjugate functions.”
- StackExchange. “Proof about Conjugate and subgradient.”

**Preliminary definitions**

- In $\mathbb{R}^n$, a **polyhedron** is a set which can be described as $P = \{x \in \mathbb{R}^n : Ax \le b\}$, where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.
- $x \in P$ is an **extreme point** of a polyhedron $P$ if there do not exist $y, z \in P$ with $y \ne z$ such that $x = \frac{1}{2}(y + z)$.
- $r \in \mathbb{R}^n$ is a **ray** of a polyhedron $P$ if $r$ is non-zero and $Ar \le 0$. Let $R(P)$ denote the set of rays of $P$.
- $r$ is an **extreme ray** of a polyhedron $P$ if there do not exist linearly independent rays $r_1, r_2 \in R(P)$ such that $r = \frac{1}{2}(r_1 + r_2)$.

It can be shown that for any polyhedron, the number of extreme points and extreme rays is finite (proof in Reference 1).

**Minkowski’s representation theorem**

*Minkowski’s representation theorem* essentially says that a polyhedron can be described by its extreme points and extreme rays.

**Theorem:** If $P = \{x \in \mathbb{R}^n : Ax \le b\}$ is non-empty and $\mathrm{rank}(A) = n$, then

$$P = \left\{ x : x = \sum_{k \in K} \lambda_k x^k + \sum_{j \in J} \mu_j r^j, \; \sum_{k \in K} \lambda_k = 1, \; \lambda_k, \mu_j \ge 0 \right\},$$

where $\{x^k\}_{k \in K}$ and $\{r^j\}_{j \in J}$ are the set of extreme points and extreme rays of $P$ respectively.

A proof of this theorem can be found in Reference 1. As a special case, if the polyhedron is bounded, then it does not have any extreme rays and so any point in the polyhedron can be described as a convex combination of its extreme points.
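For the bounded case, finding such a convex combination is itself an LP feasibility problem, which can be sketched in code. The polyhedron below (a unit square) and the point are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Assumed bounded polyhedron: the unit square, whose extreme points are its corners.
V = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # extreme points (rows)
p = np.array([0.3, 0.7])  # a point inside the square

# Feasibility LP: find lambda >= 0 with V^T lambda = p and sum(lambda) = 1.
A_eq = np.vstack([V.T, np.ones((1, 4))])
b_eq = np.concatenate([p, [1.0]])
res = linprog(np.zeros(4), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
lam = res.x
print(np.round(lam, 3))  # convex-combination weights recovering p
```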

**Minkowski representation & projection**

The theorem tells us that a polyhedron can be expressed as the set of convex combinations of its extreme points and rays. For notational convenience, let $x^1, \dots, x^q$ denote the extreme points and extreme rays of the polyhedron $P$, and let $\delta_k = 1$ if $x^k$ is an extreme point, and $\delta_k = 0$ otherwise. The *Minkowski representation* of the polyhedron consists of variables $\lambda_1, \dots, \lambda_q$, one for each extreme point/ray:

and the *Minkowski projection* is defined by

You can think of the Minkowski representation as expressing each point in the polyhedron in terms of “coordinates” associated with the extreme points/rays.

Note that these coordinates are not unique: each point in the polyhedron may be associated with more than one set of coordinates. For example, consider the bounded triangle in with coordinates , and . The point can be written as

or

References:

- Nemhauser, G., and Wolsey, L. (1988). Integer and Combinatorial Optimization. (Chapter I.4).
- Tebboth, J. R. (2001). A Computational Study of Dantzig-Wolfe Decomposition. (Section 3.3).

Consider the general machine learning set-up. We have a class of models $\{f_\theta\}$ parameterized by $\theta$ (e.g. for linear regression, $\theta$ would be the coefficients of the model). Each of these models takes in some input $x$ and outputs a result $f_\theta(x)$. We want to select the parameter $\theta$ which minimizes some *population loss*:

where $\ell$ is the loss incurred for a single data point, and $\mathcal{D}$ is the population distribution for our data. Unfortunately we don’t know $\mathcal{D}$ and hence can’t evaluate the population loss exactly. Instead, we often have a dataset of samples with which we can define the **empirical loss**

A viable approach is to select by minimizing the empirical loss: the hope here is that the empirical loss is a good approximation to the population loss.

For many modern ML models (especially overparameterized models), the loss functions are non-convex with multiple local and global minima. In this setup, it’s known that a parameter obtained by minimizing the empirical loss does not necessarily translate into small population loss. That is, good performance on the training set does not “generalize” well. In fact, many different parameter values can give similar values of the empirical loss but very different values of the population loss.

**“Flatness” and generalization performance**

One thing that is emerging from the literature is *a connection between the “flatness” of minima and generalization performance*. In particular, models corresponding to a minimum point whose loss function neighborhood is relatively “flat” tend to have better generalization performance. The intuition is as follows: think of the empirical loss as a random function, with the randomness coming from which data points are chosen for the sample. As we draw several of these empirical loss functions, minima associated with flat areas of the function tend to stay in the same area (and hence are “robust”), while minima associated with sharp areas move around a lot.

Here is a stylized example to illustrate the intuition. Imagine that the line in black is the population loss. There are 10 blue lines in the figure, each representing a possible empirical loss function. You can see that there is a lot of variation in the part of the loss associated with the sharper minimum on the left as compared to the flatter minimum on the right.

Imagine that for each of the 10 empirical loss functions, we locate the two minima, but record their values on the true population loss. The points in red correspond to the population loss for the sharper minimum while the points in blue correspond to that for the flatter minimum. We can see that the loss values for the blue points don’t fluctuate as much as those for the red points.

**Sharpness-aware minimization (SAM)**

There are many ways to define “flatness” or “sharpness”. *Sharpness-aware minimization (SAM)*, introduced by Foret et al. (2020) (Reference 1), is one way to formalize the notion of sharpness and use it in model training. Instead of finding parameter values which minimize the empirical loss at a point:

find parameter values whose entire neighborhoods have uniformly small empirical loss:

where $\rho$ is a hyperparameter and $\|\cdot\|_p$ is the $L^p$-norm, with $p$ typically chosen to be 2.

The figure below shows what the SAM loss looks like for different values of $\rho$ in our simple example. As $\rho$ increases, the value of the SAM loss increases a lot for the sharp minimum on the left but not very much for the flat minimum on the right.

In practice we don’t actually minimize the SAM loss alone, but minimize the SAM loss with an L2 regularization term. Also, the inner maximization can’t be computed analytically: the paper has details on how to get around it (via approximations and such). One final note is on what value $\rho$ should take. In the paper, $\rho$ is treated as a hyperparameter which needs to be tuned (e.g. with grid search).
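One common way to approximate the inner maximization (following the first-order idea in the paper, but simplified here) is a single ascent step of radius $\rho$ along the gradient, followed by a descent step using the gradient at the perturbed point. The sketch below uses an assumed toy 1-D loss and hypothetical helper names:

```python
import numpy as np

def sam_step(theta, grad_fn, rho=0.05, lr=0.1):
    """One simplified SAM update: perturb by rho in the gradient direction,
    then descend using the gradient at the perturbed point."""
    g = grad_fn(theta)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # approximate inner argmax
    g_sam = grad_fn(theta + eps)                 # gradient at perturbed parameters
    return theta - lr * g_sam

# Toy empirical loss L(theta) = theta^2 (assumed for illustration).
grad = lambda th: 2 * th
theta = np.array([1.0])
for _ in range(100):
    theta = sam_step(theta, grad)
print(theta)  # ends up near the minimum at 0
```

Note that with a fixed $\rho$ the iterates hover in a small band around the minimum rather than converging exactly, since the perturbed gradient never vanishes there.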

References:

- Foret, P., et al. (2020). Sharpness-Aware Minimization for Efficiently Improving Generalization.