Overview

Part 11 - Linear Programming: Duality and Sensitivity Analysis

TMA947
Date: October 1, 2025
Last modified: October 1, 2025
10 min read
TMA947_11

Introduction

In this part we will cover duality for Linear Programs (LPs) and the corresponding weak and strong duality theorems. Further, we will discuss sensitivity analysis and complementarity for LPs, which will give us an alternative view of the simplex method.

Primal-Dual LP Problems

Definition: Primal LP Problem

Consider the primal LP problem on standard form, $$ \begin{align*} (P) \quad & \newline z^{\star} & = \begin{cases} \min \ & \mathbf{c}^T \mathbf{x} \newline \text{subject to} \ & A \mathbf{x} = \mathbf{b} \newline \ & \mathbf{x} \geq \mathbf{0} \end{cases} \end{align*} $$ where $z^{\star}$ is the optimal objective value.

Definition: Dual LP Problem

The dual LP problem corresponding to the primal LP problem $(P)$ is, $$ \begin{align*} (D) \quad & \newline q^{\star} & = \begin{cases} \max \ & \mathbf{b}^T \mathbf{y} \newline \text{subject to} \ & A^T \mathbf{y} \leq \mathbf{c} \end{cases} \end{align*} $$ where $q^{\star}$ is the optimal objective value.

Explanation: Derivation of the Dual Problem

We know that the Lagrangian for the primal LP problem is, $$ \begin{align*} \mathcal{L}(\mathbf{x}, \mathbf{y}) & = \mathbf{c}^T \mathbf{x} + \mathbf{y}^T (\mathbf{b} - A \mathbf{x}) \newline & = \mathbf{b}^T \mathbf{y} + \mathbf{x}^T (\mathbf{c} - A^T \mathbf{y}) \end{align*} $$ thus, the dual function $q(\mathbf{y})$ is, $$ \begin{align*} q(\mathbf{y}) & = \inf_{\mathbf{x} \geq \mathbf{0}} \ \mathbf{b}^T \mathbf{y} + \mathbf{x}^T (\mathbf{c} - A^T \mathbf{y}) \newline & = \begin{cases} \mathbf{b}^T \mathbf{y}, & \mathbf{c} - A^T \mathbf{y} \geq \mathbf{0} \newline -\infty, & \text{otherwise} \end{cases} \end{align*} $$ Therefore, the dual problem is, $$ \begin{align*} \max \ & q(\mathbf{y}) \newline \text{subject to} \ & A^T \mathbf{y} \leq \mathbf{c} \end{align*} $$ which is exactly the dual LP problem defined above.
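To make the pair concrete, the sketch below builds the dual directly from $(A, \mathbf{b}, \mathbf{c})$ and solves both problems. The tiny instance and the use of `scipy.optimize.linprog` are illustrative assumptions, not part of the notes; here the two optimal values coincide, anticipating strong duality below.

```python
import numpy as np
from scipy.optimize import linprog

# Primal (P): min c^T x  s.t.  A x = b, x >= 0  (small illustrative instance)
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4, method="highs")

# Dual (D): max b^T y  s.t.  A^T y <= c, y free --
# written as  min -b^T y  for linprog, with free variable bounds.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2, method="highs")

z_star = primal.fun   # optimal primal value
q_star = -dual.fun    # optimal dual value (undo the sign flip)
print(z_star, q_star)
```

Note that the dual of an equality-constrained primal has free variables, which is why the default nonnegativity bounds of `linprog` must be overridden for $\mathbf{y}$.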

Remark

We note that the primal has $n$ variables and $m$ constraints, while the dual has $m$ variables and $n$ constraints.

Further, the dual of the dual is the primal, i.e., $(D)$ is the dual of $(P)$ and $(P)$ is the dual of $(D)$.

Proof

The Lagrangian for the dual LP problem (with multipliers $\mathbf{x} \geq \mathbf{0}$ for the inequality constraints) is, $$ \begin{align*} \mathcal{L}(\mathbf{y}, \mathbf{x}) & = \mathbf{b}^T \mathbf{y} + \mathbf{x}^T (\mathbf{c} - A^T \mathbf{y}) \newline & = \mathbf{c}^T \mathbf{x} + \mathbf{y}^T (\mathbf{b} - A \mathbf{x}) \end{align*} $$ thus, the dual function $p(\mathbf{x})$ is, $$ \begin{align*} p(\mathbf{x}) & = \sup_{\mathbf{y} \text{ free}} \ \mathbf{c}^T \mathbf{x} + \mathbf{y}^T (\mathbf{b} - A \mathbf{x}) \newline & = \begin{cases} \mathbf{c}^T \mathbf{x}, & A \mathbf{x} = \mathbf{b} \newline +\infty, & \text{otherwise} \end{cases} \end{align*} $$ Therefore, the dual problem is, $$ \begin{align*} \min \ & p(\mathbf{x}) \newline \text{subject to} \ & A \mathbf{x} = \mathbf{b} \newline \ & \mathbf{x} \geq \mathbf{0} \end{align*} $$ which is exactly the primal LP problem defined above, thus the dual of the dual is the primal. $_\blacksquare$

Lastly, we can derive duals of LPs on non-standard form as well.

Intuition: Dual Problems on Non-Standard Form

From what we have seen, we can see some symmetries between the primal and dual LP problems. Using these symmetries, we can derive duals of LPs on non-standard form.

| Primal (P) | Dual (D) |
| --- | --- |
| $\min$/$\inf$ | $\max$/$\sup$ |
| Variables | Constraints |
| $\geq 0$ | $\leq$ |
| $\leq 0$ | $\geq$ |
| free | $=$ |
| Constraints | Variables |
| $=$ | free |
| $\geq$ | $\geq 0$ |
| $\leq$ | $\leq 0$ |
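As a concrete check of the table, consider a primal on inequality form, $\min \ \mathbf{c}^T \mathbf{x}$ subject to $A \mathbf{x} \geq \mathbf{b}$, $\mathbf{x} \geq \mathbf{0}$: by the rules above, its dual is $\max \ \mathbf{b}^T \mathbf{y}$ subject to $A^T \mathbf{y} \leq \mathbf{c}$, $\mathbf{y} \geq \mathbf{0}$. A minimal numerical sketch, assuming `scipy` is available (the instance is chosen only for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Primal on inequality form: min c^T x  s.t.  A x >= b, x >= 0
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([2.0, 3.0])

# linprog expects <= rows, so negate the >= constraints
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2, method="highs")

# By the table: dual is  max b^T y  s.t.  A^T y <= c, y >= 0
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2, method="highs")

print(primal.fun, -dual.fun)   # the two optimal values coincide
```

Here the inequality constraints in the primal give nonnegative dual variables, and the nonnegative primal variables give inequality constraints in the dual, exactly as the table prescribes.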

Weak and Strong Duality

Theorem: Weak Duality Theorem

If $\mathbf{x}$ is a feasible solution to the primal LP problem $(P)$ and $\mathbf{y}$ is a feasible solution to the dual LP problem $(D)$, then, $$ \mathbf{c}^T \mathbf{x} \geq \mathbf{b}^T \mathbf{y} $$

Proof

Since $\mathbf{x}$ is feasible for $(P)$, we have that $A \mathbf{x} = \mathbf{b}$ and $\mathbf{x} \geq \mathbf{0}$.

Since $\mathbf{y}$ is feasible for $(D)$, we have that $A^T \mathbf{y} \leq \mathbf{c}$. Therefore, using $\mathbf{x} \geq \mathbf{0}$, $$ \begin{align*} \mathbf{c}^T \mathbf{x} & \geq (\underbrace{A^T \mathbf{y}}_{\leq \mathbf{c}})^T \mathbf{x} \newline & = \mathbf{y}^T \underbrace{(A \mathbf{x})}_{= \mathbf{b}} \newline & = \mathbf{b}^T \mathbf{y} \end{align*} $$ $_\blacksquare$
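The inequality holds for *any* feasible pair, not just optimal ones. A minimal check on a small standard-form instance (the data and the chosen feasible points are illustrative assumptions):

```python
import numpy as np

# Standard-form data: min c^T x  s.t.  A x = b, x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

x = np.array([1.0, 1.0, 2.0, 2.0])   # primal feasible: A x = b, x >= 0
y = np.array([-1.0, -1.0])           # dual feasible: A^T y <= c

assert np.allclose(A @ x, b) and (x >= 0).all()
assert (A.T @ y <= c + 1e-12).all()

# Weak duality: the primal objective bounds the dual objective from above
print(c @ x, b @ y)   # here -3.0 >= -10.0
```

Neither point is optimal, yet $\mathbf{c}^T \mathbf{x} \geq \mathbf{b}^T \mathbf{y}$ already holds, as the theorem guarantees.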

From this theorem, we have some corollaries.

Corollary
  1. If $z^{\star} = -\infty$ (the primal is unbounded), then the dual is infeasible.
  2. If $q^{\star} = +\infty$ (the dual is unbounded), then the primal is infeasible.
  3. If $\mathbf{x}$ is feasible to $(P)$ and $\mathbf{y}$ is feasible to $(D)$ such that $\mathbf{c}^T \mathbf{x} = \mathbf{b}^T \mathbf{y}$, then $\mathbf{x}$ and $\mathbf{y}$ are optimal to $(P)$ and $(D)$, respectively.
Proof
  1. Suppose that the primal is unbounded, i.e., $z^{\star} = -\infty$. If the dual were feasible, there would exist a feasible $\mathbf{y}$ such that $\mathbf{b}^T \mathbf{y}$ is finite. However, by the weak duality theorem, $\mathbf{c}^T \mathbf{x} \geq \mathbf{b}^T \mathbf{y}$ for all feasible $\mathbf{x}$, contradicting the assumption that the primal is unbounded; thus the dual must be infeasible.
  2. The proof is similar to the first part, with the roles of the primal and dual swapped.
  3. By weak duality, $\mathbf{c}^T \mathbf{x}' \geq \mathbf{b}^T \mathbf{y} = \mathbf{c}^T \mathbf{x}$ for every feasible $\mathbf{x}'$, so $\mathbf{x}$ is optimal; the argument for $\mathbf{y}$ is analogous.
Theorem: Strong Duality Theorem

If $(P)$ and $(D)$ are both feasible, then there exists optimal $\mathbf{x}^{\star}$ to $(P)$ and optimal $\mathbf{y}^{\star}$ to $(D)$ such that, $$ \mathbf{c}^T \mathbf{x}^{\star} = \mathbf{b}^T \mathbf{y}^{\star} $$

Proof

Since $(D)$ is feasible, weak duality gives $z^{\star} > -\infty$; since $(P)$ is also feasible, there exists an optimal BFS.

Let $\mathbf{x}^{\star}$ be an optimal BFS to $(P)$; for this BFS, we know that $\tilde{\mathbf{c}}_N^T \geq \mathbf{0}$. Let $\mathbf{y}^{\star} = (B^{-1})^T \mathbf{c}_B$, i.e., $\mathbf{y}^{\star T} = \mathbf{c}_B^T B^{-1}$. $$ \begin{align*} \tilde{\mathbf{c}}_N^T & = \mathbf{c}_N^T - \mathbf{c}_B^T B^{-1} N \newline & = \mathbf{c}_N^T - \mathbf{y}^{\star T} N \geq \mathbf{0} \newline \Rightarrow \ & N^T \mathbf{y}^{\star} \leq \mathbf{c}_N \end{align*} $$ Furthermore, $$ \begin{align*} \mathbf{c}_B^T - \mathbf{y}^{\star T} B & = \mathbf{c}_B^T - \mathbf{c}_B^T \underbrace{B^{-1} B}_{\mathbf{I}} = \mathbf{0}^T \newline \Rightarrow \ B^T \mathbf{y}^{\star} & = \mathbf{c}_B \leq \mathbf{c}_B \end{align*} $$ This means that, $$ A^T \mathbf{y}^{\star} = \begin{bmatrix} B^T \newline N^T \end{bmatrix} \mathbf{y}^{\star} = \begin{bmatrix} B^T \mathbf{y}^{\star} \newline N^T \mathbf{y}^{\star} \end{bmatrix} \leq \begin{bmatrix} \mathbf{c}_B \newline \mathbf{c}_N \end{bmatrix} = \mathbf{c} $$ and thus $\mathbf{y}^{\star}$ is feasible for $(D)$.

Further, $$ \begin{align*} \mathbf{b}^T \mathbf{y}^{\star} & = \mathbf{y}^{\star T} \mathbf{b} \newline & = \mathbf{c}_B^T \underbrace{B^{-1} \mathbf{b}}_{\mathbf{x}_B^{\star}} \newline & = \mathbf{c}_B^T \mathbf{x}_B^{\star} + \mathbf{c}_N^T \underbrace{\mathbf{x}_N^{\star}}_{= \mathbf{0}} \newline & = \mathbf{c}^T \mathbf{x}^{\star} \end{align*} $$ Thus, by the corollary of weak duality, $\mathbf{x}^{\star}$ and $\mathbf{y}^{\star}$ are optimal to $(P)$ and $(D)$, respectively. $_\blacksquare$
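The construction in the proof can be replayed numerically: from an optimal basis, compute $\mathbf{y}^{\star} = (B^{-1})^T \mathbf{c}_B$ and check dual feasibility and equality of objectives. The small instance below, and that its optimal basis consists of the first two variables, are assumptions for illustration:

```python
import numpy as np

# Standard-form LP: min c^T x  s.t.  A x = b, x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

basic = [0, 1]                       # assumed optimal basis (verified below)
B = A[:, basic]
c_B = c[basic]

x_B = np.linalg.solve(B, b)          # basic part of x*: B^{-1} b
y = np.linalg.solve(B.T, c_B)        # y* = (B^{-1})^T c_B

assert (x_B >= 0).all()              # primal feasibility of the BFS
assert (A.T @ y <= c + 1e-12).all()  # dual feasibility (reduced costs >= 0)

z = c_B @ x_B                        # = c^T x*, since x_N* = 0
q = b @ y
print(z, q)                          # the two objective values are equal
```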

Remark

If $\mathbf{x}^{\star}$ is an optimal BFS to $(P)$ such that $\tilde{\mathbf{c}}_N^T \geq \mathbf{0}$, then $\mathbf{y}^{\star} = (B^{-1})^T \mathbf{c}_B$ is optimal to $(D)$.

The possible combinations of primal and dual outcomes are summarized below:

| Primal \ Dual | Finite Optimal Solution | Unbounded | Infeasible |
| --- | --- | --- | --- |
| Finite Optimal Solution | $\color{green}{\text{Possible}}$ | $\color{red}{\text{Impossible}}$ | $\color{red}{\text{Impossible}}$ |
| Unbounded | $\color{red}{\text{Impossible}}$ | $\color{red}{\text{Impossible}}$ | $\color{green}{\text{Possible}}$ |
| Infeasible | $\color{red}{\text{Impossible}}$ | $\color{green}{\text{Possible}}$ | $\color{green}{\text{Possible}}$ |

Sensitivity Analysis

When solving LP problems, it is often the case that the parameters (i.e., $A$, $\mathbf{b}$, and $\mathbf{c}$) are not known exactly. Thus, it is important to understand how changes in these parameters affect the optimal solution and objective value. This is the goal of sensitivity analysis.

Definition: Sensitivity Analysis in $\mathbf{c}$

Let $\mathbf{x}^{\star}$ (nondegenerate and unique) be an optimal BFS to the primal LP problem $(P)$. Further, consider some arbitrary perturbation, $$ \begin{align*} \min \ & (\mathbf{c} + \Delta \mathbf{c})^T \mathbf{x} \newline \text{subject to} \ & A \mathbf{x} = \mathbf{b} \newline \ & \mathbf{x} \geq \mathbf{0} \end{align*} $$ $\mathbf{x}^{\star}$ remains optimal if and only if, $$ \tilde{\mathbf{c}}_N^T = (\mathbf{c}_N + \Delta \mathbf{c}_N)^T - (\mathbf{c}_B + \Delta \mathbf{c}_B)^T B^{-1} N \geq \mathbf{0} $$
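The condition can be checked directly: for a fixed basis, perturbing $\mathbf{c}$ changes the reduced costs but not feasibility, so the BFS stays optimal as long as $\tilde{\mathbf{c}}_N \geq \mathbf{0}$. A sketch on an illustrative instance (the data and perturbation are assumptions):

```python
import numpy as np

# Standard-form LP: min c^T x  s.t.  A x = b, x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
basic, nonbasic = [0, 1], [2, 3]

def reduced_costs(c):
    # c~_N^T = c_N^T - c_B^T B^{-1} N for the fixed basis above
    B, N = A[:, basic], A[:, nonbasic]
    return c[nonbasic] - c[basic] @ np.linalg.solve(B, N)

dc = np.array([0.1, 0.0, 0.0, 0.0])   # small perturbation of c_1
print(reduced_costs(c))               # >= 0: the basis is optimal for c
print(reduced_costs(c + dc))          # still >= 0: x* remains optimal
```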

Note

We would ideally like to define the optimal value function, $$ w(\mathbf{c}) = \begin{cases} \min \ & \mathbf{c}^T \mathbf{x} \newline \text{subject to} \ & A \mathbf{x} = \mathbf{b} \newline \ & \mathbf{x} \geq \mathbf{0} \end{cases} $$ If $\Delta \mathbf{c}$ is small enough, then we are interested in $\nabla_{\mathbf{c}} w(\mathbf{c})$, $$ \begin{align*} w(\mathbf{c}) & = \mathbf{c}^T \mathbf{x}^{\star} \newline \nabla_{\mathbf{c}} w(\mathbf{c}) & = \mathbf{x}^{\star} \end{align*} $$ However, this is only valid if $\mathbf{x}^{\star}$ remains optimal for all small perturbations of $\mathbf{c}$.

Definition: Sensitivity Analysis in $\mathbf{b}$

Let $\mathbf{x}^{\star}$ (nondegenerate and unique) be an optimal BFS to the primal LP problem $(P)$. Further, consider some arbitrary perturbation, $$ \begin{align*} \min \ & \mathbf{c}^T \mathbf{x} \newline \text{subject to} \ & A \mathbf{x} = \mathbf{b} + \Delta \mathbf{b} \newline \ & \mathbf{x} \geq \mathbf{0} \end{align*} $$ Since the reduced costs do not depend on $\mathbf{b}$, the basis of $\mathbf{x}^{\star}$ remains optimal if and only if the perturbed BFS stays feasible, $$ \begin{bmatrix} \mathbf{x}_B \newline \mathbf{x}_N \end{bmatrix} = \begin{bmatrix} B^{-1} (\mathbf{b} + \Delta \mathbf{b}) \newline \mathbf{0} \end{bmatrix} \geq \mathbf{0} $$
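The feasibility condition is a direct computation. In the illustrative instance below (an assumption, not from the notes), a small perturbation keeps $B^{-1}(\mathbf{b} + \Delta \mathbf{b}) \geq \mathbf{0}$, while a large one drives a basic variable negative, so the basis is no longer optimal:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
basic = [0, 1]
B = A[:, basic]

for db in (np.array([0.5, 0.0]), np.array([-3.0, 0.0])):
    x_B = np.linalg.solve(B, b + db)   # new basic solution B^{-1}(b + Δb)
    print(db, x_B, (x_B >= 0).all())   # basis stays optimal iff x_B >= 0
```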

Note

We would similarly like to define the optimal value function, $$ v(\mathbf{b}) = \begin{cases} \min \ & \mathbf{c}^T \mathbf{x} \newline \text{subject to} \ & A \mathbf{x} = \mathbf{b} \newline \ & \mathbf{x} \geq \mathbf{0} \end{cases} $$ If $\Delta \mathbf{b}$ is small enough, then we are interested in $\nabla_{\mathbf{b}} v(\mathbf{b})$, which the following theorem characterizes.

Theorem: Shadow Price Theorem

If for a given $\mathbf{b}$ the optimal BFS of $(P)$ is nondegenerate and unique, then $\nabla_{\mathbf{b}} v(\mathbf{b}) = \mathbf{y}^{\star}$, where $\mathbf{y}^{\star}$ is the optimal solution to the dual problem $(D)$.
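The theorem can be sanity-checked by finite differences: perturb each component of $\mathbf{b}$, re-solve, and compare the difference quotients of $v$ against $\mathbf{y}^{\star}$. A sketch assuming `scipy` is available, on an illustrative instance whose optimal basis is known:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

def v(rhs):
    # optimal value function v(b) of the standard-form LP
    res = linprog(c, A_eq=A, b_eq=rhs, bounds=[(0, None)] * 4, method="highs")
    return res.fun

# Dual optimum y*, here computed from the known optimal basis {1, 2}
B = A[:, [0, 1]]
y_star = np.linalg.solve(B.T, c[[0, 1]])

eps = 1e-4
grad = np.array([(v(b + eps * e) - v(b)) / eps for e in np.eye(2)])
print(grad, y_star)   # the finite differences match y*
```

Since $v$ is piecewise linear in $\mathbf{b}$ and the optimal basis does not change for small perturbations here, the difference quotients recover $\mathbf{y}^{\star}$ exactly up to floating-point error.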

Complementarity

Theorem: Complementary Slackness Theorem

Let $\mathbf{x}$ be feasible to the primal LP problem $(P)$ and $\mathbf{y}$ be feasible to the dual LP problem $(D)$. $$ \begin{cases} \mathbf{x} \text{ is optimal to } (P) \newline \mathbf{y} \text{ is optimal to } (D) \end{cases} \newline \iff \newline x_j \left(c_j - (A_j)^T \mathbf{y}\right) = 0, \ \forall j = 1, \ldots, n \newline $$ where $A_j$ is the $j$-th column of $A$.

Corollary: Characterization of Optimal Solutions

$\mathbf{x}$ is optimal to $(P)$ and $\mathbf{y}$ is optimal to $(D)$ if and only if, $$ \begin{cases} A \mathbf{x} = \mathbf{b}, \ \mathbf{x} \geq \mathbf{0} \newline A^T \mathbf{y} \leq \mathbf{c} \newline x_j \left(c_j - (A_j)^T \mathbf{y}\right) = 0, \ \forall j = 1, \ldots, n \end{cases} $$
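These three conditions are cheap to verify for a candidate pair. On the illustrative instance below (an assumption), the dual slack $c_j - (A_j)^T \mathbf{y}$ is zero exactly on the basic variables, and $x_j$ is zero on the rest, so the product vanishes componentwise:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

x = np.array([3.0, 1.0, 0.0, 0.0])     # optimal to (P) for this instance
y = np.array([-0.5, -0.5])             # optimal to (D) for this instance

assert np.allclose(A @ x, b) and (x >= 0).all()   # primal feasibility
assert (A.T @ y <= c + 1e-12).all()               # dual feasibility

slack = c - A.T @ y                    # dual slacks c_j - A_j^T y
print(x * slack)                       # componentwise zero: complementarity
```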

Algorithm: Alternative View of the Simplex Method
  1. Partition $A = \begin{bmatrix} B & N \end{bmatrix}$ with $\mathbf{x}_B = B^{-1} \mathbf{b} \geq \mathbf{0}$ and $\mathbf{x}_N = \mathbf{0}$ (a BFS), so $\mathbf{x}$ is primal feasible.
  2. To satisfy $\begin{bmatrix} B^T \newline N^T \end{bmatrix} \mathbf{y} \leq \begin{bmatrix} \mathbf{c}_B \newline \mathbf{c}_N \end{bmatrix}$ with equality in the basic part, let $\mathbf{y} = (B^{-1})^T \mathbf{c}_B$; this choice implies complementarity.
  3. What remains to check for optimality is dual feasibility, i.e., $$ \begin{align*} N^T \mathbf{y} & \leq \mathbf{c}_N \newline N^T (B^{-1})^T \mathbf{c}_B & \leq \mathbf{c}_N \newline \Leftrightarrow \ \tilde{\mathbf{c}}_N^T & = \mathbf{c}_N^T - \mathbf{c}_B^T B^{-1} N \geq \mathbf{0} \end{align*} $$
  4. If $\tilde{\mathbf{c}}_N^T \geq \mathbf{0}$, then $\mathbf{x}$ and $\mathbf{y}$ are optimal for $(P)$ and $(D)$, respectively.
  5. If $\tilde{\mathbf{c}}_N^T \not\geq \mathbf{0}$, then perform a pivot step to get a new BFS and repeat from step 2.
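The steps above can be sketched as a minimal revised simplex loop. This is an illustrative implementation, not the course's reference code: it assumes a feasible starting basis, uses the most-negative-reduced-cost pivot rule, and includes no anti-cycling safeguard, so it is only suitable for small nondegenerate instances.

```python
import numpy as np

def simplex(A, b, c, basic):
    """Minimal primal simplex following steps 1-5: maintain primal
    feasibility and complementarity, iterate until dual feasibility."""
    n = A.shape[1]
    basic = list(basic)
    while True:
        B = A[:, basic]
        x_B = np.linalg.solve(B, b)            # step 1: current BFS
        y = np.linalg.solve(B.T, c[basic])     # step 2: y = (B^{-1})^T c_B
        red = c - A.T @ y                      # step 3: reduced costs
        red[basic] = 0.0
        if (red >= -1e-9).all():               # step 4: dual feasible => optimal
            x = np.zeros(n)
            x[basic] = x_B
            return x, y
        j = int(np.argmin(red))                # step 5: entering variable
        d = np.linalg.solve(B, A[:, j])
        if (d <= 1e-9).all():
            raise ValueError("problem is unbounded")
        ratios = np.where(d > 1e-9, x_B / np.where(d > 1e-9, d, 1.0), np.inf)
        basic[int(np.argmin(ratios))] = j      # leaving variable via ratio test

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

x, y = simplex(A, b, c, basic=[2, 3])   # start from the slack basis
print(x, y, c @ x)
```

Note that the loop returns both an optimal primal BFS and the corresponding dual optimum $\mathbf{y}$, exactly as the primal-dual view of the method suggests.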