For any given problem, let $S$ be the set of possible solutions, and let $f$ be a function that maps each solution to a measure of how “good” it is; that is, $f : S \to \mathbb{R}$.

Usually, we also have a set $K \subseteq S$ of solutions that satisfy the problem’s constraints. The optimal solution $OPT$ belongs to $K$, and corresponds to the maximum or minimum value that $f$ attains on this subset.

$OPT = \min/\max \, \{\, f(s) \mid s \in K \subseteq S \,\}$

There are many approaches to optimization problems, such as dynamic programming (DP) and greedy algorithms.
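As a minimal sketch of these definitions, the following Python snippet searches a tiny knapsack-style instance exhaustively. The items, weights, and capacity are made-up illustrative values: $S$ is every subset of items, $K$ the subsets respecting the weight constraint, and $OPT$ the feasible subset maximizing $f$.

```python
from itertools import combinations

# Hypothetical instance: pick a subset of items maximizing total value
# subject to a weight limit (a tiny knapsack problem).
items = [("a", 3, 4), ("b", 2, 3), ("c", 4, 5)]  # (name, weight, value)
capacity = 5

def f(subset):
    """Objective f: total value of the chosen items."""
    return sum(value for _, _, value in subset)

def feasible(subset):
    """Membership in K: total weight must not exceed the capacity."""
    return sum(weight for _, weight, _ in subset) <= capacity

# S: all subsets of items; K: the feasible ones; OPT maximizes f over K.
S = [c for r in range(len(items) + 1) for c in combinations(items, r)]
K = [s for s in S if feasible(s)]
opt = max(K, key=f)
print([name for name, _, _ in opt], f(opt))  # → ['a', 'b'] 7
```

Brute force like this is only viable for tiny $S$; DP and greedy algorithms exist precisely to avoid enumerating all of $S$.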

## Exchange argument

Let $G$ be the solution produced by our greedy algorithm $ALG$. We want to prove that $G$ is optimal. The idea is to take an arbitrary optimal solution $OPT$ and exchange it for another optimal solution $OPT'$ that is one *unit* (one decision) closer to our greedy solution.

A solution space can contain multiple optimal solutions, and by repeatedly exchanging one optimal solution for an equally good one, we eventually arrive at our greedy solution.

More formally, for a greedy algorithm we prove the following lemma: if there exists an optimal solution $OPT$ that agrees with $G$ on the first $k \geq 0$ decisions, then there exists another optimal solution $OPT'$ that agrees with $G$ on the first $k+1$ decisions.

We *prove by induction* on $k$ that this exchange is always possible. Eventually $k$ reaches $n$, the total number of decisions our algorithm makes, at which point the resulting optimal solution agrees with $G$ everywhere, proving that $G=OPT$.
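As a concrete illustration (interval scheduling, a classic example not named in the text), the earliest-finish-time greedy is a standard case where the exchange argument applies: any optimal solution can be exchanged, one interval at a time, toward the greedy one without losing optimality. The sketch below checks empirically on a made-up instance that the greedy solution $G$ matches a brute-force $OPT$ in size.

```python
from itertools import combinations

def greedy_schedule(intervals):
    """Greedy G: repeatedly pick the compatible interval that finishes earliest."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with everything picked so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

def brute_force_opt(intervals):
    """Exhaustively find an optimal solution OPT, for comparison only."""
    def compatible(sub):
        sub = sorted(sub)
        return all(a[1] <= b[0] for a, b in zip(sub, sub[1:]))
    best = []
    for r in range(len(intervals) + 1):
        for sub in combinations(intervals, r):
            if compatible(sub) and len(sub) > len(best):
                best = list(sub)
    return best

intervals = [(1, 3), (2, 5), (4, 7), (6, 8), (5, 9)]  # illustrative data
G = greedy_schedule(intervals)
OPT = brute_force_opt(intervals)
assert len(G) == len(OPT)  # f(G) = f(OPT): greedy matches the optimum
print(G)  # → [(1, 3), (4, 7)]
```

The check `len(G) == len(OPT)` is only evidence on one instance; the exchange argument is what turns this into a proof for all instances.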

If the optimal solution is unique, then the exchange analysis shows directly that $G$ agrees with it on all $n$ decisions.