Really, the only difference between divide and conquer and dynamic programming is that in DP we save our answers to the subproblems. This is because DP solves the same subproblems over and over, so we want to cache the results.
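A minimal sketch of that difference, using Fibonacci as an illustrative problem (my choice of example, not one from these notes): the recursion is identical in both versions; only the caching changes.

```python
from functools import lru_cache

def fib_naive(n):
    # Plain divide and conquer: the same subproblems are
    # recomputed exponentially many times.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    # Same recursion, but answers to subproblems are cached,
    # so each value of n is computed only once.
    if n < 2:
        return n
    return fib_dp(n - 1) + fib_dp(n - 2)
```

`fib_dp(100)` returns instantly, while `fib_naive(100)` would take astronomically long.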
High-level intuition about DP problems
We usually start by posing a high-level question that our algorithm will answer. Formulating this question makes up the bulk of our solution.
- Decompose the problem into smaller subproblems of similar nature.
- There needs to exist a finite set of possible answers to our question. In addition, for any answer, the remaining task needs to look like the original problem.
Set up the proper notation. We want a way to access previously computed answers/subproblems.
- The DP function: stores answers to previously computed subproblems. It takes in a set of variables (the inputs to our problem) and returns the answer.
- We must consider the recurrence as well as the base case(s).
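As a sketch of this setup (the staircase-counting problem here is my own illustrative choice): the memo dictionary plays the role of the DP function, and the code states the recurrence and base case explicitly.

```python
def count_ways(n, memo=None):
    """Number of ways to climb n stairs taking steps of 1 or 2.

    memo stores answers to previously computed subproblems;
    n is the single input variable to our DP function.
    """
    if memo is None:
        memo = {}
    if n in memo:          # answer already computed: reuse it
        return memo[n]
    if n <= 1:             # base cases: one way to stand still or take one step
        return 1
    # Recurrence: the last move is either a 1-step or a 2-step.
    memo[n] = count_ways(n - 1, memo) + count_ways(n - 2, memo)
    return memo[n]
```

Note that both pieces from the notes appear: the recurrence (the recursive sum) and the base cases (n <= 1).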
Proving DP algorithms
Generally, expressing the DP function as a generalized, recursive version of itself, including all the base cases, takes us 90% of the way there.
General runtime for DP algorithms
Dynamic programming has you cache the results of previous subproblems so that you don't repeat them in future recursions/iterations.
This means that runtime depends on the number of subproblems that we recurse into, times the amount of time we spend in each subproblem.
Example
Given an n × m matrix, if we spend constant time for each cell, then the total runtime is O(nm). Of course, this means our DP function will also take up O(nm) space, assuming constant space for each subproblem.
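A concrete grid DP matching this count (minimum path sum is my own illustrative choice): one subproblem per cell, constant work per cell, so both time and space are O(nm).

```python
def min_path_sum(grid):
    """Minimum sum of a top-left to bottom-right path in an n x m grid,
    moving only right or down.

    One subproblem per cell and O(1) work per cell => O(nm) time;
    the dp table itself is the O(nm) space the notes mention.
    """
    n, m = len(grid), len(grid[0])
    dp = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                dp[i][j] = grid[i][j]          # base case: start cell
            elif i == 0:
                dp[i][j] = dp[i][j - 1] + grid[i][j]   # top row: only from the left
            elif j == 0:
                dp[i][j] = dp[i - 1][j] + grid[i][j]   # left column: only from above
            else:
                # Recurrence: best of coming from above or from the left.
                dp[i][j] = min(dp[i - 1][j], dp[i][j - 1]) + grid[i][j]
    return dp[-1][-1]
```

(The space can often be reduced to O(m) by keeping only one row of the table, but the straightforward version above matches the O(nm) analysis.)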