DP Fundamentals
Dynamic programming is one of the most important topics for coding exams and technical interviews. This lesson teaches you the core concepts from scratch: what DP actually is, the two properties every DP problem must have, and the two approaches (memoization and tabulation) you will use to solve every problem in this course.
What Is Dynamic Programming?
Dynamic programming (DP) is a method for solving problems by breaking them into smaller overlapping subproblems, solving each subproblem once, and storing the results to avoid redundant computation. It is not a specific algorithm — it is a problem-solving strategy.
The name "dynamic programming" was coined by Richard Bellman in the 1950s. He chose the word "dynamic" because it sounded impressive to funding agencies. The name tells you nothing about what it actually does. What it actually does is trade memory for time: you store previously computed results so you never recompute them.
The Two Properties of DP Problems
A problem can be solved with DP if and only if it has both of these properties:
1. Overlapping Subproblems
The same subproblems are solved multiple times in the recursive solution. This is what makes DP profitable — if every subproblem were unique, there would be nothing to cache.
Consider computing Fibonacci(5). The naive recursive solution computes Fibonacci(3) twice, Fibonacci(2) three times, and Fibonacci(1) five times. This redundancy grows exponentially.
# Brute force Fibonacci: O(2^n) time, O(n) space (call stack)
def fib_brute(n):
    if n <= 1:
        return n
    return fib_brute(n - 1) + fib_brute(n - 2)

# fib_brute(5) makes 15 function calls
# fib_brute(30) makes 2,692,537 function calls
# fib_brute(50) would take on the order of an hour
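To make the overlap concrete, here is a small instrumented variant (the counter-based helper is our own addition, not part of the lesson's code) that tallies how often each subproblem is solved:

```python
# Count how often each subproblem is solved in the brute-force recursion,
# confirming the redundancy described above.
from collections import Counter

call_counts = Counter()

def fib_counted(n):
    call_counts[n] += 1  # record every time subproblem n is solved
    if n <= 1:
        return n
    return fib_counted(n - 1) + fib_counted(n - 2)

fib_counted(5)
print(call_counts[3])             # Fibonacci(3) solved 2 times
print(call_counts[2])             # Fibonacci(2) solved 3 times
print(call_counts[1])             # Fibonacci(1) solved 5 times
print(sum(call_counts.values()))  # 15 function calls in total
```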
2. Optimal Substructure
The optimal solution to the problem can be constructed from optimal solutions to its subproblems. In other words, the recurrence relation holds: the answer to the bigger problem is a function of the answers to smaller problems.
Fibonacci has optimal substructure because F(n) = F(n-1) + F(n-2). The shortest path problem has optimal substructure because the shortest path from A to C through B is the shortest path from A to B plus the shortest path from B to C.
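As a minimal sketch of optimal substructure beyond Fibonacci, consider shortest paths on a small DAG (the graph and edge weights below are made-up illustration data): each node's best distance is built from the already-optimal distances of its predecessors.

```python
# Shortest-path distances on a DAG obey
#   dist[v] = min over incoming edges (u, v) of dist[u] + w(u, v)
# -- optimal substructure in action.
import math

# edges[u] = list of (v, weight); nodes listed in topological order
edges = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
topo_order = ["A", "B", "C", "D"]

dist = {v: math.inf for v in topo_order}
dist["A"] = 0
for u in topo_order:
    for v, w in edges[u]:
        # the best path to v reuses the best path to u
        dist[v] = min(dist[v], dist[u] + w)

print(dist["D"])  # shortest A -> D is A-B-C-D: 1 + 2 + 3 = 6
```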
Approach 1: Memoization (Top-Down)
Memoization is the top-down approach: you write the recursive solution first, then add a cache (usually a dictionary or array) to store results of subproblems you have already solved. When a subproblem is encountered again, you return the cached result instead of recomputing it.
# Memoized Fibonacci: O(n) time, O(n) space
def fib_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n in memo:
        return memo[n]
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
# Using Python's built-in decorator (cleaner)
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo_v2(n):
    if n <= 1:
        return n
    return fib_memo_v2(n - 1) + fib_memo_v2(n - 2)

# fib_memo(50) returns instantly: 12586269025
# Only 51 unique subproblems, each computed once
Why it works: Without memoization, fib(50) makes roughly 40 billion recursive calls. With memoization, only 51 unique subproblems are ever computed (one for each value from 0 to 50). Every repeated call hits the cache and returns in O(1). This reduces time complexity from O(2^n) to O(n).
Approach 2: Tabulation (Bottom-Up)
Tabulation is the bottom-up approach: instead of starting from the big problem and recursing down, you start from the smallest subproblems and build up to the answer. You fill a table (usually an array) iteratively.
# Tabulated Fibonacci: O(n) time, O(n) space
def fib_tab(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[0] = 0
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
# Space-optimized: O(n) time, O(1) space
def fib_optimized(n):
    if n <= 1:
        return n
    prev2, prev1 = 0, 1
    for i in range(2, n + 1):
        curr = prev1 + prev2
        prev2 = prev1
        prev1 = curr
    return prev1

# fib_optimized(50) = 12586269025
# Only 2 variables instead of an array of size n
Memoization vs Tabulation: When to Use Which
| Aspect | Memoization (Top-Down) | Tabulation (Bottom-Up) |
|---|---|---|
| Direction | Starts from the original problem, recurses down | Starts from base cases, builds up |
| Implementation | Recursive + cache | Iterative + table |
| Subproblems solved | Only the ones needed (lazy) | All subproblems (eager) |
| Stack overflow risk | Yes, for deep recursion (Python's default limit is ~1000 frames) | No, purely iterative |
| Space optimization | Harder to optimize | Often reducible (e.g. from O(n) to O(1), or O(n^2) to O(n)) |
| Ease of writing | Easier — just add cache to recursion | Requires understanding the fill order |
| Interview recommendation | Start here to get a working solution fast | Convert to this for the optimized version |
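The stack-overflow row is easy to demonstrate. The sketch below restates the lesson's fib_memo_v2 and fib_optimized (in slightly condensed form, so the snippet is self-contained) and runs both on a deep input:

```python
# Deep memoized recursion can exceed Python's recursion limit,
# while the iterative version handles the same input.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo_v2(n):
    if n <= 1:
        return n
    return fib_memo_v2(n - 1) + fib_memo_v2(n - 2)

def fib_optimized(n):
    if n <= 1:
        return n
    prev2, prev1 = 0, 1
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev2 + prev1
    return prev1

try:
    fib_memo_v2(5000)  # needs ~5000 stacked frames
    print("memoization succeeded")
except RecursionError:
    # under CPython's default limit (~1000 frames) this branch runs
    print("memoization hit RecursionError")

print(len(str(fib_optimized(5000))))  # tabulation computes F(5000) fine
```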
When to Use DP: A Checklist
Before trying DP, ask yourself these questions:
- Does the problem ask for an optimal value? (minimum, maximum, longest, shortest, number of ways) — strong signal for DP.
- Can I define the problem recursively? Can I express the answer in terms of smaller versions of the same problem?
- Are subproblems overlapping? Does my recursive tree recompute the same states multiple times?
- Is there optimal substructure? Can I build the optimal solution from optimal solutions to subproblems?
- Is greedy insufficient? If a greedy approach works, DP is overkill. But if greedy fails (you need to consider multiple choices), DP is likely needed.
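To illustrate the last point, here is a small sketch (both helpers are our own illustration, with denominations chosen deliberately to break greedy): minimum coins to make an amount from denominations {1, 3, 4}.

```python
# Greedy vs DP on minimum-coin change with coins {1, 3, 4}.
def greedy_coins(amount, coins=(4, 3, 1)):
    # always take the largest coin that still fits
    count = 0
    for c in coins:
        count += amount // c
        amount %= c
    return count

def dp_coins(amount, coins=(1, 3, 4)):
    # dp[a] = fewest coins needed to make amount a
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                dp[a] = min(dp[a], dp[a - c] + 1)
    return dp[amount]

print(greedy_coins(6))  # 3 coins (4 + 1 + 1): greedy is wrong here
print(dp_coins(6))      # 2 coins (3 + 3): DP finds the optimum
```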
The DP Problem-Solving Framework
For every problem in this course, we follow the same four steps:
Step 1: Define the State
What information do you need to uniquely describe a subproblem? This becomes your DP function signature or table dimensions. For Fibonacci, the state is just the number n. For knapsack, the state is (item index, remaining capacity).
Step 2: Write the Recurrence
Express the answer for a state in terms of answers to smaller states. This is the core mathematical relationship. For Fibonacci: dp[i] = dp[i-1] + dp[i-2].
Step 3: Identify Base Cases
What are the smallest subproblems you can solve directly without recursion? For Fibonacci: dp[0] = 0, dp[1] = 1. Getting base cases wrong is the most common DP bug.
Step 4: Determine Fill Order
For tabulation, in what order should you fill the table so that when you compute dp[i], all values it depends on are already computed? For Fibonacci: left to right (i = 2, 3, ..., n).
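The four steps can be sketched on a problem the text mentions, 0/1 knapsack. The weights, values, and capacity below are made-up illustration data, and the memoized helper is our own sketch, not code from the lesson:

```python
# Step 1: state = (item index i, remaining capacity c)
# Step 2: recurrence: best(i, c) = max(skip item i, take item i)
# Step 3: base case: best(n, c) = 0 (no items left to consider)
# Step 4: fill order: a table would be filled from i = n down to 0;
#         with memoization, the recursion handles the order for us.
from functools import lru_cache

weights = [2, 3, 4]
values = [3, 4, 5]
capacity = 5

@lru_cache(maxsize=None)
def best(i, c):
    if i == len(weights):      # base case: no items remain
        return 0
    result = best(i + 1, c)    # option 1: skip item i
    if weights[i] <= c:        # option 2: take item i if it fits
        result = max(result, values[i] + best(i + 1, c - weights[i]))
    return result

print(best(0, capacity))  # items 0 and 1 fit exactly: value 3 + 4 = 7
```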
Complete Example: Climbing Stairs Preview
To preview how the framework works, consider this classic problem: you are climbing a staircase with n steps. Each time you can climb 1 or 2 steps. How many distinct ways can you reach the top?
# Step 1: State = number of remaining steps
# Step 2: Recurrence: ways(n) = ways(n-1) + ways(n-2)
#         (you can get to step n from step n-1 or step n-2)
# Step 3: Base cases: ways(0) = 1, ways(1) = 1
# Step 4: Fill order: left to right

# Brute force: O(2^n)
def climb_brute(n):
    if n <= 1:
        return 1
    return climb_brute(n - 1) + climb_brute(n - 2)

# Memoization: O(n) time, O(n) space
from functools import lru_cache

@lru_cache(maxsize=None)
def climb_memo(n):
    if n <= 1:
        return 1
    return climb_memo(n - 1) + climb_memo(n - 2)

# Tabulation: O(n) time, O(n) space
def climb_tab(n):
    if n <= 1:
        return 1
    dp = [0] * (n + 1)
    dp[0] = 1
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

# Space-optimized: O(n) time, O(1) space
def climb_optimized(n):
    if n <= 1:
        return 1
    prev2, prev1 = 1, 1
    for i in range(2, n + 1):
        curr = prev1 + prev2
        prev2 = prev1
        prev1 = curr
    return prev1

# climb_optimized(10) = 89
# climb_optimized(45) = 1836311903
Notice how climbing stairs is essentially Fibonacci with different base cases. This is a pattern you will see throughout the course: many DP problems are variations of the same underlying structure.
Key Takeaways
- DP trades memory for time by storing results of overlapping subproblems so they are computed only once.
- A problem needs both overlapping subproblems and optimal substructure to benefit from DP.
- Memoization (top-down) adds a cache to recursion. Tabulation (bottom-up) fills a table iteratively. Both achieve the same time complexity.
- The four-step framework (state, recurrence, base cases, fill order) works for every DP problem.
- In interviews, start with brute force, add memoization, then convert to tabulation with space optimization.
What Is Next
In the next lesson, we dive into 1D DP problems — six classic problems that form the foundation of DP interview questions. Each problem is solved with the brute force to memoization to tabulation progression, with complete Python code and complexity analysis.
Lilly Tech Systems