Medium: DP & Design (31-40)
This lesson combines dynamic programming problems with system design data structures. DP problems test algorithmic thinking, while design problems (LRU Cache, Min Stack) test your ability to compose data structures — both are heavily tested at AI/ML companies.
Problem 31: Coin Change
LeetCode #322 — Given an integer array coins and an integer amount, return the minimum number of coins needed to make up that amount, or -1 if it cannot be made up.
Pattern: DP (Unbounded Knapsack) — dp[i] = min coins to make amount i.
ML Context: Resource allocation in distributed training (splitting batches across GPUs with minimum communication overhead) follows the same optimization structure as coin change.
def coinChange(coins, amount):
    """
    Time: O(amount * len(coins))
    Space: O(amount)
    """
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i and dp[i - coin] + 1 < dp[i]:
                dp[i] = dp[i - coin] + 1
    return dp[amount] if dp[amount] != float('inf') else -1
# coinChange([1,5,11], 15) -> 3 (5+5+5)
# coinChange([2], 3) -> -1
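The same recurrence can also be written top-down with memoization, which some interviewers prefer as a starting point before deriving the table. A sketch (coinChangeTopDown is a name chosen here), with the same asymptotic complexity:

```python
from functools import lru_cache

def coinChangeTopDown(coins, amount):
    """Top-down memoized form of the same recurrence."""
    @lru_cache(maxsize=None)
    def best(rem):
        if rem == 0:
            return 0
        if rem < 0:
            return float('inf')
        # Try each coin as the last coin used
        return min((best(rem - c) + 1 for c in coins), default=float('inf'))

    result = best(amount)
    return result if result != float('inf') else -1

# coinChangeTopDown([1, 5, 11], 15) -> 3
```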
Problem 32: Word Break
LeetCode #139 — Given a string s and a dictionary of strings wordDict, return true if s can be segmented into space-separated dictionary words.
Pattern: DP — dp[i] = true if s[0:i] can be segmented. Check all possible last words.
ML Context: Word segmentation is directly used in NLP tokenizers. BPE and WordPiece tokenization algorithms solve a similar segmentation problem to determine optimal subword splits.
def wordBreak(s, wordDict):
    """
    Time: O(n^2) substring checks, each up to O(n) to slice and hash,
          so O(n^3) worst case, where n = len(s)
    Space: O(n)
    """
    word_set = set(wordDict)
    dp = [False] * (len(s) + 1)
    dp[0] = True
    for i in range(1, len(s) + 1):
        for j in range(i):
            if dp[j] and s[j:i] in word_set:
                dp[i] = True
                break
    return dp[len(s)]
# wordBreak("leetcode", ["leet","code"]) -> True
# wordBreak("catsandog", ["cats","dog","sand","and","cat"]) -> False
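A top-down variant can prune the search by never trying a prefix longer than the longest dictionary word. A sketch (wordBreakTopDown and can_break are names chosen here):

```python
from functools import lru_cache

def wordBreakTopDown(s, wordDict):
    """Top-down variant: can the suffix s[i:] be segmented?"""
    word_set = set(wordDict)
    max_len = max(map(len, word_set), default=0)

    @lru_cache(maxsize=None)
    def can_break(i):
        if i == len(s):
            return True
        # Only try prefixes no longer than the longest dictionary word
        for j in range(i + 1, min(i + max_len, len(s)) + 1):
            if s[i:j] in word_set and can_break(j):
                return True
        return False

    return can_break(0)

# wordBreakTopDown("leetcode", ["leet", "code"]) -> True
```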
Problem 33: House Robber
LeetCode #198 — Given an integer array representing money at each house, return the maximum amount you can rob without robbing two adjacent houses.
Pattern: DP — dp[i] = max(dp[i-1], dp[i-2] + nums[i]). At each house, choose to rob or skip.
ML Context: This "take or skip" pattern appears in feature selection where selecting adjacent features may cause multicollinearity, and you want to maximize information gain while avoiding redundancy.
def rob(nums):
    """
    Time: O(n), Space: O(1)
    """
    if not nums:
        return 0
    if len(nums) == 1:
        return nums[0]
    prev2, prev1 = 0, 0
    for num in nums:
        curr = max(prev1, prev2 + num)
        prev2 = prev1
        prev1 = curr
    return prev1
# rob([1,2,3,1]) -> 4 (rob house 0 and 2)
# rob([2,7,9,3,1]) -> 12 (rob house 0, 2, 4)
Problem 34: Unique Paths
LeetCode #62 — A robot is at the top-left corner of an m x n grid. It can only move right or down. How many unique paths are there to reach the bottom-right corner?
Pattern: 2D DP — dp[i][j] = dp[i-1][j] + dp[i][j-1]. Can be optimized to 1D.
ML Context: Grid-based path counting appears in lattice-based models for sequence labeling, and the 2D DP structure mirrors convolution computations in CNNs.
def uniquePaths(m, n):
    """
    Space-optimized 1D DP.
    Time: O(m * n), Space: O(n)
    """
    dp = [1] * n
    for i in range(1, m):
        for j in range(1, n):
            dp[j] += dp[j - 1]
    return dp[n - 1]
# uniquePaths(3, 7) -> 28
# uniquePaths(3, 2) -> 3
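Because every path is exactly (m - 1) down-moves and (n - 1) right-moves in some order, the count is also a single binomial coefficient. A closed-form cross-check, assuming Python 3.8+ for math.comb (uniquePathsClosedForm is a name chosen here):

```python
import math

def uniquePathsClosedForm(m, n):
    """Choose which (m - 1) of the (m + n - 2) total moves go down."""
    return math.comb(m + n - 2, m - 1)

# uniquePathsClosedForm(3, 7) -> 28
```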
Problem 35: Decode Ways
LeetCode #91 — Given a string of digits, return the number of ways to decode it, where 'A' = 1, 'B' = 2, ..., 'Z' = 26.
Pattern: DP similar to Climbing Stairs — At each position, try one-digit and two-digit decodings.
ML Context: Ambiguity in decoding sequences appears in CTC (Connectionist Temporal Classification) loss used in speech recognition, where multiple alignments can produce the same output sequence.
def numDecodings(s):
    """
    Time: O(n), Space: O(1)
    """
    if not s or s[0] == '0':
        return 0
    prev2, prev1 = 1, 1
    for i in range(1, len(s)):
        curr = 0
        if s[i] != '0':
            curr += prev1  # single digit decode
        two_digit = int(s[i-1:i+1])
        if 10 <= two_digit <= 26:
            curr += prev2  # two digit decode
        prev2 = prev1
        prev1 = curr
    return prev1
# numDecodings("12") -> 2 ("AB" or "L")
# numDecodings("226") -> 3 ("BZ", "VF", "BBF")
# numDecodings("06") -> 0 (invalid: leading zero)
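To build confidence in the DP, an exponential-time brute force that recursively tries a one-digit and a two-digit first decode agrees with it on non-empty inputs. A verification sketch only (numDecodingsBrute is a name chosen here):

```python
def numDecodingsBrute(s):
    """Exponential cross-check: enumerate every valid split."""
    if not s:
        return 1  # an empty suffix means one complete decoding was found
    count = 0
    if s[0] != '0':
        count += numDecodingsBrute(s[1:])       # take one digit
    if len(s) >= 2 and 10 <= int(s[:2]) <= 26:
        count += numDecodingsBrute(s[2:])       # take two digits
    return count

# numDecodingsBrute("226") -> 3
```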
Problem 36: LRU Cache
LeetCode #146 — Design a data structure that follows the Least Recently Used (LRU) cache eviction policy. Implement get and put in O(1) time.
Pattern: Hash Map + Doubly Linked List — Map provides O(1) lookup, DLL provides O(1) reordering.
ML Context: LRU caching is used in model serving to cache inference results for repeated inputs, in embedding table management for large recommendation models, and in KV-cache for transformer inference.
from collections import OrderedDict

class LRUCache:
    """
    Using Python's OrderedDict for a clean implementation.
    All operations: O(1) time.
    """
    def __init__(self, capacity):
        self.cache = OrderedDict()
        self.capacity = capacity

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict LRU
# cache = LRUCache(2)
# cache.put(1, 1); cache.put(2, 2)
# cache.get(1) -> 1
# cache.put(3, 3) -> evicts key 2
# cache.get(2) -> -1
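Interviewers often ask for the explicit hash map plus doubly linked list named in the pattern, rather than OrderedDict. A sketch with head/tail sentinels (Node and LRUCacheManual are names chosen here):

```python
class Node:
    __slots__ = ('key', 'val', 'prev', 'next')
    def __init__(self, key=0, val=0):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCacheManual:
    """Hash map -> node; list ordered LRU (head side) to MRU (tail side)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}
        self.head, self.tail = Node(), Node()  # sentinels
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _append(self, node):
        # Insert just before the tail sentinel (MRU position)
        node.prev, node.next = self.tail.prev, self.tail
        self.tail.prev.next = node
        self.tail.prev = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)
        self._append(node)
        return node.val

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self.map[key] = node
        self._append(node)
        if len(self.map) > self.capacity:
            lru = self.head.next
            self._remove(lru)
            del self.map[lru.key]
```

The sentinels remove every edge case around an empty list: _remove and _append never have to check for None neighbors.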
Problem 37: Min Stack
LeetCode #155 — Design a stack that supports push, pop, top, and retrieving the minimum element, all in O(1) time.
Pattern: Two Stacks — Main stack for values, auxiliary stack tracking minimum at each level.
ML Context: Tracking running minimums/maximums is used in training monitoring (tracking best validation loss), gradient clipping (tracking gradient norms), and early stopping logic.
class MinStack:
    """All operations: O(1) time, O(n) space."""
    def __init__(self):
        self.stack = []
        self.min_stack = []

    def push(self, val):
        self.stack.append(val)
        min_val = min(val, self.min_stack[-1] if self.min_stack else val)
        self.min_stack.append(min_val)

    def pop(self):
        self.stack.pop()
        self.min_stack.pop()

    def top(self):
        return self.stack[-1]

    def getMin(self):
        return self.min_stack[-1]
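An equivalent single-stack variant stores (value, running minimum) pairs, trading the second list for slightly larger entries. A sketch (MinStackPairs is a name chosen here):

```python
class MinStackPairs:
    """Each entry carries the minimum of everything at or below it."""
    def __init__(self):
        self.stack = []  # list of (val, min_so_far) pairs

    def push(self, val):
        cur_min = min(val, self.stack[-1][1]) if self.stack else val
        self.stack.append((val, cur_min))

    def pop(self):
        self.stack.pop()

    def top(self):
        return self.stack[-1][0]

    def getMin(self):
        return self.stack[-1][1]

# s = MinStackPairs()
# s.push(-2); s.push(0); s.push(-3)
# s.getMin() -> -3; s.pop(); s.top() -> 0; s.getMin() -> -2
```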
Problem 38: Top K Frequent Elements
LeetCode #347 — Given an integer array and an integer k, return the k most frequent elements.
Pattern: Bucket Sort or Heap — Count frequencies, then find top k.
ML Context: Finding top-k frequent items is used in vocabulary construction for NLP models, popular item recommendation, and frequent pattern mining in user behavior data.
from collections import Counter

def topKFrequent(nums, k):
    """
    Bucket sort approach.
    Time: O(n), Space: O(n)
    """
    count = Counter(nums)
    # Buckets: index = frequency, value = list of nums with that frequency
    buckets = [[] for _ in range(len(nums) + 1)]
    for num, freq in count.items():
        buckets[freq].append(num)
    result = []
    for i in range(len(buckets) - 1, 0, -1):
        for num in buckets[i]:
            result.append(num)
            if len(result) == k:
                return result
    return result
# topKFrequent([1,1,1,2,2,3], 2) -> [1,2]
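The heap route named in the pattern is shorter: heapq.nlargest over the counter keys runs in O(n log k). A sketch (topKFrequentHeap is a name chosen here):

```python
import heapq
from collections import Counter

def topKFrequentHeap(nums, k):
    """O(n log k): nlargest keeps a size-k heap keyed on frequency."""
    count = Counter(nums)
    return heapq.nlargest(k, count.keys(), key=count.get)

# topKFrequentHeap([1, 1, 1, 2, 2, 3], 2) -> [1, 2]
```

The bucket version wins asymptotically at O(n), but the heap version is what you reach for when k is small relative to the number of distinct elements.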
Problem 39: Meeting Rooms II
LeetCode #253 — Given an array of meeting time intervals, find the minimum number of conference rooms required.
Pattern: Sort + Min Heap (or Event Sweep) — Track end times of ongoing meetings.
ML Context: GPU scheduling in training clusters is the same problem: given training jobs with start/end times, how many GPUs do you need? This is also used in batch inference request scheduling.
import heapq

def minMeetingRooms(intervals):
    """
    Time: O(n log n) for sorting and heap operations
    Space: O(n) for the heap
    """
    if not intervals:
        return 0
    intervals.sort(key=lambda x: x[0])
    heap = []  # end times of ongoing meetings
    for start, end in intervals:
        if heap and heap[0] <= start:
            heapq.heapreplace(heap, end)  # reuse room
        else:
            heapq.heappush(heap, end)  # new room
    return len(heap)
# minMeetingRooms([[0,30],[5,10],[15,20]]) -> 2
# minMeetingRooms([[7,10],[2,4]]) -> 1
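The event-sweep alternative named in the pattern sorts starts and ends separately and scans them as +1/-1 events; the answer is the peak concurrency. A sketch (minMeetingRoomsSweep is a name chosen here):

```python
def minMeetingRoomsSweep(intervals):
    """Sweep line: +1 at each start, -1 at each end; return the peak."""
    starts = sorted(s for s, _ in intervals)
    ends = sorted(e for _, e in intervals)
    rooms = peak = 0
    i = j = 0
    while i < len(starts):
        if starts[i] < ends[j]:
            rooms += 1  # a meeting starts before the earliest one ends
            peak = max(peak, rooms)
            i += 1
        else:
            rooms -= 1  # the earliest-ending meeting frees a room
            j += 1
    return peak
```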
Problem 40: Task Scheduler
LeetCode #621 — Given a list of tasks (characters) and a cooldown interval n, return the minimum number of intervals the CPU needs to finish all tasks. The same task must have at least n intervals between executions.
Pattern: Greedy + Math — The most frequent task determines the structure; fill idle slots with other tasks.
ML Context: Task scheduling with cooldown constraints appears in rate-limited API calls for data collection, GPU kernel scheduling to avoid resource conflicts, and batch job orchestration in training pipelines.
from collections import Counter

def leastInterval(tasks, n):
    """
    Time: O(T) where T = len(tasks)
    Space: O(1) - at most 26 task types
    """
    counts = Counter(tasks)
    max_count = max(counts.values())
    # How many tasks have the maximum frequency?
    max_count_tasks = sum(1 for c in counts.values() if c == max_count)
    # Formula: (max_count - 1) * (n + 1) + max_count_tasks
    # But the result cannot be less than the total number of tasks
    result = (max_count - 1) * (n + 1) + max_count_tasks
    return max(result, len(tasks))
# leastInterval(["A","A","A","B","B","B"], 2) -> 8
# (A B _ A B _ A B) -> 8 intervals
# leastInterval(["A","A","A","B","B","B"], 0) -> 6
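The formula can be cross-checked with a heap-based simulation that always runs the most frequent ready task and parks the rest in a cooldown queue. A verification sketch, not the intended interview answer (leastIntervalSim is a name chosen here):

```python
import heapq
from collections import Counter, deque

def leastIntervalSim(tasks, n):
    """Simulate: each tick, run the most frequent task not cooling down."""
    heap = [-c for c in Counter(tasks).values()]  # max-heap via negation
    heapq.heapify(heap)
    cooling = deque()  # (negative remaining count, time it becomes ready)
    time = 0
    while heap or cooling:
        time += 1
        if heap:
            cnt = heapq.heappop(heap) + 1  # run one instance
            if cnt < 0:
                cooling.append((cnt, time + n))
        # Otherwise the CPU idles this interval
        if cooling and cooling[0][1] == time:
            heapq.heappush(heap, cooling.popleft()[0])
    return time

# leastIntervalSim(["A","A","A","B","B","B"], 2) -> 8
```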
Key Takeaways
- DP problems (Coin Change, Word Break, House Robber, Unique Paths, Decode Ways) all follow the same framework: define state, write recurrence, set base cases, optimize space.
- Design problems (LRU Cache, Min Stack) test your ability to combine data structures to achieve optimal time complexity for multiple operations simultaneously.
- Heap problems (Top K Frequent, Meeting Rooms II) are about maintaining partial order efficiently. When you need the min or max of a dynamic set, think heap.
- Greedy with math (Task Scheduler) is a pattern where the optimal solution can be computed directly from the structure of the input, without simulation.
- Design problems are especially common at AI/ML companies because building production ML systems requires composing data structures for caching, scheduling, and resource management.
Lilly Tech Systems