Advanced

Pruning & Optimization

Raw backtracking explores an exponential search space. The difference between a TLE solution and an accepted one is almost always pruning — cutting branches of the decision tree that cannot lead to valid solutions. This lesson covers the four main optimization strategies: constraint pruning, symmetry breaking, memoization, and branch-and-bound.

Technique 1: Constraint Pruning

Check constraints before recursing, not after. Every invalid branch you avoid saves an entire subtree of computation.

Example: Combination Sum with Sort + Break

def combination_sum_pruned(candidates: list[int], target: int) -> list[list[int]]:
    """Combination sum with aggressive constraint pruning.

    Three levels of pruning:
    1. Sort candidates: enables break on overshoot
    2. Break (not continue) when candidate > remaining
       Because the array is sorted, all subsequent candidates are also too large
    3. Skip duplicates at same level

    Without pruning: explores ~3^20 nodes for 20 candidates
    With pruning: explores ~100-1000 nodes for same input

    Time: O(n^(T/M)) but drastically reduced in practice
    Space: O(T/M) recursion depth
    """
    candidates.sort()  # Pruning prerequisite
    result = []

    def backtrack(start, path, remaining):
        if remaining == 0:
            result.append(path[:])
            return

        for i in range(start, len(candidates)):
            # PRUNING: sorted array, so break not continue
            if candidates[i] > remaining:
                break  # all future candidates are also too large
            # PRUNING: skip a duplicate value at the same level
            if i > start and candidates[i] == candidates[i - 1]:
                continue

            path.append(candidates[i])
            backtrack(i, path, remaining - candidates[i])
            path.pop()

    backtrack(0, [], target)
    return result
# combination_sum_pruned([2, 3, 6, 7], 7) -> [[2, 2, 3], [7]]
💡
Key Insight: break vs continue is the difference between skipping one candidate and skipping every remaining candidate at that level, along with all of their subtrees. Always ask: "Can I guarantee that all remaining candidates will also fail?"
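To see the savings concretely, here is a small instrumented sketch (a hypothetical helper, not part of the solution above) that counts recursive calls for combination sum with and without the sort + break pruning:

```python
def count_calls(candidates: list[int], target: int, prune: bool) -> int:
    """Count backtrack() invocations for combination sum,
    with or without sort + break pruning."""
    cands = sorted(candidates)
    calls = 0

    def backtrack(start: int, remaining: int) -> None:
        nonlocal calls
        calls += 1
        if remaining <= 0:
            return  # hit the target or overshot it
        for i in range(start, len(cands)):
            if prune and cands[i] > remaining:
                break  # sorted: everything after is also too large
            backtrack(i, remaining - cands[i])

    backtrack(0, target)
    return calls

# The pruned version visits strictly fewer nodes, e.g.:
# count_calls([2, 3, 5], 30, prune=True) < count_calls([2, 3, 5], 30, prune=False)
```

The unpruned version recurses into every candidate and only notices the overshoot after the call, which is exactly the "check after recursing" mistake the technique avoids.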

Technique 2: Symmetry Breaking

Many problems have symmetrical solutions that differ only in ordering. By imposing an ordering constraint, you can avoid exploring equivalent branches.

Example: K Equal Subsets with Symmetry Breaking

def can_partition_symmetry(nums: list[int], k: int) -> bool:
    """Partition K subsets with symmetry breaking.

    Key symmetry insight: if we swap the contents of two
    identically-valued buckets, we get the same partition.
    So we skip buckets with the same current sum.

    Additional optimization: sort descending to fail fast
    (large numbers overflow target earlier).

    Speedup: from O(k^n) to roughly O(k^n / k!) by avoiding
    equivalent bucket orderings.
    """
    total = sum(nums)
    if total % k != 0:
        return False
    target = total // k

    nums.sort(reverse=True)
    if nums[0] > target:
        return False

    buckets = [0] * k

    def backtrack(idx):
        if idx == len(nums):
            return True

        seen = set()  # symmetry breaking: skip identical bucket sums
        for j in range(k):
            if buckets[j] + nums[idx] > target:
                continue
            if buckets[j] in seen:
                continue  # this bucket sum was already tried
            seen.add(buckets[j])

            buckets[j] += nums[idx]
            if backtrack(idx + 1):
                return True
            buckets[j] -= nums[idx]

            # If empty bucket failed, no other empty bucket will work
            if buckets[j] == 0:
                break

        return False

    return backtrack(0)

Technique 3: Memoization (Backtracking + DP)

When backtracking has overlapping subproblems, cache the results of recursive calls. This converts exponential backtracking into polynomial dynamic programming.

When to Add Memoization

| Signal | Example | Cache Key |
| --- | --- | --- |
| Same state reached via different paths | Subset sum, word break | (index, remaining_sum) |
| Decision depends only on position + constraint | Grid paths, coin change | (row, col) or (index, amount) |
| The recursion tree has shared subtrees | Fibonacci, climbing stairs | (n,) |

Example: Fibonacci Without and With Memoization

# WITHOUT memoization: O(2^n)
def fib_backtrack(n: int) -> int:
    """Pure recursion. Exponential because of overlapping calls.
    fib(5) calls fib(3) twice, fib(2) three times, etc.
    """
    if n <= 1:
        return n
    return fib_backtrack(n - 1) + fib_backtrack(n - 2)

# WITH memoization: O(n)
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    """Memoized recursion. Each unique call computed once.
    Cache stores results for fib(0) through fib(n).

    Time: O(n), Space: O(n)
    """
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

Example: Memoized Subset Sum

def subset_sum_memo(nums: list[int], target: int) -> bool:
    """Subset sum with memoization.

    Without memo: O(2^n) - each element include/exclude
    With memo: O(n * target) - at most n * target unique states

    The memo key is (index, remaining_target) because the
    answer depends only on which elements remain and what
    sum we still need.
    """
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def dp(idx, remaining):
        if remaining == 0:
            return True
        if idx >= len(nums) or remaining < 0:
            return False
        # Include or exclude nums[idx]
        return dp(idx + 1, remaining - nums[idx]) or dp(idx + 1, remaining)

    return dp(0, target)

# subset_sum_memo([3, 34, 4, 12, 5, 2], 9) -> True
Rule of Thumb: If your backtracking function's return value depends only on a small number of parameters (not the full path), you can add memoization. If the result depends on the specific path taken, memoization usually does not apply.
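Word break, listed in the table above, is a second instance of the same pattern. Here is a minimal sketch (assuming the usual formulation: can the string be segmented into dictionary words?) where the cache key is just the start index, because the answer for a suffix does not depend on how we reached it:

```python
from functools import lru_cache

def word_break(s: str, words: list[str]) -> bool:
    """Word break with memoization.

    Without memo: exponential (the same suffix is re-solved many times)
    With memo: each of the len(s) suffixes is solved once.

    Cache key: start index of the unsolved suffix.
    """
    # Drop empty strings to avoid infinite recursion
    word_list = [w for w in set(words) if w]

    @lru_cache(maxsize=None)
    def dp(start: int) -> bool:
        if start == len(s):
            return True  # consumed the whole string
        return any(s.startswith(w, start) and dp(start + len(w))
                   for w in word_list)

    return dp(0)

# word_break("leetcode", ["leet", "code"]) -> True
# word_break("catsandog", ["cats", "dog", "sand", "and", "cat"]) -> False
```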

Technique 4: Branch and Bound

Branch and bound extends backtracking for optimization problems. It maintains a bound (optimistic estimate) and prunes branches that cannot improve on the current best solution.

Example: 0/1 Knapsack with Branch and Bound

def knapsack_branch_bound(weights: list[int], values: list[int],
                          capacity: int) -> int:
    """0/1 Knapsack with branch and bound.

    At each item, compute an upper bound on the value achievable
    from this branch (using fractional relaxation). If the upper
    bound is less than the current best, prune the branch.

    This is exactly how optimization solvers (PuLP, Gurobi) work
    internally for integer programming problems.

    Time: O(2^n) worst case, but pruning makes it practical
    Space: O(n) recursion depth
    """
    n = len(weights)
    best = [0]

    # Sort by value/weight ratio for tighter bounds
    items = sorted(range(n), key=lambda i: values[i] / weights[i],
                   reverse=True)
    sorted_w = [weights[i] for i in items]
    sorted_v = [values[i] for i in items]

    def upper_bound(idx, curr_weight, curr_value):
        """Fractional relaxation bound: greedily fill remaining capacity."""
        bound = curr_value
        remaining = capacity - curr_weight
        for i in range(idx, n):
            if sorted_w[i] <= remaining:
                remaining -= sorted_w[i]
                bound += sorted_v[i]
            else:
                bound += sorted_v[i] * (remaining / sorted_w[i])
                break
        return bound

    def backtrack(idx, curr_weight, curr_value):
        if curr_value > best[0]:
            best[0] = curr_value

        if idx >= n:
            return

        # BOUND: prune if best possible from here cannot beat current best
        if upper_bound(idx, curr_weight, curr_value) <= best[0]:
            return

        # BRANCH 1: include item idx
        if curr_weight + sorted_w[idx] <= capacity:
            backtrack(idx + 1,
                      curr_weight + sorted_w[idx],
                      curr_value + sorted_v[idx])

        # BRANCH 2: exclude item idx
        backtrack(idx + 1, curr_weight, curr_value)

    backtrack(0, 0, 0)
    return best[0]

# knapsack_branch_bound([2, 3, 4, 5], [3, 4, 5, 6], 8) -> 10
💡
ML Context: Branch and bound is the core algorithm behind mixed-integer programming solvers used in ML for optimal feature selection, decision tree optimization (optimal decision trees via MIP), and neural network verification (proving properties about network outputs).

Pruning Techniques Catalog

| Technique | How It Works | Speedup | Example Problems |
| --- | --- | --- | --- |
| Sort + Break | Sort candidates, break when current exceeds limit | Prunes all subsequent branches | Combination sum, subsets |
| Duplicate Skipping | Sort, skip same value at same recursion level | Eliminates redundant branches | Permutations II, subsets II |
| Constraint Sets | Track used rows/cols/diags in O(1) sets | O(1) validity check vs O(n) | N-Queens, Sudoku |
| Symmetry Breaking | Skip equivalent states (e.g., identical buckets) | Reduces by factor of k! | Partition K subsets |
| Memoization | Cache results of overlapping subproblems | Exponential → polynomial | Subset sum, word break |
| Bound Estimation | Compute optimistic bound, prune if worse than best | Prunes non-promising subtrees | Knapsack, TSP |
| Early Termination | Return immediately when a solution is found | Stops search entirely | Word search, Sudoku |
| Size Feasibility | Check if remaining elements can fill remaining slots | Prunes infeasible branches | Combinations, IP addresses |
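The last row, size feasibility, deserves a quick illustration. When generating k-sized combinations of 1..n, any branch where the numbers still available cannot fill the remaining slots can be cut immediately (a minimal sketch):

```python
def combine(n: int, k: int) -> list[list[int]]:
    """All k-sized combinations of 1..n, with size-feasibility pruning."""
    result = []

    def backtrack(start: int, path: list[int]) -> None:
        # SIZE FEASIBILITY: numbers left (n - start + 1) must cover
        # the slots still to fill (k - len(path))
        if n - start + 1 < k - len(path):
            return
        if len(path) == k:
            result.append(path[:])
            return
        for i in range(start, n + 1):
            path.append(i)
            backtrack(i + 1, path)
            path.pop()

    backtrack(1, [])
    return result

# combine(4, 2) -> [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
```

Without the check, a branch like path=[4] for combine(4, 2) would still loop over an empty range before failing; the feasibility test rejects it the moment it becomes hopeless.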

When Backtracking Becomes DP

The transition from backtracking to dynamic programming happens when you notice overlapping subproblems. Here is the decision framework:


Pure Backtracking

Use when the problem requires all solutions (not just count/optimal), and the path matters. No overlapping subproblems. Examples: permutations, N-Queens, Sudoku.

Backtracking + Memo

Use when the problem has overlapping subproblems but you still need the recursive structure. The cache key captures the state fully. Examples: word break II, subset sum.


Full DP (Bottom-Up)

Use when you only need the count or optimal value (not all solutions). Convert memo to a table. Examples: coin change, knapsack (count), unique paths (count).


Branch and Bound

Use for optimization problems where you need the best solution. Maintain bounds to prune non-promising branches. Examples: TSP, knapsack (optimal), scheduling.

Performance Comparison: Pruning Impact

| Problem | No Pruning | With Pruning | Reduction |
| --- | --- | --- | --- |
| Subsets (n=20) | 1,048,576 nodes | ~1,048,576 nodes | Cannot prune (need all) |
| Combination Sum (n=20, target=50) | ~3^20 nodes | ~1,000 nodes | 99.9% pruned |
| N-Queens (n=12) | ~12^12 nodes | ~856,000 nodes | 99.99% pruned |
| Sudoku (easy) | ~9^50 nodes | ~100 nodes | >99.99% pruned |
| Knapsack (n=20, branch & bound) | 1,048,576 nodes | ~500 nodes | 99.95% pruned |
Interview Strategy: Always present the brute-force backtracking first, then add pruning optimizations one at a time. This shows the interviewer your thought process: "This works but is O(k^n). Let me add sort+break pruning to cut branches where the sum exceeds the target. Now let me add duplicate skipping..."