1D DP Problems
Solve linear sequence problems using single-dimensional state arrays
Introduction to 1D Dynamic Programming
Have you ever felt the frustration of solving a problem the "obvious" way, only to watch your code time out on larger inputs? You write a perfectly logical recursive solution, it works beautifully on small test cases, and then—boom—it takes three minutes to process what should take milliseconds. You're not alone. This experience is exactly what leads thousands of developers to discover dynamic programming (DP), and specifically 1D DP problems, which are your gateway to mastering this essential algorithmic technique. Whether you're preparing for technical interviews or simply want to level up your problem-solving skills, understanding 1D DP will transform how you approach optimization problems. And to help you master these concepts, we've included free flashcards throughout this lesson to reinforce the key ideas as you learn.
But what exactly makes a problem a "1D DP" problem, and why should you care? The answer lies in understanding a fundamental truth about computation: some problems ask you to make the same calculations over and over again. Imagine climbing a staircase where you can take either one or two steps at a time. How many different ways can you reach the 50th step? Your instinct might be to explore every possible path recursively—and you'd be right in principle, but devastatingly wrong in practice. That recursive approach would explore the same partial solutions millions of times, recalculating the same values repeatedly. This is where dynamic programming shines, and 1D DP is where your journey begins.
What Is 1D Dynamic Programming?
Dynamic programming is an algorithmic paradigm that solves complex problems by breaking them down into simpler subproblems and storing the results to avoid redundant calculations. The "1D" in 1D DP refers to the dimensionality of the data structure we use to store these intermediate results—typically a single array or list indexed by one variable.
🎯 Key Principle: Dynamic programming trades space for time. We use extra memory to store solutions to subproblems so we never have to solve the same subproblem twice.
Let's contrast this with other problem-solving approaches you might already know:
Greedy Algorithms make locally optimal choices at each step, hoping to find a global optimum. They're fast (usually O(n) or O(n log n)) but don't work for all problems. For instance, if you're making change with coins of denominations [1, 3, 4] and need to make 6 cents, a greedy approach might choose 4 + 1 + 1 = 3 coins, while the optimal solution is 3 + 3 = 2 coins. Greedy algorithms can't "look ahead" to see if a different choice would be better down the line.
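To see this failure concretely, here's a small sketch (the function names are ours, for illustration) pitting the greedy choice against a DP that considers every coin at every sub-amount:

```python
def greedy_coins(coins, target):
    # Always grab the largest coin that still fits -- locally optimal
    count = 0
    for coin in sorted(coins, reverse=True):
        while target >= coin:
            target -= coin
            count += 1
    return count if target == 0 else None  # None if greedy gets stuck

def min_coins(coins, target):
    # dp[a] = fewest coins that sum to amount a (None if impossible)
    INF = float('inf')
    dp = [0] + [INF] * target
    for a in range(1, target + 1):
        for coin in coins:
            if coin <= a:
                dp[a] = min(dp[a], dp[a - coin] + 1)
    return dp[target] if dp[target] != INF else None

print(greedy_coins([1, 3, 4], 6))  # 3  (greedy takes 4 + 1 + 1)
print(min_coins([1, 3, 4], 6))     # 2  (DP finds 3 + 3)
```

The greedy version never reconsiders the 4-cent coin once taken; the DP version compares every option at every amount, which is exactly the "look ahead" greedy lacks.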
Divide-and-Conquer breaks problems into independent subproblems, solves them separately, and combines the results (like merge sort or quicksort). The crucial difference from DP is that in divide-and-conquer, the subproblems are independent—solving one doesn't help you solve another. In DP, subproblems overlap—the same subproblem appears multiple times in different branches of your recursion tree.
Brute Force Recursion explores all possibilities, which is correct but often exponentially slow. DP transforms this exponential time into polynomial time by remembering what you've already computed.
💡 Mental Model: Think of DP as "smart recursion." You're still breaking the problem into subproblems like recursion does, but you're keeping a notebook (the DP array) where you write down every answer you calculate, so you never have to solve the same problem twice.
The Two Pillars: Identifying 1D DP Problems
Not every problem is suitable for dynamic programming. Two essential characteristics must be present:
1. Optimal Substructure
Optimal substructure means that the optimal solution to the problem can be constructed from optimal solutions to its subproblems. In other words, if you know the best answers to smaller versions of your problem, you can combine them to find the best answer to the larger problem.
Consider the classic staircase problem: "How many ways can you climb to step n if you can take 1 or 2 steps at a time?" The number of ways to reach step n is the sum of:
- Ways to reach step n-1 (then take 1 step)
- Ways to reach step n-2 (then take 2 steps)
This is optimal substructure—the solution to the larger problem (step n) is built directly from solutions to smaller problems (steps n-1 and n-2).
2. Overlapping Subproblems
Overlapping subproblems means that when you break down the problem recursively, you end up solving the same smaller problems multiple times. This is what makes memoization and tabulation so powerful.
Let's visualize this with the Fibonacci sequence:
```
                fib(5)
               /      \
          fib(4)      fib(3)
         /      \     /     \
    fib(3)  fib(2) fib(2)  fib(1)
    /    \
fib(2)  fib(1)   ...
```
Notice how fib(3) appears twice, fib(2) appears three times—and this is just for n=5! The overlapping nature grows exponentially with larger inputs.
🤔 Did you know? Computing fib(50) using naive recursion would require over 40 billion function calls, but with dynamic programming, it requires exactly 50 calculations. That's the difference between milliseconds and years of computation!
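You don't have to take those numbers on faith: the naive recursion is easy to instrument with a call counter. A quick sketch (the exact counts follow 2 * fib(n+1) - 1):

```python
def fib_calls(n, counter):
    # Same naive recursion, but counting every function call
    counter[0] += 1
    if n <= 1:
        return n
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

for n in (10, 20, 25):
    calls = [0]
    fib_calls(n, calls)
    print(n, calls[0])  # 10 -> 177, 20 -> 21891, 25 -> 242785 calls
```

The count itself grows like the Fibonacci sequence, which is why adding just a few steps to n multiplies the work.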
Common 1D DP Problem Categories
Once you understand the fundamentals, you'll start recognizing 1D DP problems in the wild. They tend to fall into several common patterns:
Pattern 1: Path Counting Problems
Problems: Climbing Stairs, Decode Ways, Unique Paths (1D variant)
Essence: "How many ways can I reach a goal state?"
These problems ask you to count the number of distinct ways to arrive at a particular state. The recurrence relation typically sums the ways to reach the current state from all possible previous states.
```python
# Climbing Stairs: n steps, can take 1 or 2 steps at a time
# How many distinct ways to reach the top?
def climbStairs(n):
    if n <= 2:
        return n
    # dp[i] represents ways to reach step i
    dp = [0] * (n + 1)
    dp[1] = 1  # One way to reach step 1
    dp[2] = 2  # Two ways to reach step 2 (1+1 or 2)
    for i in range(3, n + 1):
        # Ways to reach step i = ways to reach i-1 + ways to reach i-2
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
```
💡 Pro Tip: Path counting problems often have recurrence relations that look like variations of the Fibonacci sequence. Once you spot this pattern, you know you're dealing with 1D DP.
Pattern 2: Optimization Problems (Min/Max)
Problems: House Robber, Minimum Cost Climbing Stairs, Coin Change
Essence: "What's the minimum/maximum value I can achieve?"
These problems ask you to optimize some value (minimize cost, maximize profit) while making sequential decisions.
```python
# House Robber: Rob houses along a street, can't rob adjacent houses
# What's the maximum amount you can rob?
def rob(nums):
    if not nums:
        return 0
    if len(nums) == 1:
        return nums[0]
    n = len(nums)
    # dp[i] = maximum amount we can rob from houses 0 to i
    dp = [0] * n
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])
    for i in range(2, n):
        # Either rob house i (+ max from i-2) or skip it (take max from i-1)
        dp[i] = max(dp[i-1], nums[i] + dp[i-2])
    return dp[n-1]
```
The key insight here: at each house, you make a decision (rob or don't rob), and you choose whichever gives you the maximum total.
Pattern 3: Coin Change Variants
Problems: Coin Change (minimum coins), Coin Change 2 (number of ways), Perfect Squares
Essence: "What's the optimal way to compose a target from available elements?"
These problems involve building up to a target value using a set of available options, typically asking for the minimum number of items needed or the number of ways to achieve the target.
```python
# Coin Change: Given coins of different denominations and a total amount,
# return the minimum number of coins needed to make that amount
def coinChange(coins, amount):
    # dp[i] = minimum coins needed to make amount i
    # Initialize with amount + 1 (an impossible value) as an "infinity" marker
    dp = [amount + 1] * (amount + 1)
    dp[0] = 0  # Zero coins needed to make amount 0
    for i in range(1, amount + 1):
        # Try using each coin
        for coin in coins:
            if coin <= i:
                # If we use this coin, we need 1 + minimum coins for (i - coin)
                dp[i] = min(dp[i], dp[i - coin] + 1)
    # If dp[amount] is still the impossible value, return -1
    return dp[amount] if dp[amount] != amount + 1 else -1
```
⚠️ Common Mistake 1: Forgetting to initialize DP arrays with appropriate base values. For minimum problems, use infinity (or amount + 1). For counting problems, use 0. For maximum problems, use negative infinity. ⚠️
The Three-Stage Evolution: Recursion → Memoization → Tabulation
Understanding the progression from naive recursion to optimized DP is crucial for developing your problem-solving intuition. Let's trace this journey using the classic Fibonacci problem:
Stage 1: Recursive Solution (Exponential Time)
This is where most people start—the most intuitive but least efficient approach:
```python
def fib_recursive(n):
    # Base cases
    if n <= 1:
        return n
    # Recursive relation: fib(n) = fib(n-1) + fib(n-2)
    return fib_recursive(n - 1) + fib_recursive(n - 2)

# Time Complexity: O(2^n) - exponential!
# Space Complexity: O(n) - recursion call stack
```
The Problem: This recalculates the same values exponentially many times. For fib(40), this takes several seconds.
Stage 2: Memoization (Top-Down DP)
Memoization adds a cache to the recursive solution, storing results as we compute them:
```python
def fib_memo(n, memo=None):
    # Initialize memo dictionary on first call
    if memo is None:
        memo = {}
    # If we've already calculated this, return the cached result
    if n in memo:
        return memo[n]
    # Base cases
    if n <= 1:
        return n
    # Calculate and store in memo
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

# Time Complexity: O(n) - each value calculated once
# Space Complexity: O(n) - memo dictionary + recursion stack
```
💡 Mental Model: Memoization is like solving a maze with breadcrumbs. When you reach a junction you've visited before, you follow your breadcrumbs back instead of exploring again.
Stage 3: Tabulation (Bottom-Up DP)
Tabulation builds the solution iteratively from the ground up, avoiding recursion entirely:
```python
def fib_tabulation(n):
    # Handle base cases
    if n <= 1:
        return n
    # Build table from bottom up
    dp = [0] * (n + 1)
    dp[0] = 0
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

# Time Complexity: O(n)
# Space Complexity: O(n) - just the dp array, no recursion stack
```
🎯 Key Principle: Memoization is top-down (start with the big problem, break it down), while tabulation is bottom-up (start with the smallest subproblems, build up). Both have the same time complexity, but tabulation usually has better space complexity and cache performance.
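As a preview of that space advantage: since the tabulation recurrence only ever reads the previous two table entries, the whole array can collapse into two variables. A sketch of the constant-space version:

```python
def fib_constant_space(n):
    # Keep only fib(i-2) and fib(i-1) instead of the whole dp array
    if n <= 1:
        return n
    prev2, prev1 = 0, 1  # fib(0), fib(1)
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev2 + prev1
    return prev1  # fib(n)

print(fib_constant_space(50))  # 12586269025 -- O(n) time, O(1) space
```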
📋 Quick Reference Card: Recursion vs. Memoization vs. Tabulation
| Aspect | 🔴 Recursion | 🟡 Memoization | 🟢 Tabulation |
|---|---|---|---|
| ⏱️ Time | O(2^n) exponential | O(n) linear | O(n) linear |
| 💾 Space | O(n) stack | O(n) cache + stack | O(n) array only |
| 🧠 Thinking | Natural, intuitive | Add caching to recursion | Build from base cases |
| 🐛 Debug | Easy to trace | Moderate | Requires understanding order |
| ⚡ Performance | Terrible for large n | Good | Best (no recursion overhead) |
| 🎯 Use When | Understanding problem | Quick optimization | Production code |
💡 Pro Tip: When solving a new DP problem, start with recursion to understand the problem structure, add memoization to make it efficient, then convert to tabulation if you need maximum performance.
Why 1D DP Is Your Foundation
You might wonder: if there are 2D and multi-dimensional DP problems, why focus so much on 1D? The answer is that 1D DP teaches you the fundamental thinking patterns without overwhelming you with complexity.
Conceptual Clarity: In 1D DP, you track one variable—typically an index or a value you're building toward. This simplicity lets you focus on understanding:
- ✅ How to identify the state
- ✅ How to define the recurrence relation
- ✅ How to set base cases
- ✅ How to determine the iteration order
Building Blocks for Higher Dimensions: Every 2D DP problem is essentially 1D DP extended by one dimension. The classic "Unique Paths" problem (counting paths in a grid) is just "Climbing Stairs" in two dimensions. Once you master:
```
dp[i] = dp[i-1] + dp[i-2]            // 1D
```
You can easily extend to:
```
dp[i][j] = dp[i-1][j] + dp[i][j-1]   // 2D
```
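To make the connection concrete, here's a sketch of the 2D "Unique Paths" count computed with a single 1D row (assuming an m×n grid where you move only right or down; the function name is ours):

```python
def unique_paths(m, n):
    # One row of the conceptual 2D table. Before updating cell j, dp[j]
    # still holds the count from the row above, and dp[j-1] holds the
    # count from the cell to the left -- so dp[j] += dp[j-1] is exactly
    # dp[i][j] = dp[i-1][j] + dp[i][j-1], stored in one dimension.
    dp = [1] * n  # top row: one path to each cell
    for _ in range(1, m):
        for j in range(1, n):
            dp[j] += dp[j - 1]
    return dp[-1]

print(unique_paths(3, 7))  # 28 distinct right/down paths in a 3x7 grid
```

The 2D problem literally reuses the 1D machinery: one sweep of the row per grid row.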
Pattern Recognition: Most complex DP problems combine multiple 1D patterns. The famous "Best Time to Buy and Sell Stock" series starts as simple 1D DP but evolves into state machines—yet the core principles remain the same.
Interview Success: In technical interviews, interviewers often start with 1D DP problems to assess your fundamental understanding. Master these, and you'll handle follow-up questions that add complexity.
🧠 Mnemonic: MASTER 1D first:
- Memoize effectively
- Analyze subproblems
- State definition clarity
- Transition relations
- Expand to higher dimensions
- Recognize patterns quickly
The DP Mindset Shift
Learning dynamic programming requires a fundamental shift in how you think about problems. Instead of asking "How do I solve this entire problem at once?" you ask:
❌ Wrong thinking: "I need to find the answer to the whole problem directly."
✅ Correct thinking: "If I knew the answers to slightly smaller versions of this problem, how could I use them to solve the current one?"
This mindset shift is powerful because it transforms seemingly impossible problems into manageable ones. Consider the "Decode Ways" problem: given that 'A'=1, 'B'=2, ..., 'Z'=26, how many ways can you decode the string "226"? (Answer: 3 ways—"BZ", "VF", "BBF")
Instead of trying to enumerate all possibilities at once, you think: "If I'm at position i in the string, how many ways could I have gotten here?" The answer: however many ways I could decode up to position i-1 (then add one digit) plus however many ways I could decode up to position i-2 (then add two digits, if valid).
💡 Real-World Example: Dynamic programming isn't just for coding interviews. It's used in:
- 🧬 Bioinformatics: DNA sequence alignment
- 📊 Finance: Portfolio optimization
- 🎮 Game AI: Decision trees in complex games
- 📱 Autocorrect: Edit distance algorithms
- 🚗 Route Planning: Shortest path with constraints
Every time your phone corrects your typo, that's the edit distance algorithm—a DP classic—at work!
Your Path Forward
As you progress through this lesson, you'll build a systematic framework for recognizing and solving 1D DP problems. You'll learn to:
- 🔧 Identify whether a problem is suited for DP
- 🔧 Define your state and recurrence relation
- 🔧 Implement solutions in multiple ways
- 🔧 Optimize your space complexity
- 🔧 Debug when things go wrong
The journey from confusion to mastery in dynamic programming is incredibly rewarding. That moment when you look at a problem and immediately see the DP structure—when the recurrence relation just "clicks"—is one of the most satisfying experiences in algorithm learning.
Remember: every expert at dynamic programming started exactly where you are now. The difference between struggling with DP and mastering it isn't innate talent—it's pattern recognition built through practice. By focusing first on 1D problems, you're building that pattern recognition in the most efficient way possible.
In the next section, we'll dive into the Core Patterns and Problem-Solving Framework, where you'll learn the step-by-step system for approaching any 1D DP problem with confidence. You'll discover the questions to ask, the templates to follow, and the thinking process that transforms confusion into clarity.
🎯 Key Principle: DP mastery isn't about memorizing solutions—it's about internalizing the thinking process. Focus on understanding why each step makes sense, and the specific problems will solve themselves.
Let's begin building your DP intuition, one dimension at a time.
Core Patterns and Problem-Solving Framework
When you first encounter a dynamic programming problem, it can feel overwhelming. You might recognize that DP is needed, but knowing how to systematically break down the problem into a working solution is what separates beginners from masters. This section introduces a five-step framework that transforms seemingly complex 1D DP problems into manageable, solvable puzzles.
The Five-Step Framework
Every successful 1D DP solution follows a predictable pattern. By internalizing these five steps, you'll develop a systematic approach that works across hundreds of different problems:
```
Step 1: Define the State
            ↓
Step 2: Derive the Recurrence Relation
            ↓
Step 3: Identify Base Cases
            ↓
Step 4: Choose Implementation Strategy
            ↓
Step 5: Apply Space Optimization
```
Let's explore each step in depth, using concrete examples to build your intuition.
Step 1: Define the State
🎯 Key Principle: The state definition is the foundation of your entire solution. Get this right, and everything else follows naturally.
The state represents what information you need to solve subproblems. In 1D DP, we typically use dp[i] to store the answer to a subproblem, but what exactly does dp[i] represent?
The state definition varies dramatically based on the problem context:
Pattern A: Position-based states
- dp[i] = "the answer considering elements from index 0 to i"
- Example: In climbing stairs, dp[i] = "number of ways to reach step i"

Pattern B: Value-based states
- dp[i] = "the answer when the constraint/target equals i"
- Example: In coin change, dp[i] = "minimum coins needed to make amount i"

Pattern C: Length-based states
- dp[i] = "the answer for a sequence of length i"
- Example: In longest increasing subsequence, dp[i] = "length of the LIS ending at index i"
💡 Mental Model: Ask yourself: "If I knew the answers to all smaller subproblems, what specific information would I need to solve the problem at position/value/length i?"
Let's see this in action with the classic House Robber problem:
You're a robber planning to rob houses along a street. Adjacent houses have security systems connected. Given an array where nums[i] is the money at house i, what's the maximum amount you can rob without alerting police?
State Definition: dp[i] = maximum money that can be robbed from houses 0 to i (inclusive)
Notice how specific this is. We're not just saying "the answer for i" - we're being precise about what dp[i] actually represents. This precision is crucial because it guides how we build the recurrence relation.
⚠️ Common Mistake 1: Defining the state ambiguously. "dp[i] = answer for i" isn't specific enough. Does it include house i? Exclude it? Both possibilities? Be explicit! ⚠️
Step 2: Derive the Recurrence Relation
The recurrence relation is the heart of dynamic programming. It expresses how to compute dp[i] using previously computed values. This step requires analyzing the decision points in your problem.
🎯 Key Principle: At each position i, identify what choices you can make, then express the optimal solution as a function of those choices.
For the House Robber problem, at each house i, we have exactly two choices:
```
Choice 1: Rob house i
  → We get nums[i] money
  → We CANNOT rob house i-1
  → Best we can do: nums[i] + dp[i-2]

Choice 2: Don't rob house i
  → We get 0 money from house i
  → We CAN use the best solution up to i-1
  → Best we can do: dp[i-1]
```
Since we want the maximum, our recurrence relation is:
dp[i] = max(nums[i] + dp[i-2], dp[i-1])
This formula captures the essence of the problem in a single line. Let's visualize how this works:
```
Houses:  [2, 7, 9, 3, 1]
Indices:  0  1  2  3  4

At i=2 (house with $9):
  Option 1 (rob):  9 + dp[0] = 9 + 2 = 11
  Option 2 (skip): dp[1] = 7
  dp[2] = max(11, 7) = 11 ✓
```
💡 Pro Tip: When deriving recurrence relations, draw out small examples by hand. Write out the choices at each step. The pattern will emerge naturally.
Let's contrast this with another problem: Climbing Stairs.
You're climbing a staircase with n steps. You can climb 1 or 2 steps at a time. How many distinct ways can you reach the top?
State Definition: dp[i] = number of ways to reach step i
Decision Analysis: To reach step i, we could have come from:
- Step i-1 (taking 1 step) → contributes dp[i-1] ways
- Step i-2 (taking 2 steps) → contributes dp[i-2] ways
Recurrence Relation: dp[i] = dp[i-1] + dp[i-2]
Notice the difference: House Robber uses max() because we want the optimal value, while Climbing Stairs uses + because we're counting combinations. The operation in your recurrence relation directly reflects what the problem asks for.
❌ Wrong thinking: "I'll just guess a formula and see if it works."

✅ Correct thinking: "Let me analyze what decisions lead to position i, then combine the results of those decisions appropriately."
Step 3: Identify Base Cases
Base cases are the foundation values that don't depend on other subproblems. They're the anchor points that stop infinite recursion and provide starting values for iteration.
🎯 Key Principle: Base cases should be small enough that you can compute them directly without using the recurrence relation.
For House Robber:
- dp[0] = nums[0] (only one house, rob it)
- dp[1] = max(nums[0], nums[1]) (two houses, rob the better one)
For Climbing Stairs:
- dp[0] = 1 (one way to stay at ground: don't move)
- dp[1] = 1 (one way to reach step 1: take one step)
- dp[2] = 2 (two ways: 1+1 or 2)
⚠️ Common Mistake 2: Not handling edge cases. What if the input array has only 1 element? Your base cases must cover all scenarios before your recurrence relation kicks in. ⚠️
💡 Remember: Check how many previous states your recurrence relation uses. If you use dp[i-2], you need at least two base cases.
Step 4: Choose Implementation Strategy
With state, recurrence, and base cases defined, you have two fundamental approaches to implement your solution: top-down with memoization or bottom-up with tabulation.
Top-Down (Memoization)
The top-down approach uses recursion with a cache to store computed results. You start from the problem you want to solve and recursively break it down, storing results as you go.
```python
def rob_topdown(nums):
    n = len(nums)
    memo = [-1] * n  # -1 indicates "not yet computed"

    def dp(i):
        # Base cases
        if i < 0:
            return 0
        if i == 0:
            return nums[0]
        # Check if already computed
        if memo[i] != -1:
            return memo[i]
        # Recurrence relation
        # Choice 1: rob house i, add to best from i-2
        # Choice 2: skip house i, take best from i-1
        memo[i] = max(nums[i] + dp(i-2), dp(i-1))
        return memo[i]

    return dp(n - 1)

# Example usage
print(rob_topdown([2, 7, 9, 3, 1]))  # Output: 12 (rob houses 0, 2, 4)
```
Advantages of top-down:
- 🧠 More intuitive - matches how we naturally think about recursion
- 🎯 Only computes subproblems that are actually needed
- 🔧 Easier to write initially - just add memoization to recursive solution
When to use: Complex problems where not all subproblems may be needed, or when the recursive structure is clearer.
Bottom-Up (Tabulation)
The bottom-up approach uses iteration to fill a table from smallest to largest subproblems. You start from base cases and build up to the final answer.
```python
def rob_bottomup(nums):
    n = len(nums)
    if n == 0:
        return 0
    if n == 1:
        return nums[0]
    # Create DP table
    dp = [0] * n
    # Base cases
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])
    # Fill table using recurrence relation
    for i in range(2, n):
        dp[i] = max(nums[i] + dp[i-2], dp[i-1])
    return dp[n-1]

# Example usage
print(rob_bottomup([2, 7, 9, 3, 1]))  # Output: 12
```
Advantages of bottom-up:
- ⚡ Usually faster - no recursion overhead
- 🔒 More predictable memory usage
- 📊 Easier to optimize space (see Step 5)
When to use: Most 1D DP problems on LeetCode, especially when all subproblems must be computed anyway.
💡 Pro Tip: Start with top-down when solving a new problem - it's easier to reason about. Once it works, convert to bottom-up for better performance.
Let's see both approaches for another classic problem: Coin Change.
Given coins of different denominations and a target amount, find the minimum number of coins needed to make that amount. Return -1 if impossible.
```python
def coin_change_bottomup(coins, amount):
    # State: dp[i] = minimum coins needed to make amount i
    dp = [float('inf')] * (amount + 1)
    # Base case: 0 coins needed to make amount 0
    dp[0] = 0
    # Recurrence: for each amount, try each coin
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                # If we use this coin, we need:
                # 1 (this coin) + dp[i-coin] (coins for remaining amount)
                dp[i] = min(dp[i], 1 + dp[i - coin])
    # If dp[amount] is still infinity, it's impossible
    return dp[amount] if dp[amount] != float('inf') else -1

# Example usage
print(coin_change_bottomup([1, 2, 5], 11))  # Output: 3 (5+5+1)
print(coin_change_bottomup([2], 3))         # Output: -1 (impossible)
```
Notice how the structure remains consistent: define state, establish base case, apply recurrence relation iteratively.
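Per the earlier tip, the same problem can also be written top-down first. A memoized sketch, using functools.lru_cache as the cache (note the recursion depth grows with the amount, so very large targets favor the bottom-up version):

```python
from functools import lru_cache

def coin_change_topdown(coins, amount):
    coins = tuple(coins)  # freeze so the closure can't be mutated mid-run

    @lru_cache(maxsize=None)
    def dp(remaining):
        # Fewest coins summing to `remaining`, or infinity if impossible
        if remaining == 0:
            return 0
        best = float('inf')
        for coin in coins:
            if coin <= remaining:
                best = min(best, 1 + dp(remaining - coin))
        return best

    result = dp(amount)
    return result if result != float('inf') else -1

print(coin_change_topdown([1, 2, 5], 11))  # 3 (5+5+1)
print(coin_change_topdown([2], 3))         # -1 (impossible)
```

State, base case, and recurrence are identical to the bottom-up version; only the order of computation changes.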
Step 5: Apply Space Optimization
Many 1D DP solutions can be optimized from O(n) space to O(1) by recognizing that we don't need to store all previous values - only the most recent few.
🎯 Key Principle: If your recurrence relation only uses the last k values (like dp[i-1] and dp[i-2]), you only need to keep k variables in memory.
Let's optimize the House Robber solution:
```python
def rob_optimized(nums):
    n = len(nums)
    if n == 0:
        return 0
    if n == 1:
        return nums[0]
    # Instead of a dp array, use two variables
    # prev2 represents dp[i-2]
    # prev1 represents dp[i-1]
    prev2 = nums[0]
    prev1 = max(nums[0], nums[1])
    for i in range(2, n):
        # Calculate current using the recurrence relation
        current = max(nums[i] + prev2, prev1)
        # Shift variables for next iteration
        prev2 = prev1
        prev1 = current
    return prev1

# Example usage
print(rob_optimized([2, 7, 9, 3, 1]))  # Output: 12
```
The logic is identical, but we've reduced space from O(n) to O(1)!
Optimization Pattern Recognition:
```
Recurrence uses:              Keep in memory:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
dp[i-1]                     →  1 variable
dp[i-1], dp[i-2]            →  2 variables
dp[i-1], dp[i-2], dp[i-3]   →  3 variables
```
💡 Mental Model: Think of it as a sliding window over your DP array. You only keep what's visible in the window.
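For the three-variable row of that table, the Tribonacci sequence (T(n) = T(n-1) + T(n-2) + T(n-3)) is the canonical case. A sketch keeping exactly the three values visible in the window:

```python
def tribonacci(n):
    # T(0)=0, T(1)=1, T(2)=1; every later term sums the previous three
    if n == 0:
        return 0
    if n <= 2:
        return 1
    a, b, c = 0, 1, 1  # the sliding window: T(i-3), T(i-2), T(i-1)
    for _ in range(3, n + 1):
        a, b, c = b, c, a + b + c
    return c

print(tribonacci(4))   # 4  (sequence: 0, 1, 1, 2, 4, ...)
print(tribonacci(25))  # 1389537
```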
⚠️ Common Mistake 3: Trying to optimize space before getting the basic solution working. First make it work, then make it optimal! ⚠️
Practical Application: The Complete Framework in Action
Let's apply all five steps to a fresh problem: Decode Ways.
A message containing letters A-Z is encoded as numbers: 'A'=1, 'B'=2, ..., 'Z'=26. Given a string of digits, count how many ways it can be decoded.
Step 1: Define State
- dp[i] = number of ways to decode the substring from index 0 to i (inclusive)
Step 2: Derive Recurrence At position i, we can:
- Decode digit s[i] as a single letter (if s[i] ≠ '0') → contributes dp[i-1] ways
- Decode s[i-1:i+1] as a two-digit letter (if valid: 10-26) → contributes dp[i-2] ways
dp[i] = (dp[i-1] if s[i] valid) + (dp[i-2] if s[i-1:i+1] valid)
Step 3: Base Cases
- dp[0] = 1 if s[0] ≠ '0', else 0
- dp[1] = depends on whether s[0] and s[1] form valid one- and two-digit decodings

(The implementation below shifts these indices by one: it lets dp[i] count the decodings of the first i characters, so dp[0] = 1 stands for the empty prefix.)
Step 4: Implementation (Bottom-up)
```python
def num_decodings(s):
    if not s or s[0] == '0':
        return 0
    n = len(s)
    dp = [0] * (n + 1)
    # Base cases
    dp[0] = 1  # Empty string has one way
    dp[1] = 1  # First character (already checked it's not '0')
    for i in range(2, n + 1):
        # Check single digit decode
        one_digit = int(s[i-1:i])
        if 1 <= one_digit <= 9:
            dp[i] += dp[i-1]
        # Check two digit decode
        two_digits = int(s[i-2:i])
        if 10 <= two_digits <= 26:
            dp[i] += dp[i-2]
    return dp[n]

# Example usage
print(num_decodings("226"))  # Output: 3 ("BZ", "VF", "BBF")
print(num_decodings("06"))   # Output: 0 (invalid)
```
Step 5: Space Optimization
Since we only use dp[i-1] and dp[i-2], we can reduce to two variables:
```python
def num_decodings_optimized(s):
    if not s or s[0] == '0':
        return 0
    n = len(s)
    prev2 = 1  # dp[i-2]
    prev1 = 1  # dp[i-1]
    for i in range(2, n + 1):
        current = 0
        if 1 <= int(s[i-1:i]) <= 9:
            current += prev1
        if 10 <= int(s[i-2:i]) <= 26:
            current += prev2
        prev2 = prev1
        prev1 = current
    return prev1
```
Framework Summary
📋 Quick Reference Card: The 5-Step DP Framework
| Step | 🎯 Focus | ✅ Checklist |
|---|---|---|
| 1️⃣ State | What does dp[i] mean? | Be specific and precise |
| 2️⃣ Recurrence | How to compute dp[i]? | Analyze all choices at position i |
| 3️⃣ Base Cases | What are the anchors? | Cover all scenarios before recurrence |
| 4️⃣ Implementation | Top-down or bottom-up? | Start top-down, convert to bottom-up |
| 5️⃣ Optimization | Can we reduce space? | Count how many previous values used |
🧠 Mnemonic: S-R-B-I-O - "Smart Robots Build Incredible Optimizations"
💡 Real-World Example: Think of DP like planning a road trip. Your state is your current city (position), your recurrence is "which cities can I reach from here?", your base case is your starting point, your implementation is whether you plan forward or backward from destination, and optimization is realizing you only need your current map page, not the entire atlas.
By mastering this five-step framework, you've equipped yourself with a systematic approach that works across the entire spectrum of 1D DP problems. In the next section, we'll explore specific problem patterns and see how this framework adapts to each one.
Essential 1D DP Problem Patterns with Implementations
Now that we understand the theory behind 1D dynamic programming, let's dive into the concrete patterns you'll encounter repeatedly on LeetCode and in real-world applications. Each pattern represents a distinct way of thinking about state transitions and optimal substructure. By mastering these patterns through complete implementations, you'll develop an intuition for recognizing which approach to use when facing a new problem.
Pattern 1: Linear Sequence Problems
Linear sequence problems are the gateway to 1D DP mastery. These problems ask you to make decisions at each step in a sequence, where each decision depends only on previous states. The state transition is straightforward: to solve position i, we look back at one or more previous positions.
Climbing Stairs: The Foundation Pattern
The classic Climbing Stairs problem asks: "You're climbing a staircase with n steps. You can climb either 1 or 2 steps at a time. How many distinct ways can you reach the top?"
🎯 Key Principle: At each step, we can only arrive from the previous step or two steps back. This gives us our recurrence relation: ways[i] = ways[i-1] + ways[i-2].
Let's visualize this for n = 5:
```
Step:   0    1    2    3    4    5
Ways:   1 -> 1 -> 2 -> 3 -> 5 -> 8
        ↑    ↑    ↑    ↑    ↑    ↑
      base  +----+----+----+----+
        Each step sums the previous two
```
Here's the memoization approach (top-down):
```python
def climbStairs(n: int) -> int:
    memo = {}

    def climb(step):
        # Base cases: 0 steps = 1 way, 1 step = 1 way
        if step <= 1:
            return 1
        # Check if we've already computed this
        if step in memo:
            return memo[step]
        # Recurrence: ways from (step-1) + ways from (step-2)
        memo[step] = climb(step - 1) + climb(step - 2)
        return memo[step]

    return climb(n)
```
And the tabulation approach (bottom-up):
```python
def climbStairs(n: int) -> int:
    if n <= 1:
        return 1
    # dp[i] represents ways to reach step i
    dp = [0] * (n + 1)
    dp[0] = 1  # One way to stay at ground
    dp[1] = 1  # One way to reach first step
    # Build up from smaller subproblems
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```
💡 Pro Tip: Notice how tabulation often uses less memory overhead and eliminates recursion stack concerns. For problems with large n, tabulation is usually preferred.
Min Cost Climbing Stairs: Adding Optimization
This variant adds costs to each step, and you want to find the minimum cost to reach the top. You can start from step 0 or 1.
🎯 Key Principle: Now our state represents the minimum cost to reach position i, not just the number of ways. Our recurrence becomes: dp[i] = cost[i] + min(dp[i-1], dp[i-2]).
```python
def minCostClimbingStairs(cost: list[int]) -> int:
    n = len(cost)
    if n <= 1:
        return 0
    # dp[i] = minimum cost to reach step i
    dp = [0] * (n + 1)
    # Can start from step 0 or 1 for free
    dp[0] = 0
    dp[1] = 0
    for i in range(2, n + 1):
        # Take the minimum of coming from the previous two steps
        # Add the cost of the step we're coming FROM
        dp[i] = min(dp[i - 1] + cost[i - 1],
                    dp[i - 2] + cost[i - 2])
    return dp[n]
```
⚠️ Common Mistake: Confusing when to add the cost. We add the cost of the step we're leaving from, not arriving at. Think of it as "paying to use" that step as a launching point.
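Since this recurrence also reads only the two most recent entries, the same two-variable trick applies here. A constant-space sketch (the snake_case function name is ours):

```python
def min_cost_climbing_stairs(cost):
    # prev2/prev1 = min cost to reach the two most recent positions
    prev2, prev1 = 0, 0  # starting on step 0 or step 1 is free
    for i in range(2, len(cost) + 1):
        # Pay for the step we launch FROM, not the one we land on
        current = min(prev1 + cost[i - 1], prev2 + cost[i - 2])
        prev2, prev1 = prev1, current
    return prev1

print(min_cost_climbing_stairs([10, 15, 20]))  # 15: start on step 1, pay 15, jump to top
```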
Pattern 2: Decision-Making with Constraints
The House Robber pattern introduces constraint-based decision making. At each position, you must decide whether to include or exclude an element, but with restrictions on which elements can be chosen together.
House Robber I: Non-Adjacent Selection
You're a robber planning to rob houses along a street. Each house has a certain amount of money. The constraint: you cannot rob two adjacent houses (alarms will trigger). What's the maximum money you can rob?
🎯 Key Principle: At each house i, we face a choice:
- Rob this house: take nums[i] + dp[i-2] (we can't use the previous house)
- Skip this house: take dp[i-1] (the maximum up to the previous house)
Visualization for nums = [2, 7, 9, 3, 1]:
House:  0    1    2     3     4
Value:  2    7    9     3     1
dp:     2 -> 7 -> 11 -> 11 -> 12

At index 2: max(9+dp[0], dp[1]) = max(9+2, 7)   = 11
At index 3: max(3+dp[1], dp[2]) = max(3+7, 11)  = 11
At index 4: max(1+dp[2], dp[3]) = max(1+11, 11) = 12
Tabulation implementation:
def rob(nums: list[int]) -> int:
    if not nums:
        return 0
    if len(nums) == 1:
        return nums[0]
    n = len(nums)
    dp = [0] * n
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])  # Best of first two
    for i in range(2, n):
        # Rob current + best from two back, OR skip current
        dp[i] = max(nums[i] + dp[i - 2], dp[i - 1])
    return dp[n - 1]
💡 Space Optimization: Since we only ever look back 2 positions, we can reduce space to O(1):
def rob(nums: list[int]) -> int:
    if not nums:
        return 0
    prev2 = 0  # dp[i-2]
    prev1 = 0  # dp[i-1]
    for num in nums:
        current = max(num + prev2, prev1)
        prev2 = prev1
        prev1 = current
    return prev1
House Robber II: Handling Circular Arrays
Now the houses are arranged in a circle, meaning the first and last houses are adjacent. You cannot rob both.
🧠 Mental Model: Break the circle into two linear problems:
- Problem A: rob houses [0, n-2] (exclude the last house)
- Problem B: rob houses [1, n-1] (exclude the first house)
The answer is the maximum of these two scenarios.
def robCircular(nums: list[int]) -> int:
    if len(nums) == 1:
        return nums[0]

    def rob_linear(houses):
        prev2, prev1 = 0, 0
        for house in houses:
            current = max(house + prev2, prev1)
            prev2, prev1 = prev1, current
        return prev1

    # Max of: excluding last house OR excluding first house
    return max(rob_linear(nums[:-1]), rob_linear(nums[1:]))
⚠️ Common Mistake: Trying to handle the circular constraint in a single pass. The two-pass approach is cleaner and less error-prone.
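A compact sketch of the two-pass idea, combining the O(1)-space linear robber with the circle split (names are illustrative):

```python
def rob_circular(nums):
    if len(nums) == 1:
        return nums[0]

    def rob_linear(houses):
        # Standard House Robber I with rolling variables
        prev2 = prev1 = 0
        for h in houses:
            prev2, prev1 = prev1, max(h + prev2, prev1)
        return prev1

    # Either the last house is excluded, or the first one is
    return max(rob_linear(nums[:-1]), rob_linear(nums[1:]))
```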
Pattern 3: Optimization Problems
Optimization problems ask for either the minimum/maximum value or the number of ways to achieve something. These often involve unbounded choices where you can use elements multiple times.
Coin Change: Minimum Coins
Given coins of different denominations and a total amount, find the fewest number of coins needed to make that amount.
🎯 Key Principle: For each amount, try every coin that fits, and take the minimum. The recurrence: dp[amount] = min(dp[amount - coin] + 1) for all valid coins.
Visualization for coins = [1, 2, 5], amount = 11:
Amount:  0  1  2  3  4  5  6  7  8  9  10  11
dp:      0  1  1  2  2  1  2  2  3  3  2   3
                                           ↑
For 11: try coins [1, 2, 5]
- Use coin 1: dp[11-1] + 1 = dp[10] + 1 = 2 + 1 = 3
- Use coin 2: dp[11-2] + 1 = dp[9]  + 1 = 3 + 1 = 4
- Use coin 5: dp[11-5] + 1 = dp[6]  + 1 = 2 + 1 = 3
Minimum = 3 (coins 5 + 5 + 1)
Tabulation implementation:
def coinChange(coins: list[int], amount: int) -> int:
    # dp[i] = minimum coins needed for amount i
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0  # Zero coins needed for amount 0
    # For each amount from 1 to target
    for i in range(1, amount + 1):
        # Try each coin
        for coin in coins:
            if coin <= i:
                # Take minimum of current vs using this coin
                dp[i] = min(dp[i], dp[i - coin] + 1)
    # If still infinity, no solution exists
    return dp[amount] if dp[amount] != float('inf') else -1
💡 Pro Tip: Initialize with float('inf') to distinguish between "computed" and "impossible" states. This makes the min operation work correctly.
Coin Change 2: Counting Ways
Same setup, but now count the number of different combinations that make up the amount.
🎯 Key Principle: This is a combination problem, not permutation. We must avoid counting [1,2] and [2,1] as different. The trick: iterate through coins in the outer loop.
def change(amount: int, coins: list[int]) -> int:
    # dp[i] = number of ways to make amount i
    dp = [0] * (amount + 1)
    dp[0] = 1  # One way to make 0: use no coins
    # Iterate coins first (outer loop) to avoid duplicates
    for coin in coins:
        # For each coin, update all amounts it can contribute to
        for i in range(coin, amount + 1):
            dp[i] += dp[i - coin]
    return dp[amount]
⚠️ Critical Difference: In Coin Change 1 (minimum coins), the loop order doesn't matter. In Coin Change 2 (number of ways), coins MUST be the outer loop to generate combinations, not permutations.
❌ Wrong thinking: "I'll just count all ways and divide by duplicates." ✅ Correct thinking: "I'll structure my loops so duplicates never form in the first place."
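One way to internalize this is to run both loop orders side by side. A small experiment (function names are ours): the coins-outer version counts combinations, while the amount-outer version counts ordered sequences — which is its own valid problem, LeetCode's Combination Sum IV.

```python
def count_combinations(amount, coins):
    # Coins in the OUTER loop: each coin's contributions are finalized
    # before the next coin is considered, so [1,2] and [2,1] collapse
    dp = [0] * (amount + 1)
    dp[0] = 1
    for coin in coins:
        for i in range(coin, amount + 1):
            dp[i] += dp[i - coin]
    return dp[amount]

def count_permutations(amount, coins):
    # Amounts in the OUTER loop: every ordering is counted separately
    dp = [0] * (amount + 1)
    dp[0] = 1
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] += dp[i - coin]
    return dp[amount]
```

For amount 3 with coins [1, 2], the combinations are {1+1+1, 1+2} (two), but the ordered sequences are 1+1+1, 1+2, 2+1 (three).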
Pattern 4: String-Based 1D DP
String problems often involve deciding how to partition or decode a string, with each decision depending on previous decisions.
Decode Ways: Counting Valid Partitions
A message containing letters A-Z is encoded as '1'-'26'. Given a digit string, count how many ways it can be decoded. For example, "226" can be decoded as "BZ" (2,26), "VF" (22,6), or "BBF" (2,2,6).
🎯 Key Principle: At each position, we can either:
- Decode a single digit (if valid: 1-9)
- Decode two digits (if valid: 10-26)
The recurrence: dp[i] = dp[i-1] (if single valid) + dp[i-2] (if double valid)
Visualization for s = "226":
String: "" "2" "22" "226"
Index: 0 1 2 3
dp: 1 1 2 3
↑
At position 3 ('6'):
- Single digit '6': valid (1-9) → add dp[2] = 2
- Double digit '26': valid (10-26) → add dp[1] = 1
Total: dp[3] = 2 + 1 = 3
Implementation with careful boundary handling:
def numDecodings(s: str) -> int:
    if not s or s[0] == '0':
        return 0
    n = len(s)
    # dp[i] = number of ways to decode s[0:i]
    dp = [0] * (n + 1)
    dp[0] = 1  # Empty string has one way
    dp[1] = 1  # First character (already checked not '0')
    for i in range(2, n + 1):
        # Check single digit: s[i-1]
        one_digit = int(s[i - 1])
        if 1 <= one_digit <= 9:
            dp[i] += dp[i - 1]
        # Check two digits: s[i-2:i]
        two_digits = int(s[i - 2:i])
        if 10 <= two_digits <= 26:
            dp[i] += dp[i - 2]
    return dp[n]
⚠️ Common Mistake: Forgetting that '0' is only valid in two-digit codes (10, 20). A standalone '0' or '00' makes decoding impossible.
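A few assertions make the zero-handling concrete (the function is restated in snake_case so the snippet is self-contained):

```python
def num_decodings(s):
    if not s or s[0] == '0':
        return 0
    n = len(s)
    dp = [0] * (n + 1)
    dp[0] = dp[1] = 1
    for i in range(2, n + 1):
        if s[i - 1] != '0':              # single digit 1-9
            dp[i] += dp[i - 1]
        if 10 <= int(s[i - 2:i]) <= 26:  # two digits 10-26
            dp[i] += dp[i - 2]
    return dp[n]
```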
Word Break: Dictionary-Based Partitioning
Given a string and a dictionary of words, determine if the string can be segmented into space-separated dictionary words.
Example: s = "leetcode", wordDict = ["leet", "code"] → true
🎯 Key Principle: At each position i, check if any word ending at i is in the dictionary AND the remaining prefix is breakable. The recurrence: dp[i] = true if any j where dp[j] is true and s[j:i] is in dictionary.
def wordBreak(s: str, wordDict: list[str]) -> bool:
    word_set = set(wordDict)  # O(1) lookup
    n = len(s)
    # dp[i] = True if s[0:i] can be segmented
    dp = [False] * (n + 1)
    dp[0] = True  # Empty string is valid
    for i in range(1, n + 1):
        # Check all possible last words ending at position i
        for j in range(i):
            # If prefix s[0:j] is valid AND s[j:i] is a word
            if dp[j] and s[j:i] in word_set:
                dp[i] = True
                break  # Found one way, that's enough
    return dp[n]
💡 Optimization: Instead of checking all j values, you can limit the range based on the longest word in the dictionary.
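A sketch of that optimization, assuming we precompute the longest dictionary word and only scan back that far (`word_break_pruned` is an illustrative name):

```python
def word_break_pruned(s, word_dict):
    word_set = set(word_dict)
    # No dictionary word is longer than max_len, so a word ending at i
    # can start no earlier than i - max_len
    max_len = max(map(len, word_set), default=0)
    n = len(s)
    dp = [False] * (n + 1)
    dp[0] = True
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            if dp[j] and s[j:i] in word_set:
                dp[i] = True
                break
    return dp[n]
```

This turns the inner loop from O(n) into O(L), where L is the longest word length, for O(n·L) total instead of O(n²).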
Comparing Memoization vs Tabulation
Let's consolidate the differences between the two main DP implementation approaches:
📋 Quick Reference Card:
| Aspect | 🔄 Memoization | 📊 Tabulation |
|---|---|---|
| 🎯 Direction | Top-down (recursive) | Bottom-up (iterative) |
| 💾 When to use | Natural recursion, not all states needed | All states needed, avoid recursion |
| 🧠 Intuition | Easier to derive from recursion | Requires understanding build order |
| ⚡ Performance | Recursion overhead | Usually faster |
| 🔒 Space | Call stack + cache | Just DP array |
| 🐛 Debugging | Stack traces helpful | Easier to trace values |
🧠 Mnemonic: "Memoization uses Memory to cache, Tabulation builds a Table from ground up."
💡 Remember: For interviews, being able to code both approaches shows depth of understanding. Start with the approach that feels more natural, then optimize.
Pattern Recognition Summary
When you encounter a new 1D DP problem, ask yourself:
🔧 Diagnostic Questions:
- Am I counting ways or optimizing a value? → Determines if you're summing paths or taking min/max
- Can I use elements multiple times? → Unbounded (Coin Change) vs bounded (House Robber)
- Are there adjacency constraints? → Affects which previous states you can reference
- Is the input linear or circular? → May need multiple passes
- Am I partitioning/grouping elements? → Watch for combination vs permutation issues
By mastering these six patterns—linear sequences, decision-making with constraints, minimum/maximum optimization, counting ways, string decoding, and string partitioning—you'll have the tools to tackle the vast majority of 1D DP problems on LeetCode.
Common Pitfalls and Debugging Strategies
Even experienced developers stumble when solving 1D DP problems. The subtle nature of these bugs can turn a seemingly correct solution into hours of frustration. Let's systematically explore the most common pitfalls and equip you with debugging strategies that will save you time and mental energy.
Off-by-One Errors: The Silent Killer
Off-by-one errors are the most frequent bugs in dynamic programming, and they're particularly insidious because your code might pass some test cases while mysteriously failing others. These errors typically manifest in three places: array indexing, loop boundaries, and the relationship between problem size and DP array size.
Consider the classic Climbing Stairs problem where you can take 1 or 2 steps at a time. Here's where things go wrong:
## ⚠️ Common Mistake 1: Wrong array size ⚠️
def climb_stairs_wrong(n):
    if n <= 2:
        return n
    dp = [0] * n  # Bug: Should be n+1!
    dp[1] = 1
    dp[2] = 2
    for i in range(3, n):  # Bug: Should be n+1!
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n-1]  # Bug: Should be dp[n]!
## ✅ Correct version
def climb_stairs_correct(n):
    if n <= 2:
        return n
    dp = [0] * (n + 1)  # Array of size n+1
    dp[1] = 1
    dp[2] = 2
    for i in range(3, n + 1):  # Loop runs up to and including n
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]  # Return element at index n
🎯 Key Principle: When your problem involves computing the answer for position n, you typically need a DP array of size n+1 to use 1-indexed positioning, which aligns naturally with problem statements.
💡 Mental Model: Draw a simple diagram for small inputs:
For n=4 (4 stairs):
Index:   0      1     2     3     4
Value:   0      1     2     3     5
         ↑      ↑     ↑           ↑
      unused  base  base        answer
Notice how index 0 is often unused or serves as a padding element, while indices 1 through n hold the actual solutions. This is why we need n+1 space.
⚠️ Common Mistake: Confusing the problem size with array indices. If the problem asks "how many ways to reach step n?", that's asking about position n, not index n-1.
💡 Pro Tip: Always test your solution with n=1 and n=2 manually. Trace through your array indices on paper. If you need to access dp[i-2] in your loop, make sure your loop starts at index 2 or higher, and that index exists in your array.
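Beyond tracing on paper, you can let a brute-force reference do the checking. A sketch of the idea: compare the tabulated answer against an obviously correct (if exponential) recursion on small inputs, and the first mismatching n pinpoints the bug.

```python
def climb_dp(n):
    # The tabulated solution under test
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

def climb_brute(n):
    # Exponential but obviously correct reference
    if n <= 2:
        return n
    return climb_brute(n - 1) + climb_brute(n - 2)

# Any mismatch on small inputs reveals the first broken index
mismatches = [n for n in range(1, 15) if climb_dp(n) != climb_brute(n)]
```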
Base Case Initialization: Getting the Foundation Right
Your dynamic programming solution is only as good as its foundation. Base case initialization errors cascade through your entire solution, producing answers that might look plausible but are completely wrong.
Let's examine the House Robber problem (can't rob adjacent houses) to see how base case errors propagate:
## ⚠️ Common Mistake 2: Wrong base case ⚠️
def rob_wrong(nums):
    if not nums:
        return 0
    if len(nums) == 1:
        return nums[0]
    dp = [0] * len(nums)
    dp[0] = nums[0]
    dp[1] = nums[1]  # Bug: Should be max(nums[0], nums[1])!
    for i in range(2, len(nums)):
        dp[i] = max(dp[i-1], dp[i-2] + nums[i])
    return dp[-1]
## Test case that exposes the bug:
## rob_wrong([2, 1, 1, 2]) returns 3, but correct answer is 4
## ✅ Correct version
def rob_correct(nums):
    if not nums:
        return 0
    if len(nums) == 1:
        return nums[0]
    dp = [0] * len(nums)
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])  # Must choose the better option!
    for i in range(2, len(nums)):
        dp[i] = max(dp[i-1], dp[i-2] + nums[i])
    return dp[-1]
❌ Wrong thinking: "dp[1] represents robbing house 1, so it should be nums[1]"
✅ Correct thinking: "dp[i] represents the maximum money we can rob considering houses 0 through i, so dp[1] must be the best choice between robbing house 0 OR house 1"
🧠 Mnemonic: "Base cases aren't just values—they're optimal subproblem solutions." Every DP entry must represent the optimal answer to its subproblem, including the very first entries.
💡 Debugging Strategy: When you get a wrong answer, manually compute the first 3-4 values of your DP array for a small test case. Compare your hand-calculated values with what your code produces. The divergence point reveals your bug.
Test: nums = [2, 1, 1, 2]
Manual calculation:
dp[0] = 2 (rob house 0)
dp[1] = max(2, 1) = 2 (better to rob house 0)
dp[2] = max(2, 2+1) = 3 (rob houses 0 and 2)
dp[3] = max(3, 2+2) = 4 (rob houses 0 and 3)
If your code shows dp[1] = 1, you've found your bug!
Edge Cases: The Forgotten Scenarios
Edge cases are where many solutions crumble. These are the boundary conditions that don't fit the general pattern of your algorithm. Let's categorize them:
1. Empty Input
def max_subarray_sum(nums):
    # ⚠️ What if nums is empty? ⚠️
    if not nums:  # Always check!
        return 0  # or raise an exception, or return -infinity
    dp = [0] * len(nums)
    dp[0] = nums[0]
    # ... rest of solution
2. Single Element
def longest_increasing_subsequence(nums):
    if not nums:
        return 0
    if len(nums) == 1:
        return 1  # A single element is always an increasing sequence
    # ... rest of solution
3. Maximum Constraints
Consider when n = 10,000 or when array values are at their maximum (like 10^9). Will your algorithm:
- Overflow integer limits? (Less common in Python, but watch for this in C++/Java)
- Exceed time limits?
- Cause stack overflow with recursion?
## Example: Coin change with large amount
def coin_change(coins, amount):
    # Initialize with amount + 1, an integer "impossible" sentinel:
    # no valid answer can ever use more than `amount` coins.
    # (float('inf') also works in Python, but an int sentinel ports
    # cleanly to languages where overflow is a real concern.)
    dp = [amount + 1] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if i >= coin:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    # Handle case where amount cannot be made
    return dp[amount] if dp[amount] != amount + 1 else -1
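Quick sanity checks for the sentinel pattern (restated here so the snippet runs standalone): any amount that cannot be formed comes back as -1 rather than a garbage minimum.

```python
def coin_change(coins, amount):
    INF = amount + 1  # integer sentinel: unreachable coin count
    dp = [INF] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != INF else -1
```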
💡 Pro Tip: Create a mental checklist for edge cases:
- 🔒 Empty input (n=0, empty array)
- 🔒 Single element (n=1)
- 🔒 All elements identical
- 🔒 Maximum constraints (n = 10^4, values = 10^9)
- 🔒 Negative numbers (if applicable)
- 🔒 Zero values
Confusing Problem Types: Counting vs. Optimization
One of the most subtle pitfalls is mixing up counting problems ("how many ways?") with optimization problems ("what's the minimum/maximum?"). These require fundamentally different DP transitions.
Counting Problems ("Number of Ways")
In counting problems, you add possibilities:
## Problem: Count ways to climb n stairs (1 or 2 steps at a time)
def count_ways(n):
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1  # One way to reach step 1
    dp[2] = 2  # Two ways to reach step 2
    for i in range(3, n + 1):
        # ADD the number of ways from previous positions
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
Optimization Problems ("Min/Max Value")
In optimization problems, you choose the best option:
## Problem: Minimum cost to climb n stairs (costs given for each step)
def min_cost_climbing(cost):
    n = len(cost)
    if n <= 1:
        return 0  # With 0 or 1 steps, the top is free
    if n == 2:
        return min(cost)
    dp = [0] * (n + 1)
    dp[0] = 0  # Can start at step 0
    dp[1] = 0  # Can start at step 1
    for i in range(2, n + 1):
        # CHOOSE minimum cost from previous positions
        dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2])
    return dp[n]
⚠️ Common Mistake: Using min() in a counting problem or using + in an optimization problem.
❌ Wrong: dp[i] = min(dp[i-1], dp[i-2])  # This doesn't count ways!
✅ Correct: dp[i] = dp[i-1] + dp[i-2]  # Sum of ways
❌ Wrong: dp[i] = dp[i-1] + dp[i-2]  # This doesn't optimize!
✅ Correct: dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2])
🎯 Key Principle: The recurrence relation must match the problem type:
- Counting: sum all valid paths → use +
- Optimization: choose the best path → use min() or max()
📋 Quick Reference Card:
| 🎯 Problem Type | 🔧 Transition Formula | 📝 Example |
|---|---|---|
| 🔢 Count ways | dp[i] = dp[i-1] + dp[i-2] + ... | Climbing stairs, decode ways |
| ⬇️ Minimize cost | dp[i] = min(dp[i-1] + cost1, dp[i-2] + cost2, ...) | Min cost climbing, coin change |
| ⬆️ Maximize value | dp[i] = max(dp[i-1] + value1, dp[i-2] + value2, ...) | House robber, max subarray |
| ✅ True/False | dp[i] = dp[i-1] OR dp[i-2] OR ... | Word break, partition |
Performance Issues: Memoization vs. Tabulation
The choice between memoization (top-down) and tabulation (bottom-up) isn't just stylistic—it has real performance implications.
Stack Overflow with Deep Recursion
Memoization uses recursion, which consumes stack space. For large inputs, this can crash:
## ⚠️ May cause stack overflow for large n ⚠️
def fib_memo(n, cache={}):
    # Note: the mutable default dict persists across calls — handy as a
    # cache here, but a classic Python gotcha in general
    if n <= 1:
        return n
    if n in cache:
        return cache[n]
    cache[n] = fib_memo(n-1, cache) + fib_memo(n-2, cache)
    return cache[n]

## fib_memo(10000) → RecursionError!
💡 Mental Model: Recursion is like stacking plates. Each recursive call adds a plate to the stack. Eventually, the stack topples (stack overflow). Iteration is like processing plates from a queue—no stacking, no overflow.
When to use memoization:
- 🧠 Problem naturally thinks "top-down" (compute answer, break into subproblems)
- 🧠 Not all subproblems are needed (sparse DP)
- 🧠 Input size is small (n < 1000 is usually safe)
- 🧠 You want to write code quickly during interviews
When to switch to tabulation:
- 🔧 Getting stack overflow errors
- 🔧 Input size is large (n > 1000)
- 🔧 All subproblems must be solved anyway
- 🔧 Need guaranteed O(n) space without recursion overhead
Converting from Memoization to Tabulation
Here's a systematic approach:
## Step 1: Start with memoization
def climb_stairs_memo(n, cache={}):
    if n <= 2:
        return n
    if n in cache:
        return cache[n]
    cache[n] = climb_stairs_memo(n-1, cache) + climb_stairs_memo(n-2, cache)
    return cache[n]
## Step 2: Convert to tabulation
## - Replace recursion with iteration
## - Build solutions from base cases upward
## - Access array instead of recursive calls
def climb_stairs_tab(n):
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1  # Base case from recursion
    dp[2] = 2  # Base case from recursion
    for i in range(3, n + 1):
        # This mirrors the recursive formula:
        # was: cache[n] = climb_stairs_memo(n-1) + climb_stairs_memo(n-2)
        # now: dp[i] = dp[i-1] + dp[i-2]
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
🤔 Did you know? Python's default recursion limit is around 1000. You can increase it with sys.setrecursionlimit(), but this doesn't actually give you more stack space—it just delays the crash! Better to use tabulation.
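A quick demonstration with the standard iterative Fibonacci: tabulation sails past inputs that would crash the recursive version.

```python
def fib_tab(n):
    # Iterative: no recursion, so no stack-depth limit to worry about
    prev2, prev1 = 0, 1
    for _ in range(n):
        prev2, prev1 = prev1, prev2 + prev1
    return prev2

# Well past the default recursion limit, yet no RecursionError
big = fib_tab(10_000)
```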
Systematic Debugging Process
When your DP solution fails, follow this debugging framework:
Step 1: Verify Your DP State Definition
Write down explicitly:
"dp[i] represents: ___________"
Example:
"dp[i] represents the maximum money we can rob
considering houses from index 0 to i (inclusive)"
If you can't clearly articulate what dp[i] means, your solution is built on sand.
Step 2: Test Base Cases Manually
Create a trace table for the smallest possible inputs:
Input: [2, 7, 9, 3, 1]
Manual trace:
i=0: dp[0] = 2 (rob house 0)
i=1: dp[1] = max(2, 7) = 7 (rob house 1)
i=2: dp[2] = max(7, 2+9) = 11 (rob houses 0 and 2)
i=3: dp[3] = max(11, 7+3) = 11 (skip house 3)
i=4: dp[4] = max(11, 11+1) = 12 (rob houses 0, 2, and 4)
Run your code and compare: does it produce these exact values?
Step 3: Check Array Bounds
Insert debug prints to verify you're not accessing invalid indices:
def debug_dp(nums):
    n = len(nums)
    dp = [0] * (n + 1)  # +1 to avoid off-by-one
    print(f"Array size: {len(dp)}, Indices: 0 to {len(dp)-1}")
    for i in range(n):
        print(f"Accessing: dp[{i}], dp[{i-1}], dp[{i-2}]")
        # If this prints a negative index, you found your bug!
        if i - 2 >= 0:
            dp[i] = dp[i-1] + dp[i-2]
Step 4: Validate the Recurrence Relation
Pick a middle element and verify the transition:
For House Robber at i=3:
Manual: dp[3] = max(dp[2], dp[1] + nums[3])
= max(11, 7 + 3) = 11 ✓
Code produces: dp[3] = ?
If different, check:
- Are you using the right indices (i-1, i-2)?
- Are you applying the right operation (max, min, +)?
- Did you add the right value (nums[i], cost[i])?
💡 Pro Tip: Use a debugger or print statements liberally. There's no shame in adding:
for i in range(2, n):
    rob_current = dp[i-2] + nums[i]
    skip_current = dp[i-1]
    dp[i] = max(rob_current, skip_current)
    print(f"i={i}: rob={rob_current}, skip={skip_current}, chosen={dp[i]}")
Step 5: Test Edge Cases Systematically
Create a test suite:
def test_house_robber():
    assert rob([]) == 0                # Empty
    assert rob([5]) == 5               # Single element
    assert rob([1, 1]) == 1            # Two elements
    assert rob([2, 1, 1, 2]) == 4      # All small values
    assert rob([1, 2, 3, 1]) == 4      # Increasing
    assert rob([2, 7, 9, 3, 1]) == 12  # Mixed
    assert rob([1] * 10000) == 5000    # Large input
    print("All tests passed!")
Common Debugging Red Flags
⚠️ Red Flag 1: Your answer is close but slightly off (e.g., off by 1) → Likely cause: Off-by-one error in indexing or loop bounds
⚠️ Red Flag 2: Your answer is wildly wrong or negative → Likely cause: Base case initialization error
⚠️ Red Flag 3: Works for small inputs but crashes on large inputs → Likely cause: Stack overflow from recursion
⚠️ Red Flag 4: Wrong answer only on specific test cases → Likely cause: Missing edge case handling
⚠️ Red Flag 5: Timeout on large inputs → Likely causes: Forgot memoization, or using recursion when tabulation is needed
Final Debugging Wisdom
🧠 Mnemonic for debugging: BASED
- Base cases: Are they correct?
- Array bounds: Are all indices valid?
- State definition: Is dp[i] well-defined?
- Edge cases: Did you test empty, single, max?
- Definition match: Does your transition match your problem type?
💡 Remember: The best debugging strategy is prevention. Spend 30 seconds before coding to:
- Write down your DP state definition
- Write down your recurrence relation
- List the edge cases you'll need to handle
This upfront clarity prevents 90% of bugs. The remaining 10% you'll catch with systematic testing and the techniques we've covered.
By mastering these debugging strategies, you transform from someone who struggles with DP problems to someone who confidently identifies and fixes issues in minutes rather than hours. The key is systematic thinking—treating debugging not as random trial-and-error, but as a methodical process of hypothesis testing and verification.
Practice Strategy and Key Takeaways
Congratulations! You've journeyed through the fundamental concepts, patterns, and pitfalls of 1D Dynamic Programming. Now it's time to consolidate everything you've learned into a structured practice strategy that will transform you from understanding DP concepts to mastering them through deliberate practice. This final section provides you with actionable frameworks, templates, and progression paths to ensure your continued growth.
Curated Problem Progression: Your Practice Roadmap
The key to mastering 1D DP isn't solving hundreds of random problems—it's deliberate practice through a carefully sequenced progression. Here's your roadmap from foundational to advanced problems:
🎯 Foundation Level (Week 1-2)
Start with these problems to build pattern recognition:
🔧 Linear Sequence Problems:
- Climbing Stairs (LC 70) - The "Hello World" of DP; perfect for understanding state transitions
- Min Cost Climbing Stairs (LC 746) - Adds decision-making to the climbing pattern
- House Robber (LC 198) - Introduces the concept of non-adjacent constraints
- Fibonacci Number (LC 509) - Classic recursion to DP conversion
🔧 Simple Optimization Problems:
- Maximum Subarray (LC 53) - Kadane's algorithm; local vs. global optimization
- Best Time to Buy and Sell Stock (LC 121) - Single transaction optimization
- Divisor Game (LC 1025) - Game theory meets DP
💡 Pro Tip: Solve each foundation problem three times: first with brute force recursion, second with memoization (top-down DP), and third with tabulation (bottom-up DP). This triple-pass approach deeply ingrains the transformation process.
🎯 Intermediate Level (Week 3-4)
Once you're comfortable with basic patterns, tackle these:
🔧 Subsequence and String Problems:
- Longest Increasing Subsequence (LC 300) - Classic DP with O(n²) and O(n log n) solutions
- Word Break (LC 139) - Dictionary-based DP with string manipulation
- Decode Ways (LC 91) - Multiple decision points per position
- Palindromic Substrings (LC 647) - Expand-around-center meets DP
🔧 State Machine Problems:
- Best Time to Buy and Sell Stock with Cooldown (LC 309) - Multiple states per position
- House Robber II (LC 213) - Circular array constraint handling
- Delete and Earn (LC 740) - Transform to House Robber pattern
🤔 Did you know? The Longest Increasing Subsequence problem has been studied for over 50 years and has applications in DNA sequencing, file comparison algorithms, and even analyzing stock market trends!
🎯 Advanced Level (Week 5+)
Challenge yourself with these complex scenarios:
🔧 Multi-Constraint Problems:
- Longest Palindromic Subsequence (LC 516) - Bridging 1D to 2D thinking
- Partition Equal Subset Sum (LC 416) - 0/1 knapsack variant
- Coin Change (LC 322) - Unbounded knapsack with minimization
- Perfect Squares (LC 279) - Mathematical DP optimization
🔧 Advanced Optimization:
- Jump Game II (LC 45) - Greedy-DP hybrid
- Best Time to Buy and Sell Stock III (LC 123) - Multiple transactions with limits
- Arithmetic Slices (LC 413) - Counting with DP
Template Code Snippets: Your Starting Points
Having templates eliminates the cognitive load of starting from scratch. Here are battle-tested templates for both approaches:
Top-Down (Memoization) Template
class Solution:
    def solveProblem(self, arr):
        n = len(arr)
        memo = {}  # or [-1] * n for array-based memoization

        def dp(i, state=None):
            """
            Returns the optimal answer for the subproblem starting at index i.
            state: optional parameter for problems with multiple states
            """
            # Base case: smallest subproblem
            if i >= n:  # or i < 0, depending on direction
                return base_case_value
            # Check memo to avoid recomputation
            key = (i, state) if state is not None else i
            if key in memo:
                return memo[key]
            # Recurrence relation: try all possible choices
            option1 = dp(i + 1, new_state) + cost1  # Skip or move forward
            option2 = dp(i + 2, new_state) + cost2  # Take and jump
            # ... more options as needed
            # Store and return optimal choice
            memo[key] = max(option1, option2)  # or min(), depending on problem
            return memo[key]

        return dp(0)  # Start from first position
💡 Mental Model: Think of top-down DP as "asking questions recursively and remembering answers." Each recursive call asks: "What's the best I can do from here?" and caches the answer.
Bottom-Up (Tabulation) Template
class Solution:
    def solveProblem(self, arr):
        n = len(arr)
        if n == 0:
            return edge_case_value
        # Initialize DP table
        dp = [0] * n  # or (n + 1) if you need extra space
        # Base case: smallest subproblem
        dp[0] = base_case_value
        if n > 1:
            dp[1] = base_case_value_2
        # Fill table iteratively from small to large subproblems
        for i in range(2, n):  # or range(1, n) depending on base cases
            # Recurrence relation: compute from previous subproblems
            option1 = dp[i - 1] + cost1  # From previous position
            option2 = dp[i - 2] + cost2  # From two positions back
            # ... more options as needed
            dp[i] = max(option1, option2)  # or min(), depending on problem
        return dp[n - 1]  # or dp[n], depending on indexing
💡 Mental Model: Think of bottom-up DP as "building a ladder from the ground up." You solve the smallest problems first, then use those answers as stepping stones to solve larger problems.
Space-Optimized Template
class Solution:
    def solveProblemOptimized(self, arr):
        n = len(arr)
        if n == 0:
            return edge_case_value
        # Only keep track of what you need
        prev2 = base_case_value    # dp[i-2]
        prev1 = base_case_value_2  # dp[i-1]
        for i in range(2, n):
            # Compute current using only previous values
            current = max(prev1 + cost1, prev2 + cost2)
            # Slide the window forward
            prev2 = prev1
            prev1 = current
        return prev1  # The last computed value
🎯 Key Principle: If your recurrence only depends on the previous k states, you only need O(k) space instead of O(n). This is one of the most impactful optimizations in DP!
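The same trick generalizes to any fixed lookback k. A sketch for k = 3 using the Tribonacci recurrence T(n) = T(n-1) + T(n-2) + T(n-3) (LC 1137): three rolling variables replace the whole array.

```python
def tribonacci(n):
    # T(0)=0, T(1)=1, T(2)=1; each term depends on the previous three,
    # so a window of three variables replaces the full dp array
    if n == 0:
        return 0
    if n <= 2:
        return 1
    a, b, c = 0, 1, 1
    for _ in range(3, n + 1):
        a, b, c = b, c, a + b + c
    return c
```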
The 1D DP Problem-Solving Checklist
Use this systematic checklist every time you encounter a DP problem. Print it, bookmark it, internalize it:
📋 1D DP SYSTEMATIC APPROACH
□ STEP 1: RECOGNIZE THE PATTERN
├─ Does the problem ask for optimization? (max/min/count)
├─ Can we break it into overlapping subproblems?
├─ Does the answer for position i depend only on previous positions?
└─ Are we making decisions at each step that affect future choices?
□ STEP 2: DEFINE THE STATE
├─ What does dp[i] represent?
├─ What's the physical meaning of this state?
├─ Do I need additional state dimensions? (if yes, consider 2D DP)
└─ What are the valid ranges for my state?
□ STEP 3: ESTABLISH BASE CASES
├─ What are the smallest subproblems I can solve directly?
├─ What happens at boundaries? (i = 0, i = n-1, empty array)
└─ Are there edge cases? (n = 0, n = 1, all same elements)
□ STEP 4: FORMULATE RECURRENCE RELATION
├─ What choices do I have at position i?
├─ How does each choice relate to previous states?
├─ Am I maximizing or minimizing?
└─ Write it mathematically: dp[i] = ...
□ STEP 5: DETERMINE ITERATION ORDER
├─ Top-down (recursion + memo) or bottom-up (iteration)?
├─ If bottom-up: left-to-right or right-to-left?
└─ Can I compute states in the order I need them?
□ STEP 6: IMPLEMENT AND VERIFY
├─ Code the solution following template
├─ Test with small examples by hand
├─ Check boundary conditions
└─ Verify with provided test cases
□ STEP 7: OPTIMIZE
├─ Can I reduce space complexity?
├─ Can I eliminate redundant computations?
└─ Are there special cases that allow shortcuts?
⚠️ Common Mistake: Skipping Step 2 (defining state clearly) is the #1 reason solutions fail. Always write down explicitly: "dp[i] means [the exact answer to which subproblem]" before coding! ⚠️
Time and Space Complexity Analysis Patterns
Understanding complexity is crucial for interviews and real-world applications. Here are the patterns for 1D DP:
📋 Quick Reference Card: 1D DP Complexity Patterns
| Pattern | Time Complexity | Space Complexity (standard) | Space Complexity (optimized) | Example Problem |
|---|---|---|---|---|
| 🔧 Linear scan with constant lookback | O(n) | O(n) | O(1) | Climbing Stairs, House Robber |
| 🔧 Linear scan with inner loop to i | O(n²) | O(n) | O(n) | Longest Increasing Subsequence |
| 🔧 Linear scan with inner loop to k | O(n·k) | O(n) | O(n) | Coin Change, Jump Game |
| 🔧 String/array with nested iteration | O(n²) | O(n) | O(1)* | Palindromic Substrings |
| 🔧 State machine (m states) | O(n·m) | O(n·m) | O(m) | Stock with Cooldown |
| 🔧 Binary search optimization | O(n log n) | O(n) | O(n) | LIS (binary search variant) |
*Some problems allow constant space through clever state tracking
🎯 Analysis Framework:
For Time Complexity:
- Count the number of states: How many unique subproblems exist? Usually O(n) for 1D
- Analyze work per state: What does the recurrence relation do?
- Constant operations: O(1)
- Loop through previous elements: O(n) or O(k)
- Binary search: O(log n)
- Multiply: Total time = states × work per state
For Space Complexity:
- Memoization/Table size: How many states do you store? Usually O(n)
- Recursion depth (top-down only): Maximum call stack depth, usually O(n)
- Optimization potential: If recurrence only looks back k steps, optimize to O(k)
💡 Pro Tip: In interviews, always mention both the initial solution complexity and potential optimizations. For example: "The standard bottom-up approach is O(n) time and O(n) space, but since we only need the previous two values, we can optimize space to O(1)."
Transition Guide: From 1D to 2D DP
You're ready to tackle 2D DP problems when you can confidently:
✅ Readiness Checklist:
- Solve any 1D DP problem from the intermediate list in under 30 minutes
- Explain the difference between top-down and bottom-up without reference
- Identify the recurrence relation in an unfamiliar problem within 5 minutes
- Optimize space complexity without looking at solutions
- Debug state transition errors systematically
🎯 Bridge Problems (These help transition from 1D to 2D thinking):
- Unique Paths (LC 62) - Simple 2D grid; you can think of it as "1D + 1D"
- Minimum Path Sum (LC 64) - Extends 1D running-minimum thinking to a grid
- Longest Palindromic Subsequence (LC 516) - Uses 2D DP but builds on 1D string patterns
- Edit Distance (LC 72) - Classic 2D DP comparing two sequences
Key Differences to Expect:
1D DP → 2D DP Evolution
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
State: dp[i] → dp[i][j]
Input: one array → two arrays/2D grid
Base: one point → one row + one column
Space: O(n) → O(n·m) or O(min(n,m))
Recurrence: 1-2 prev → up to 4+ previous states
❌ Wrong thinking: "2D DP is completely different from 1D DP—I need to relearn everything."
✅ Correct thinking: "2D DP extends 1D DP principles. I still define states, establish base cases, and build recurrence relations—just with one more dimension to consider."
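As a bridge, here is a rough Python sketch of Unique Paths (LC 62) that illustrates the "Space: O(n·m) → O(min(n,m))" row above: the state is conceptually 2D (dp[i][j] = paths to cell (i, j)), but because each row depends only on the row above it, a single 1D row suffices:

```python
def unique_paths(m: int, n: int) -> int:
    """Count paths from top-left to bottom-right of an m x n grid,
    moving only right or down. The 2D table is compressed to one row."""
    row = [1] * n  # first row: exactly one path to each cell (all rights)
    for _ in range(1, m):
        for j in range(1, n):
            # Before this assignment, row[j] holds dp[i-1][j] (cell above)
            # and row[j-1] holds dp[i][j-1] (cell to the left).
            row[j] += row[j - 1]
    return row[-1]
```

Notice that this is exactly the 1D rolling-array trick you already know, applied one dimension up: the same "only look back k steps" reasoning, with "steps" now meaning rows.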
Summary: What You Now Understand
When you started this lesson, 1D Dynamic Programming might have seemed like an impenetrable wall of recursion, memoization, and optimization. Now, you have a comprehensive mental framework:
📋 Knowledge Transformation Summary
| Concept | Before This Lesson | After This Lesson |
|---|---|---|
| 🧠 DP Recognition | "Is this even a DP problem?" | Systematic checklist to identify DP patterns |
| 🎯 State Definition | Vague understanding of "states" | Clear framework: dp[i] = [specific subproblem answer] |
| 🔧 Solution Approach | Only knew one method (usually recursion) | Master both top-down and bottom-up, know when to use each |
| 📊 Problem Patterns | Saw each problem as unique | Recognize 6+ core patterns (linear, subsequence, state machine, etc.) |
| ⚡ Optimization | Never considered space optimization | Automatically identify O(n) → O(1) opportunities |
| 🐛 Debugging | Random trial-and-error | Systematic debugging: base cases → transitions → boundaries |
| 📈 Complexity Analysis | Rough guesses | Precise framework: count states × work per state |
| 🎓 Practice Strategy | Random LeetCode grinding | Structured progression from foundation to advanced |
⚠️ Final Critical Points to Remember:
⚠️ Always define your state explicitly in words before coding. Write it down: "dp[i] represents [exact meaning]."
⚠️ The recurrence relation is the heart of DP. If you can't clearly articulate how dp[i] relates to previous states, you're not ready to code.
⚠️ Base cases aren't optional details—they're the foundation. Get them wrong, and everything crumbles. Always test i=0, i=1, and empty inputs.
⚠️ Space optimization should come AFTER correctness. First make it work with full O(n) space, then optimize. Premature optimization causes bugs.
⚠️ Top-down and bottom-up are equivalent in power but different in debugging. Top-down is easier to formulate (think recursively), bottom-up is easier to optimize (no recursion overhead).
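The last point is worth seeing side by side. A minimal House Robber sketch in Python: both functions encode the identical recurrence max(rob(i-1), rob(i-2) + nums[i]), one recursively with memoization, one iteratively with rolling variables:

```python
from functools import lru_cache


def rob_top_down(nums: list[int]) -> int:
    """Top-down: recursion mirrors the recurrence directly;
    lru_cache memoizes each state."""
    @lru_cache(maxsize=None)
    def best(i: int) -> int:
        if i < 0:
            return 0
        return max(best(i - 1), best(i - 2) + nums[i])
    return best(len(nums) - 1)


def rob_bottom_up(nums: list[int]) -> int:
    """Bottom-up: same recurrence, iterative order, O(1) space."""
    prev2 = prev1 = 0  # best up to i-2 and i-1
    for x in nums:
        prev2, prev1 = prev1, max(prev1, prev2 + x)
    return prev1
```

Same states, same recurrence, same answers; only the evaluation order and the memory layout differ.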
Practical Applications and Next Steps
Dynamic Programming isn't just an interview skill—it's a powerful problem-solving paradigm with real-world applications:
💡 Real-World Example:
- Bioinformatics: DNA sequence alignment uses DP (edit distance algorithms) to find genetic similarities
- Finance: Portfolio optimization and option pricing models rely on DP for multi-stage decision-making
- Text Prediction: Autocomplete and spell-check systems use DP for finding closest word matches
- Resource Allocation: Cloud computing schedulers use DP to optimize task distribution and minimize costs
- Game Development: AI pathfinding and decision trees in strategy games employ DP for optimal move calculation
🎯 Your Next Steps:
Week 1-2: Foundation Mastery
- Solve all 7 foundation problems listed earlier
- Implement each problem in both top-down and bottom-up
- Write explanations in your own words for each solution
Week 3-4: Pattern Recognition
- Tackle intermediate problems
- Before looking at hints, spend 15 minutes identifying which pattern applies
- Create a personal "pattern library" with notes on when to use each
Week 5+: Advanced Techniques
- Attempt advanced problems
- Start contributing to discussion forums—explaining solutions deepens understanding
- Begin exploring 2D DP bridge problems
🧠 Mnemonic for DP Problem-Solving: "SDBRE"
- State: Define what dp[i] means
- Decision: What choices do I have at each step?
- Base: What are the smallest subproblems?
- Recurrence: How do states relate?
- Execute: Code and verify
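The SDBRE steps map directly onto code. As one worked illustration (a sketch, not the only way to write it), here is Coin Change in Python with each mnemonic step called out as a comment:

```python
def coin_change(coins: list[int], amount: int) -> int:
    """Fewest coins summing to `amount`, or -1 if impossible."""
    # State: dp[a] = fewest coins needed to make amount a.
    # Base: dp[0] = 0 (zero coins make amount 0); everything else unknown.
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        # Decision: at amount a, try spending each coin denomination c.
        for c in coins:
            # Recurrence: dp[a] = min over c <= a of dp[a - c] + 1.
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    # Execute: verify reachability and return the answer.
    return dp[amount] if dp[amount] != INF else -1
```

Walking through State → Decision → Base → Recurrence → Execute before typing anything is exactly the discipline the mnemonic is meant to enforce.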
Final Motivation:
Dynamic Programming has a reputation for being difficult, but you've now seen it's actually a systematic, learnable skill. The problems that once seemed impossible are now puzzles with a clear solution pathway. Every master was once a beginner who refused to give up.
Your journey doesn't end here—it evolves. Take the templates, checklists, and strategies from this lesson and apply them deliberately. Track your progress. Celebrate small wins. When you solve your first DP problem independently, without hints, you'll experience one of the most satisfying moments in your coding journey.
The patterns you've learned here—breaking problems into subproblems, recognizing optimal substructure, avoiding redundant computation—transcend Dynamic Programming. They're fundamental computer science principles that will serve you in system design, optimization problems, and algorithmic thinking across your entire career.
Now go practice. Your future self, confidently solving DP problems in interviews and building optimized systems in production, will thank you for the deliberate effort you invest today.
🎯 Remember: Consistency beats intensity. Solve one DP problem every day for 30 days, and you'll join the small minority of developers who truly understand Dynamic Programming.