Graph Algorithms & Search
Navigate complex relationships using BFS, DFS, and shortest path algorithms
Introduction to Graph Algorithms & Search
Have you ever wondered how LinkedIn suggests connections you might know, or how Google Maps finds the fastest route through a maze of city streets? The answer lies in graph algorithms—one of the most powerful and prevalent problem-solving techniques in computer science. If you're preparing for technical interviews, understanding graphs isn't just helpful; it's essential. These problems appear in roughly 30-40% of medium to hard LeetCode questions, making them the single most important category to master after arrays and strings. The good news? Once you understand the core patterns, you'll find yourself recognizing graph problems everywhere—and our free flashcards throughout this lesson will help cement these patterns in your memory.
But why do tech companies care so much about graph search algorithms? The reason is simple: graphs model relationships, and relationships are at the heart of nearly every complex system we build. When Facebook needs to suggest friends, it's traversing a social graph. When Amazon recommends products, it's analyzing a graph of purchase behaviors. When your IDE detects circular dependencies in your code, it's running cycle detection on a dependency graph. Mastering graph algorithms doesn't just help you pass interviews—it fundamentally changes how you think about problems.
Why Graphs Dominate Technical Interviews
Let's address the elephant in the room: why do interviewers love graph problems so much? The answer reveals something important about what these interviews are really testing.
🎯 Key Principle: Graph problems test multiple skills simultaneously—problem decomposition, recursion, state management, and algorithmic thinking—making them ideal interview questions.
When you solve a graph problem, you're demonstrating:
🧠 Pattern Recognition - Can you identify that a seemingly unrelated problem is actually a graph problem in disguise?
🔧 Implementation Skills - Can you translate a high-level algorithm into clean, bug-free code?
🎯 Trade-off Analysis - Can you articulate why you chose DFS over BFS, or adjacency list over adjacency matrix?
📚 Edge Case Handling - Can you handle disconnected graphs, cycles, and empty inputs gracefully?
Consider this: when an interviewer asks you to find if a course schedule is possible given prerequisites, they're not really asking about course scheduling. They're asking if you can recognize this as a topological sort problem on a directed acyclic graph (DAG), implement cycle detection, and explain your reasoning—all under pressure.
💡 Real-World Example: Airbnb frequently asks graph problems because their platform is fundamentally about connections—guests to hosts, locations to amenities, reviews to credibility scores. A candidate who deeply understands graphs understands the structure of their business.
Graphs in the Real World: Beyond the Interview
While passing interviews is important, understanding where graphs appear in production systems will make you a better engineer and help you recognize graph problems intuitively.
Social Networks are perhaps the most obvious example. Every social platform—Facebook, LinkedIn, Twitter, Instagram—is built on a graph where users are nodes and relationships are edges. When you see "People you may know," you're witnessing graph traversal algorithms finding nodes at distance 2 or 3 from you. When you see "6 degrees of separation," that's breadth-first search (BFS) finding the shortest path between any two people.
Routing and Navigation systems rely heavily on graphs. Every intersection is a node, every road is an edge with a weight (travel time or distance). Google Maps doesn't just use simple graph search—it uses sophisticated variants like A* search and Dijkstra's algorithm, but they're all built on the fundamental BFS and DFS principles you'll master in this lesson.
Dependency Resolution is everywhere in software engineering. When npm installs packages, it's resolving a dependency graph. When your build system determines compilation order, it's performing topological sort. When Docker starts containers, it's analyzing a graph of dependencies. Understanding graphs means understanding how modern software is actually built.
Web Crawling and Indexing treat the internet itself as a massive graph where web pages are nodes and hyperlinks are directed edges. Google's PageRank algorithm, which revolutionized search, is fundamentally a graph algorithm. When you implement a web crawler in an interview, you're using BFS or DFS to traverse this graph.
🤔 Did you know? The original PageRank algorithm treats the web as a graph with 50+ billion nodes and performs iterative traversals to calculate importance scores. The same BFS principles you'll learn work at both small and massive scales.
Understanding Graph Representations: The Foundation
Before we can search graphs effectively, we need to understand how to represent them in code. Your choice of representation has profound implications for both time and space complexity, and interviewers often ask you to justify your choice.
A graph consists of:
- Vertices (or nodes): The entities in your system
- Edges: The connections between entities
- Weights (optional): Values associated with edges
- Direction (optional): Whether edges are one-way or bidirectional
There are two primary ways to represent graphs in code: adjacency lists and adjacency matrices. Let's explore both with concrete examples.
Adjacency List Representation
An adjacency list stores, for each node, a list of all nodes it connects to. This is typically implemented as a hash map (dictionary) where keys are nodes and values are lists of neighbors.
# Adjacency list for an undirected graph
# Graph: 0 --- 1
#        |     |
#        2 --- 3
graph = {
    0: [1, 2],  # Node 0 connects to nodes 1 and 2
    1: [0, 3],  # Node 1 connects to nodes 0 and 3
    2: [0, 3],  # Node 2 connects to nodes 0 and 3
    3: [1, 2]   # Node 3 connects to nodes 1 and 2
}

# For directed graphs, edges only go one way
digraph = {
    0: [1, 2],  # 0 points to 1 and 2
    1: [3],     # 1 points to 3
    2: [3],     # 2 points to 3
    3: []       # 3 points nowhere
}

# Weighted graphs store tuples of (neighbor, weight)
weighted_graph = {
    0: [(1, 4), (2, 1)],  # 0->1 with weight 4, 0->2 with weight 1
    1: [(3, 2)],          # 1->3 with weight 2
    2: [(3, 5)],          # 2->3 with weight 5
    3: []
}
💡 Pro Tip: In LeetCode problems, you'll often need to build the adjacency list yourself from an edge list. For example, edges = [[0,1], [0,2], [1,3], [2,3]] would be converted to the adjacency list above.
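If it helps to see that conversion in code, here is a minimal sketch (the helper name `build_adjacency_list` is ours, not a standard library function):

```python
from collections import defaultdict

def build_adjacency_list(edges, directed=False):
    """Convert an edge list into an adjacency-list dictionary.
    Note: nodes with no edges will not appear as keys."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        if not directed:
            graph[v].append(u)  # undirected: also add the reverse edge
    return dict(graph)

edges = [[0, 1], [0, 2], [1, 3], [2, 3]]
print(build_adjacency_list(edges))
# {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
```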
Adjacency Matrix Representation
An adjacency matrix uses a 2D array where matrix[i][j] indicates whether there's an edge from node i to node j. For weighted graphs, the cell stores the weight; for unweighted graphs, it stores 1 (edge exists) or 0 (no edge).
# Adjacency matrix for the same graph
# Graph: 0 --- 1
#        |     |
#        2 --- 3
# 4 nodes, so a 4x4 matrix
matrix = [
    [0, 1, 1, 0],  # Row 0: edges from node 0
    [1, 0, 0, 1],  # Row 1: edges from node 1
    [1, 0, 0, 1],  # Row 2: edges from node 2
    [0, 1, 1, 0]   # Row 3: edges from node 3
]

# Check if an edge exists from node 0 to node 1
has_edge = matrix[0][1] == 1  # True

# For weighted graphs, store weights instead of 1
# Use infinity (or -1) to indicate no edge
from math import inf

weighted_matrix = [
    [0, 4, 1, inf],
    [inf, 0, inf, 2],
    [inf, inf, 0, 5],
    [inf, inf, inf, 0]
]
Complexity Trade-offs: When to Use Each
The choice between adjacency list and adjacency matrix depends on your graph's properties and the operations you need to perform.
📋 Quick Reference Card: Representation Complexity
| Operation | 📊 Adjacency List | 📊 Adjacency Matrix |
|---|---|---|
| 🔍 Check if edge exists | O(degree of node) | O(1) |
| 🔄 Iterate all edges | O(V + E) | O(V²) |
| 💾 Space complexity | O(V + E) | O(V²) |
| ➕ Add edge | O(1) | O(1) |
| ➖ Remove edge | O(degree) | O(1) |
Where V = number of vertices (nodes), E = number of edges
✅ Correct thinking: "This graph has 10,000 nodes but each node only connects to 3-5 others (sparse graph). I'll use an adjacency list to save space."
❌ Wrong thinking: "Matrices are simpler to code, so I'll always use them." (This costs O(V²) space even for sparse graphs!)
🎯 Key Principle: For most LeetCode problems, use adjacency lists. Real-world graphs are typically sparse (few edges relative to possible edges), making adjacency lists both faster and more space-efficient.
💡 Mental Model: Think of an adjacency list as your phone's contact list—you only store the people you actually know. An adjacency matrix is like a table with every human on Earth where you mark who you know—mostly empty, but fast to check any specific person.
Core Search Strategies: DFS vs BFS
Now that we can represent graphs, let's preview the two fundamental ways to explore them: Depth-First Search (DFS) and Breadth-First Search (BFS). These aren't just algorithms—they're ways of thinking about exploration and discovery.
Depth-First Search: The Deep Diver
DFS explores as far as possible along one branch before backtracking. Imagine you're in a maze: DFS means picking a path and following it until you hit a dead end, then backing up to the last intersection and trying a different path.
def dfs_recursive(graph, node, visited=None):
    """Recursive DFS - explores deeply before backtracking"""
    if visited is None:
        visited = set()
    visited.add(node)  # Mark current node as visited
    print(f"Visiting: {node}")
    # Recursively visit all unvisited neighbors
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs_recursive(graph, neighbor, visited)
    return visited

# Example usage:
graph = {
    0: [1, 2],
    1: [0, 3, 4],
    2: [0],
    3: [1],
    4: [1]
}

# Starting from node 0, might visit: 0 -> 1 -> 3 -> 4 -> 2
# (goes deep into branch 0->1->3 before exploring 4 or 2)
dfs_recursive(graph, 0)
🧠 Mnemonic: DFS = "Dive First, Surface later" - like a scuba diver exploring a deep trench before moving to the next one.
Breadth-First Search: The Level Explorer
BFS explores all neighbors at the current depth before moving to the next level. It's like ripples in a pond—you explore everything at distance 1, then distance 2, then distance 3.
from collections import deque

def bfs(graph, start):
    """BFS explores level-by-level using a queue"""
    visited = set([start])
    queue = deque([start])  # Queue for BFS (FIFO)
    while queue:
        node = queue.popleft()  # Get next node to explore
        print(f"Visiting: {node}")
        # Add all unvisited neighbors to the queue
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return visited

# Using the same graph as the DFS example
# Starting from node 0, visits: 0 -> 1 -> 2 -> 3 -> 4
# (explores all distance-1 nodes [1, 2] before distance-2 nodes [3, 4])
bfs(graph, 0)
🧠 Mnemonic: BFS = "Breadth Before Depth" - like exploring all rooms on one floor before taking the stairs to the next floor.
When to Use DFS vs BFS
Choosing between DFS and BFS is one of the most important pattern recognition skills for LeetCode success.
Use DFS when:
- 🎯 Finding any path or solution (not necessarily shortest)
- 🎯 Exploring all possibilities (backtracking problems)
- 🎯 Detecting cycles in a graph
- 🎯 Implementing topological sort
- 🎯 Memory is limited (DFS uses O(height) space vs BFS's O(width))
- 🎯 The solution is likely to be far from the start
Use BFS when:
- 🎯 Finding the shortest path in an unweighted graph
- 🎯 Level-order operations ("find all nodes at distance k")
- 🎯 The solution is likely to be close to the start
- 🎯 Finding minimum number of steps to reach a goal
- 🎯 Exploring layer by layer matters to the problem
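The "find all nodes at distance k" case above can be sketched with a level-by-level BFS. This is an illustrative sketch; the function name `nodes_at_distance_k` is our choice:

```python
def nodes_at_distance_k(graph, start, k):
    """Return all nodes exactly k edges away from start in an unweighted graph."""
    visited = {start}
    level = [start]
    for _ in range(k):
        next_level = []
        for node in level:
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    next_level.append(neighbor)
        level = next_level  # advance exactly one BFS layer
    return level

graph = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(nodes_at_distance_k(graph, 0, 2))  # [3, 4]
```

Processing one whole layer per loop iteration is the same trick used for level-order problems like "minimum steps to reach a goal."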
💡 Pro Tip: If the problem mentions "shortest," "minimum steps," or "closest," your instinct should be BFS. If it mentions "all possible paths," "any valid path," or "detect cycles," think DFS.
⚠️ Common Mistake 1: Using DFS for shortest path problems. DFS finds a path, but not necessarily the shortest path in an unweighted graph. BFS guarantees shortest path because it explores level-by-level. ⚠️
Example: Finding shortest path from A to D

Graph 1:  A --- B --- D
          |           |
          C ----------+

DFS might find:   A -> C -> D (2 edges)
BFS always finds: A -> B -> D (2 edges)

But if the graph were:

Graph 2:  A --- B --- D
          |     |
          C ----+

DFS might find:   A -> C -> B -> D (3 edges)
BFS always finds: A -> B -> D (2 edges) ✓
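A minimal BFS shortest-path sketch for situations like this (the function name `shortest_path_length` is illustrative):

```python
from collections import deque

def shortest_path_length(graph, start, target):
    """Length (in edges) of the shortest path in an unweighted graph, or -1."""
    if start == target:
        return 0
    visited = {start}
    queue = deque([(start, 0)])  # (node, distance from start)
    while queue:
        node, dist = queue.popleft()
        for neighbor in graph[node]:
            if neighbor == target:
                return dist + 1  # the first time BFS reaches it is the shortest
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return -1  # target unreachable

# Graph 2 from above: A-B, A-C, B-C, B-D
graph = {'A': ['B', 'C'], 'B': ['A', 'C', 'D'], 'C': ['A', 'B'], 'D': ['B']}
print(shortest_path_length(graph, 'A', 'D'))  # 2 (A -> B -> D)
```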
Recognizing Graph Problems in Disguise
One of the most valuable skills you'll develop is recognizing when a problem is actually a graph problem, even when it doesn't mention graphs at all. Many LeetCode problems are graph problems wearing clever disguises.
Classic Graph Problem Disguises:
🔒 Dependencies - "Given course prerequisites, determine if it's possible to complete all courses"
- Reality: This is cycle detection in a directed graph
- Nodes: Courses
- Edges: Prerequisites (course A depends on course B = edge from B to A)
🔒 Islands and Regions - "Count the number of islands in a 2D grid"
- Reality: This is connected component counting
- Nodes: Each cell in the grid
- Edges: Adjacent cells (up, down, left, right)
🔒 Transformation Problems - "Minimum steps to transform word 'hit' to 'cog'"
- Reality: This is shortest path in an implicit graph
- Nodes: Each word
- Edges: Words that differ by one letter
🔒 State Exploration - "Find minimum moves to solve a puzzle"
- Reality: This is BFS through a state space
- Nodes: Each possible puzzle state
- Edges: Valid moves
🤔 Did you know? The famous "Six Degrees of Kevin Bacon" game is actually computing shortest paths in the actor collaboration graph using BFS. The same algorithm works for finding the minimum number of word changes to transform "hit" into "cog"!
💡 Mental Model: When you see a problem asking about connections, relationships, paths, or dependencies, mentally translate it into graph terminology:
- What are my nodes?
- What are my edges?
- Do I need DFS or BFS?
- Should I use adjacency list or matrix?
This translation is often the hardest and most important step. Once you see the graph structure, the algorithm becomes obvious.
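As one illustration of this translation, the word-transformation disguise can be solved with BFS over an implicit graph: neighbors are generated on the fly rather than stored anywhere. This is a sketch under our own conventions (the name `min_transformations` is illustrative, and we count edges, i.e. letter changes, rather than words in the sequence):

```python
from collections import deque
import string

def min_transformations(begin, end, word_list):
    """Fewest single-letter changes to turn begin into end, or -1 if impossible.
    Nodes are words; edges connect words that differ by exactly one letter."""
    words = set(word_list)
    if end not in words:
        return -1
    visited = {begin}
    queue = deque([(begin, 0)])
    while queue:
        word, steps = queue.popleft()
        if word == end:
            return steps
        # Generate neighbors on the fly -- the graph is never built explicitly
        for i in range(len(word)):
            for c in string.ascii_lowercase:
                candidate = word[:i] + c + word[i + 1:]
                if candidate in words and candidate not in visited:
                    visited.add(candidate)
                    queue.append((candidate, steps + 1))
    return -1

print(min_transformations("hit", "cog", ["hot", "dot", "dog", "lot", "log", "cog"]))
# 4 (hit -> hot -> dot -> dog -> cog)
```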
Visualization: How DFS and BFS Differ
Let's visualize how these algorithms explore the same graph differently. Understanding this difference viscerally will help you choose the right tool.
Graph Structure:
        1
       / \
      2   3
     / \   \
    4   5   6
   /
  7
DFS Traversal (starting from 1):
Visit order: 1 -> 2 -> 4 -> 7 -> 5 -> 3 -> 6
Step-by-step:
1: Start at 1
   ↓ go deep
2: Visit 2 (child of 1)
   ↓ go deep
4: Visit 4 (child of 2)
   ↓ go deep
7: Visit 7 (child of 4)
   ← backtrack (no children)
5: Visit 5 (sibling of 4)
   ← backtrack (done with 2's subtree)
3: Visit 3 (sibling of 2)
   ↓ go deep
6: Visit 6 (child of 3)
BFS Traversal (starting from 1):
Visit order: 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7
Step-by-step:
Level 0: [1]
Level 1: [2, 3] (all children of level 0)
Level 2: [4, 5, 6] (all children of level 1)
Level 3: [7] (all children of level 2)
Notice how DFS goes "1 → 2 → 4 → 7" immediately (diving deep), while BFS goes "1 → 2 → 3" (exploring the current level completely) before moving deeper.
🎯 Key Principle: DFS uses a stack (or recursion, which uses the call stack), while BFS uses a queue. This fundamental difference in data structure drives their different exploration patterns.
DFS: Stack (LIFO - Last In, First Out)
- Push node 1
- Pop 1, push its children [3, 2] (in reverse, so 2 sits on top)
- Pop 2 (last in)
- Push children of 2...
Result: Goes deep quickly

BFS: Queue (FIFO - First In, First Out)
- Enqueue node 1
- Dequeue 1, enqueue its children [2, 3]
- Dequeue 2 (first in)
- Enqueue children of 2...
Result: Explores level by level
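One way to feel this principle: the exact same traversal loop becomes BFS or DFS depending only on which end of the container you pop from. This is an illustrative sketch (our own function name); note the DFS order differs from the figure above because children are pushed left-to-right without reversing:

```python
from collections import deque

graph = {1: [2, 3], 2: [4, 5], 3: [6], 4: [7], 5: [], 6: [], 7: []}

def traverse(graph, start, use_queue):
    """One loop, two algorithms: FIFO pop gives BFS, LIFO pop gives DFS."""
    container = deque([start])
    visited = {start}
    order = []
    while container:
        # The only difference between BFS and DFS is this line:
        node = container.popleft() if use_queue else container.pop()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                container.append(neighbor)
    return order

print(traverse(graph, 1, use_queue=True))   # BFS: [1, 2, 3, 4, 5, 6, 7]
print(traverse(graph, 1, use_queue=False))  # DFS: [1, 3, 6, 2, 5, 4, 7]
```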
Building Your Graph Intuition
As we conclude this introduction, here are the core principles to internalize:
1. Graph problems are everywhere - About 30-40% of medium-hard LeetCode problems are graph problems, often in disguise. Train yourself to see the graph structure.
2. Representation matters - Choose adjacency lists for sparse graphs (most real-world cases) and adjacency matrices only when you need O(1) edge lookup or the graph is dense.
3. DFS vs BFS is not arbitrary - DFS for exploration and backtracking, BFS for shortest paths and level-order. This choice often makes the difference between an elegant solution and a tangled mess.
4. Master the templates - Both DFS and BFS have standard templates that work for 90% of problems. We'll explore these in depth in the next sections.
5. Practice translation - The hardest part is often recognizing that a problem is a graph problem and translating it into nodes and edges.
💡 Remember: Every expert was once a beginner who struggled with their first graph problem. The key is deliberate practice: solve problems, recognize patterns, and build your mental library of "I've seen this before."
In the upcoming sections, we'll dive deep into DFS and BFS implementations, explore battle-tested templates for LeetCode success, examine common pitfalls, and build your confidence with real problem patterns. By the end of this lesson, graph problems will transform from intimidating puzzles into opportunities to showcase your skills.
Let's begin mastering these fundamental algorithms that will unlock countless interview problems and real-world solutions.
Depth-First Search (DFS): Concepts and Implementation
Imagine you're exploring a vast maze. At each intersection, you pick a path and follow it as far as possible before backtracking to try another route. This intuitive exploration strategy is precisely how Depth-First Search (DFS) navigates through graphs. DFS is one of the most fundamental graph traversal algorithms, and mastering it will unlock solutions to countless LeetCode problems.
Understanding the DFS Exploration Strategy
Depth-First Search explores a graph by diving as deeply as possible along each branch before backtracking. The algorithm follows a simple principle: when you visit a node, immediately explore one of its unvisited neighbors, then explore that neighbor's unvisited neighbors, and so on. Only when you reach a dead end (a node with no unvisited neighbors) do you backtrack to explore alternative paths.
🎯 Key Principle: DFS prioritizes depth over breadth. It explores vertically down branches before exploring horizontally across siblings.
Let's visualize this with a simple graph:
        A
       / \
      B   C
     / \   \
    D   E   F
Starting from node A, a DFS traversal might visit nodes in this order: A → B → D → E → C → F. Notice how we fully explore the left subtree (B and its children) before moving to the right side (C and F).
Step-by-step DFS exploration:
Step 1:  [A]        Visit A, mark as visited
Step 2:  [A, B]     Go deep: visit B
Step 3:  [A, B, D]  Go deeper: visit D
Step 4:  [A, B]     Backtrack: D has no unvisited neighbors
Step 5:  [A, B, E]  Visit E (B's other child)
Step 6:  [A, B]     Backtrack from E
Step 7:  [A]        Backtrack from B (fully explored)
Step 8:  [A, C]     Visit C (A's other child)
Step 9:  [A, C, F]  Visit F
Step 10: []         Done - all nodes visited
💡 Mental Model: Think of DFS as a stack of function calls or a stack data structure. You push new discoveries onto the stack and pop when you've exhausted all options at the current depth.
The Mechanics: Stack-Based Exploration and Backtracking
The magic of DFS lies in its use of a stack (either implicit through recursion or explicit through a data structure). This stack naturally handles the backtracking behavior that defines DFS.
When we visit a node, we:
- Mark it as visited to avoid infinite loops
- Process the node (print it, check a condition, etc.)
- Recursively explore each unvisited neighbor
- Automatically backtrack when all neighbors are explored
The stack maintains our exploration history. When we hit a dead end, the stack "remembers" where we came from, allowing us to backtrack and try unexplored paths.
Recursive Implementation: The Elegant Approach
The recursive implementation of DFS is remarkably elegant because the call stack implicitly handles our backtracking needs. Here's a comprehensive implementation in Python:
def dfs_recursive(graph, start, visited=None):
    """
    Performs DFS traversal on a graph using recursion.

    Args:
        graph: Dictionary where keys are nodes and values are lists of neighbors
        start: Starting node for traversal
        visited: Set of already visited nodes (for tracking)

    Returns:
        List of nodes in DFS order
    """
    if visited is None:
        visited = set()
    result = []

    # Base case: if node already visited, return empty
    if start in visited:
        return result

    # Mark current node as visited
    visited.add(start)
    result.append(start)

    # Recursively visit all unvisited neighbors
    for neighbor in graph[start]:
        if neighbor not in visited:
            result.extend(dfs_recursive(graph, neighbor, visited))

    return result

# Example usage:
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': [],
    'F': []
}
print(dfs_recursive(graph, 'A'))  # Output: ['A', 'B', 'D', 'E', 'C', 'F']
Let's break down what happens at each recursive call:
- Visit node A: Add 'A' to visited, add to result, then recursively call on neighbors B and C
- Visit node B: Add 'B' to visited, add to result, recursively call on D and E
- Visit node D: Add 'D' to visited, add to result, no neighbors → return
- Back to B: Continue with neighbor E
- Visit node E: Add 'E' to visited, add to result, no neighbors → return
- Back to B: All neighbors explored → return
- Back to A: Continue with neighbor C
- Visit node C: Add 'C' to visited, add to result, recursively call on F
- Visit node F: Add 'F' to visited, add to result, no neighbors → return
- Back to C: All neighbors explored → return
- Back to A: All neighbors explored → done
💡 Pro Tip: The recursive approach is ideal for problems involving tree traversals, finding paths, and detecting cycles. The code is clean and mirrors the natural recursive structure of many graph problems.
⚠️ Common Mistake 1: Forgetting to pass the visited set through recursive calls, causing each branch to restart the visited tracking. Always maintain state across recursive calls! ⚠️
Iterative Implementation: Making the Stack Explicit
While recursion is elegant, sometimes we need an iterative approach to avoid stack overflow on deep graphs or when the problem explicitly requires it. Here's how to implement DFS iteratively using an explicit stack:
def dfs_iterative(graph, start):
    """
    Performs DFS traversal using an explicit stack (iterative approach).

    Args:
        graph: Dictionary where keys are nodes and values are lists of neighbors
        start: Starting node for traversal

    Returns:
        List of nodes in DFS order
    """
    visited = set()
    stack = [start]  # Initialize stack with the starting node
    result = []

    while stack:
        # Pop a node from the stack
        node = stack.pop()

        # Skip if already visited
        if node in visited:
            continue

        # Mark as visited and process
        visited.add(node)
        result.append(node)

        # Add all unvisited neighbors to the stack
        # Note: we add in reverse order to maintain left-to-right traversal
        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append(neighbor)

    return result

# Example usage with the same graph:
print(dfs_iterative(graph, 'A'))  # Output: ['A', 'B', 'D', 'E', 'C', 'F']
The iterative version explicitly manages what recursion handles automatically:
Stack visualization during execution:
Initial: [A]
Pop A: [C, B] (neighbors added in reverse)
Pop B: [C, E, D] (B's neighbors added)
Pop D: [C, E] (D has no neighbors)
Pop E: [C] (E has no neighbors)
Pop C: [F] (C's neighbor added)
Pop F: [] (F has no neighbors)
Done!
🤔 Did you know? The iterative version can actually be more memory-efficient for wide graphs because it doesn't accumulate recursive call frames. However, for very deep graphs, both approaches use O(V) space in the worst case.
Java Implementation for Interview Settings
Since many technical interviews involve Java, here's a robust implementation that handles both directed and undirected graphs represented as adjacency lists:
import java.util.*;

public class DFSTraversal {

    // Recursive DFS
    public List<Integer> dfsRecursive(Map<Integer, List<Integer>> graph, int start) {
        List<Integer> result = new ArrayList<>();
        Set<Integer> visited = new HashSet<>();
        dfsHelper(graph, start, visited, result);
        return result;
    }

    private void dfsHelper(Map<Integer, List<Integer>> graph,
                           int node,
                           Set<Integer> visited,
                           List<Integer> result) {
        // Mark current node as visited
        visited.add(node);
        result.add(node);

        // Visit all unvisited neighbors
        if (graph.containsKey(node)) {
            for (int neighbor : graph.get(node)) {
                if (!visited.contains(neighbor)) {
                    dfsHelper(graph, neighbor, visited, result);
                }
            }
        }
    }

    // Iterative DFS
    public List<Integer> dfsIterative(Map<Integer, List<Integer>> graph, int start) {
        List<Integer> result = new ArrayList<>();
        Set<Integer> visited = new HashSet<>();
        Stack<Integer> stack = new Stack<>();
        stack.push(start);

        while (!stack.isEmpty()) {
            int node = stack.pop();
            if (visited.contains(node)) {
                continue;
            }
            visited.add(node);
            result.add(node);

            // Add neighbors in reverse order to maintain traversal order
            if (graph.containsKey(node)) {
                List<Integer> neighbors = graph.get(node);
                for (int i = neighbors.size() - 1; i >= 0; i--) {
                    if (!visited.contains(neighbors.get(i))) {
                        stack.push(neighbors.get(i));
                    }
                }
            }
        }

        return result;
    }
}
Time and Space Complexity Analysis
Understanding the complexity of DFS is crucial for interview discussions and choosing the right algorithm.
Time Complexity: O(V + E)
Where V is the number of vertices (nodes) and E is the number of edges.
- We visit each vertex exactly once: O(V)
- For each vertex, we examine all its edges: O(E) total across all vertices
- Combined: O(V + E)
💡 Remember: In an adjacency list representation, we process each edge exactly once (or twice for undirected graphs, but still O(E)).
Space Complexity: O(V)
The space complexity comes from:
- Visited tracking: O(V) for the set/array marking visited nodes
- Recursion stack (recursive) or explicit stack (iterative): O(V) in the worst case (think of a linear chain of nodes)
- Total: O(V)
⚠️ Common Mistake 2: Assuming DFS is always O(V²). This only happens with an adjacency matrix representation where checking all neighbors of all vertices takes V² time. With adjacency lists (standard for LeetCode), it's O(V + E). ⚠️
📋 Quick Reference Card: Complexity Summary
| 📊 Aspect | 🔢 Complexity | 📝 Notes |
|---|---|---|
| ⏱️ Time | O(V + E) | Visit each vertex once, examine each edge once |
| 💾 Space (recursive) | O(V) | Call stack depth + visited set |
| 💾 Space (iterative) | O(V) | Explicit stack + visited set |
| 🎯 Best case time | O(V) | When graph has minimal edges |
| ⚠️ Worst case space | O(V) | Linear chain or star graph |
Common DFS Patterns in LeetCode Problems
DFS is the foundation for solving numerous graph problems. Let's explore the most common patterns you'll encounter:
Pattern 1: Cycle Detection
Problem: Determine if a directed graph contains a cycle.
DFS Strategy: Use three states for each node:
- Unvisited (white): Not yet explored
- Visiting (gray): Currently in the DFS path (on the stack)
- Visited (black): Fully explored
If we encounter a gray node during traversal, we've found a cycle (we've reached a node that's still on our current path).
def has_cycle(graph):
    """
    Detects if a directed graph has a cycle using DFS.
    Returns True if a cycle exists, False otherwise.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        if color[node] == GRAY:   # Back edge found - cycle!
            return True
        if color[node] == BLACK:  # Already fully explored
            return False

        color[node] = GRAY        # Mark as currently exploring
        for neighbor in graph[node]:
            if dfs(neighbor):
                return True
        color[node] = BLACK       # Mark as fully explored
        return False

    # Check all nodes (graph might be disconnected)
    for node in graph:
        if color[node] == WHITE:
            if dfs(node):
                return True
    return False
💡 Pro Tip: This three-color approach is essential for detecting cycles in directed graphs. For undirected graphs, you only need to track if you're visiting a node that's not your immediate parent.
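Here is what that parent-tracking approach might look like for undirected graphs. This is a sketch (the name `has_cycle_undirected` is ours), and it assumes no parallel edges or self-loops:

```python
def has_cycle_undirected(graph):
    """Detect a cycle in an undirected graph: reaching an already-visited
    neighbor that is NOT the node we just came from closes a cycle."""
    visited = set()

    def dfs(node, parent):
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                if dfs(neighbor, node):
                    return True
            elif neighbor != parent:  # visited and not our parent: cycle
                return True
        return False

    # Check every component of a possibly disconnected graph
    return any(dfs(node, None) for node in graph if node not in visited)

print(has_cycle_undirected({0: [1], 1: [0, 2], 2: [1]}))        # False (a chain)
print(has_cycle_undirected({0: [1, 2], 1: [0, 2], 2: [0, 1]}))  # True (a triangle)
```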
Pattern 2: Path Finding
Problem: Find a path between two nodes or find all paths.
DFS Strategy: Maintain the current path as you traverse, and when you reach the target, record the path. Backtrack by removing nodes from the path.
def find_all_paths(graph, start, end, path=None):
    """
    Finds all possible paths from start to end using DFS.
    """
    if path is None:
        path = []
    path = path + [start]  # Add current node to path (creates a new list)

    if start == end:
        return [path]  # Found a complete path

    paths = []
    for neighbor in graph[start]:
        if neighbor not in path:  # Avoid cycles
            new_paths = find_all_paths(graph, neighbor, end, path)
            paths.extend(new_paths)
    return paths

# Example: find all paths from 'A' to 'F'
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': ['F'],
    'E': ['F'],
    'F': []
}
print(find_all_paths(graph, 'A', 'F'))
# Output: [['A', 'B', 'D', 'F'], ['A', 'B', 'E', 'F'], ['A', 'C', 'F']]
Pattern 3: Connected Components
Problem: Count the number of connected components in an undirected graph.
DFS Strategy: For each unvisited node, start a DFS (which explores one complete component), then increment the component counter.
def count_components(n, edges):
    """
    Counts connected components in an undirected graph.
    n: number of nodes (labeled 0 to n-1)
    edges: list of [u, v] pairs
    """
    # Build adjacency list
    graph = {i: [] for i in range(n)}
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)

    visited = set()

    def dfs(node):
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                dfs(neighbor)

    components = 0
    for node in range(n):
        if node not in visited:
            dfs(node)  # Explore the entire component
            components += 1
    return components
Pattern 4: Topological Sort
Problem: Order tasks with dependencies (Directed Acyclic Graph).
DFS Strategy: Perform DFS and add nodes to result in post-order (after exploring all descendants). Reverse the result for topological order.
def topological_sort(graph):
    """
    Returns nodes in topological order using DFS.
    Only works on DAGs (Directed Acyclic Graphs).
    Edges point from a node to the nodes that depend on it.
    """
    visited = set()
    result = []

    def dfs(node):
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                dfs(neighbor)
        # Add to result AFTER visiting all descendants (post-order)
        result.append(node)

    for node in graph:
        if node not in visited:
            dfs(node)
    return result[::-1]  # Reverse for topological order

# Example: course prerequisites
# Edge from each prerequisite to the courses that require it
graph = {
    'CS101': ['CS102'],
    'CS102': ['CS201', 'CS202'],
    'CS201': ['CS301'],
    'CS202': ['CS301'],
    'CS301': []
}
print(topological_sort(graph))
# Output: ['CS101', 'CS102', 'CS202', 'CS201', 'CS301'] (one valid order)
🎯 Key Principle: The post-order traversal in topological sort ensures that when we add a node, all its dependencies have already been added.
Visited Tracking Strategies
Different problems require different approaches to tracking visited nodes. Choosing the right strategy can significantly impact both performance and code clarity.
Strategy 1: Set-Based Tracking (Most Common)
✅ Advantages:
- O(1) lookup time
- Clean and readable
- Works with any node type (strings, objects, etc.)
❌ Disadvantages:
- Extra O(V) space
- Requires hashable node types
visited = set()
visited.add(node)
if node in visited:  # O(1) lookup
    ...
Strategy 2: Array-Based Tracking (When Nodes are Integers)
✅ Advantages:
- Slightly faster than sets
- Better cache locality
- Perfect for numbered nodes (0 to n-1)
❌ Disadvantages:
- Only works with integer node labels
- Requires knowing the number of nodes in advance
visited = [False] * n  # n is the number of nodes
visited[node] = True
if visited[node]:  # O(1) lookup
    ...
Strategy 3: In-Place Marking (Space-Optimized)
✅ Advantages:
- O(1) extra space
- Faster for certain problems
❌ Disadvantages:
- Modifies input (not always allowed)
- Must be able to restore original values
- Limited to specific problem types (matrices, etc.)
# Example: grid traversal
grid[row][col] = '#'  # Mark as visited
# Later, restore if needed:
grid[row][col] = original_value
⚠️ Common Mistake 3: Using a list to track visited nodes and checking if node in visited_list. This is O(n) lookup time and will cause Time Limit Exceeded (TLE) on large graphs! Always use a set or boolean array. ⚠️
💡 Pro Tip: For LeetCode problems where nodes are numbered 0 to n-1, prefer boolean arrays. For problems with arbitrary node types (strings, custom objects), use sets. For grid problems, consider in-place marking if allowed.
Recognizing When to Use DFS
DFS excels in specific scenarios. Learn to recognize these patterns:
🎯 Use DFS when you need to:
- Explore all paths or solutions (exhaustive search)
- Detect cycles in a graph
- Perform topological sorting
- Find connected components
- Solve maze or puzzle problems
- Implement backtracking algorithms
- Process tree-like structures
❌ Wrong thinking: "DFS always finds the shortest path because it explores deeply."
✅ Correct thinking: "DFS does NOT guarantee shortest paths. It explores depth-first and may find a long path before a short one. Use BFS for shortest paths in unweighted graphs."
🧠 Mnemonic: DFS for Depth, Dependencies, and Detection (cycles, components). Think "go deep" for exploration, "go deep" for understanding dependencies.
Putting It All Together: A Complete Example
Let's solve a classic LeetCode-style problem using DFS:
Problem: Given an m × n grid where 1 represents land and 0 represents water, count the number of islands. An island is surrounded by water and formed by connecting adjacent lands horizontally or vertically.
def num_islands(grid):
    """
    Counts number of islands in a 2D grid using DFS.
    Time: O(m * n) - visit each cell once
    Space: O(m * n) - recursion depth in worst case
    """
    if not grid or not grid[0]:
        return 0

    rows, cols = len(grid), len(grid[0])
    islands = 0

    def dfs(r, c):
        # Boundary check and water check
        if (r < 0 or r >= rows or c < 0 or c >= cols or
                grid[r][c] == '0'):
            return
        # Mark as visited by changing to '0' (in-place marking)
        grid[r][c] = '0'
        # Explore all 4 directions
        dfs(r + 1, c)  # down
        dfs(r - 1, c)  # up
        dfs(r, c + 1)  # right
        dfs(r, c - 1)  # left

    # Check every cell in the grid
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                # Found unvisited land - start DFS to explore entire island
                dfs(r, c)
                islands += 1  # Increment after exploring complete island
    return islands
## Test case
grid = [
['1', '1', '0', '0', '0'],
['1', '1', '0', '0', '0'],
['0', '0', '1', '0', '0'],
['0', '0', '0', '1', '1']
]
print(num_islands(grid)) # Output: 3
This example demonstrates:
- In-place marking to save space
- Connected components pattern
- Grid traversal with boundary checking
- Backtracking through recursion
Summary: DFS Mastery Checklist
You've now covered the essential foundations of DFS. Here's what you should be comfortable with:
✅ Understanding stack-based exploration and backtracking mechanics
✅ Implementing both recursive and iterative versions
✅ Analyzing O(V + E) time and O(V) space complexity
✅ Recognizing and applying common patterns (cycle detection, path finding, connected components, topological sort)
✅ Choosing appropriate visited tracking strategies
✅ Identifying when DFS is the right tool for the problem
The beauty of DFS lies in its simplicity and versatility. With these fundamentals mastered, you're ready to tackle the majority of graph problems on LeetCode. In the next section, we'll explore Breadth-First Search and learn when it outperforms DFS, particularly for shortest path problems.
Breadth-First Search (BFS): Concepts and Implementation
Imagine you're standing at the center of a maze, and you want to find the shortest path to the exit. You could wander randomly, or you could be systematic: check every location one step away, then every location two steps away, then three steps away, and so on. This expanding circle of exploration is exactly how Breadth-First Search (BFS) works, and it's one of the most powerful techniques in your algorithm toolkit.
Understanding the BFS Mindset
While DFS dives deep into one path before exploring alternatives, BFS takes a fundamentally different approach: it explores all neighbors at the current depth before moving deeper. Think of it like ripples spreading across water after dropping a stone—each ripple represents a level of exploration, expanding outward uniformly.
🎯 Key Principle: BFS explores nodes in order of their distance from the starting point. The first time BFS reaches any node is guaranteed to be via the shortest path.
This level-by-level exploration pattern makes BFS the algorithm of choice for several critical scenarios:
🎯 Finding shortest paths in unweighted graphs 🎯 Level-order traversals in trees 🎯 Finding minimum steps to reach a target state 🎯 Detecting connected components at specific distances
The Queue: BFS's Essential Data Structure
The heart of BFS is the queue data structure, which enforces a First-In-First-Out (FIFO) order. This ordering is what gives BFS its level-by-level behavior. When we discover a node, we add it to the back of the queue. When we want to explore the next node, we take it from the front of the queue.
Queue Behavior in BFS:
Start: [A] (initial node)
Step 1: [B, C, D] (A's neighbors added)
Step 2: [C, D, E, F] (processed B, added its neighbors)
Step 3: [D, E, F, G] (processed C, added its neighbor)
Notice how nodes are processed in the exact order they were discovered. This ensures that all nodes at distance k are processed before any node at distance k+1.
💡 Mental Model: Think of the queue as a waiting line at a ticket counter. Early arrivals (closer nodes) get served first, and newcomers (farther nodes) join the back of the line.
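As a quick sanity check, `collections.deque` gives exactly this waiting-line behavior:

```python
from collections import deque

queue = deque(["A"])            # Start: the source waits alone
node = queue.popleft()          # earliest arrival ("A") is served first
queue.extend(["B", "C", "D"])   # its neighbors join the back of the line
print(node, list(queue))        # A ['B', 'C', 'D']
```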
Basic BFS Implementation Pattern
Let's build BFS from the ground up. Here's the canonical template that works for most graph problems:
from collections import deque
def bfs(graph, start):
    """
    Basic BFS traversal of a graph.

    Args:
        graph: adjacency list representation {node: [neighbors]}
        start: starting node

    Returns:
        list of nodes in BFS order
    """
    visited = set()             # Track visited nodes
    queue = deque([start])      # Initialize queue with start node
    visited.add(start)
    result = []

    while queue:
        node = queue.popleft()  # Get next node (FIFO)
        result.append(node)
        # Explore all neighbors
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)   # Mark as visited
                queue.append(neighbor)  # Add to queue
    return result
## Example usage
graph = {
'A': ['B', 'C'],
'B': ['A', 'D', 'E'],
'C': ['A', 'F'],
'D': ['B'],
'E': ['B', 'F'],
'F': ['C', 'E']
}
print(bfs(graph, 'A')) # Output: ['A', 'B', 'C', 'D', 'E', 'F']
⚠️ Common Mistake 1: Adding nodes to visited when they're dequeued instead of when they're enqueued. This causes duplicate nodes in the queue and potentially exponential time complexity. Always mark nodes as visited when adding them to the queue! ⚠️
Let's trace through this algorithm visually:
Graph Structure: BFS Exploration:
A Level 0: A
/ \ Level 1: B, C
B C Level 2: D, E, F
/ \ / \
D E-F
Queue Evolution:
Step 1: [A] -> Visit A
Step 2: [B, C] -> Visit B
Step 3: [C, D, E] -> Visit C
Step 4: [D, E, F] -> Visit D
Step 5: [E, F] -> Visit E
Step 6: [F] -> Visit F
Why BFS Guarantees Shortest Paths
One of BFS's most powerful properties is its shortest path guarantee in unweighted graphs. This isn't just a happy accident—it's a mathematical certainty that follows from the algorithm's structure.
🎯 Key Principle: In an unweighted graph, the first time BFS visits a node, it has found the shortest path to that node.
The proof is elegant: BFS processes nodes in increasing order of distance from the start. When BFS reaches a node v at distance d, it means:
- All nodes at distance d-1 have already been processed
- Node v is being reached from a node at distance d-1
- Therefore, there cannot be a shorter path to v (which would have been found earlier)
Here's BFS modified to track distances:
from collections import deque
def bfs_shortest_path(graph, start, target):
    """
    Find shortest path length from start to target.
    Returns -1 if no path exists.
    """
    if start == target:
        return 0

    visited = set([start])
    queue = deque([(start, 0)])  # (node, distance) pairs

    while queue:
        node, distance = queue.popleft()
        for neighbor in graph[node]:
            if neighbor == target:
                return distance + 1  # Found target!
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, distance + 1))
    return -1  # No path exists
💡 Pro Tip: You can also track distances using a separate dictionary or by processing the queue level-by-level (explained in the next section).
Level-Order Processing Pattern
Many LeetCode problems require you to process nodes level-by-level, where you need to know which nodes belong to the same level. This is common in tree problems and in questions asking for "minimum steps" or "shortest transformation sequence."
The key technique is to process the queue in batches, where each batch represents one complete level:
from collections import deque
def bfs_level_order(graph, start):
    """
    BFS that processes nodes level-by-level.
    Returns a list of levels, where each level is a list of nodes.
    """
    visited = set([start])
    queue = deque([start])
    levels = []

    while queue:
        level_size = len(queue)  # Number of nodes at current level
        current_level = []
        # Process all nodes at current level
        for _ in range(level_size):
            node = queue.popleft()
            current_level.append(node)
            # Add next level's nodes
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        levels.append(current_level)
    return levels
## Example: Binary tree level order traversal
## LeetCode 102: Binary Tree Level Order Traversal
def levelOrder(root):
    if not root:
        return []
    result = []
    queue = deque([root])
    while queue:
        level_size = len(queue)
        current_level = []
        for _ in range(level_size):
            node = queue.popleft()
            current_level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(current_level)
    return result
The magic happens on this line: level_size = len(queue). By capturing the queue size before processing, we know exactly how many nodes belong to the current level. We process that exact number of nodes, and all their neighbors form the next level.
Visualization of Level Processing:
Tree: 1
/ \
2 3
/ \ /
4 5 6
Queue States:
Level 0: [1] -> level_size = 1, process [1]
Level 1: [2, 3] -> level_size = 2, process [2, 3]
Level 2: [4, 5, 6] -> level_size = 3, process [4, 5, 6]
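The same tree can be run through a condensed version of the level-order template above. Here the tree is encoded as a plain adjacency list keyed by node value (an encoding chosen only for this illustration):

```python
from collections import deque

def bfs_level_order(graph, start):
    """Condensed level-order BFS: one list per level."""
    visited, queue, levels = {start}, deque([start]), []
    while queue:
        current_level = []
        for _ in range(len(queue)):          # snapshot of this level's size
            node = queue.popleft()
            current_level.append(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        levels.append(current_level)
    return levels

tree = {1: [2, 3], 2: [4, 5], 3: [6], 4: [], 5: [], 6: []}
print(bfs_level_order(tree, 1))   # [[1], [2, 3], [4, 5, 6]]
```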
Common BFS Patterns in LeetCode
Let's explore the most frequently appearing BFS patterns and when to use each one.
Pattern 1: Shortest Path in Unweighted Graph
Use when: Finding minimum distance, minimum steps, or shortest transformation sequence.
Examples: LeetCode 127 (Word Ladder), LeetCode 1091 (Shortest Path in Binary Matrix)
Template:
def shortest_path(start, target):
    queue = deque([(start, 0)])  # (state, steps)
    visited = set([start])
    while queue:
        state, steps = queue.popleft()
        if state == target:
            return steps
        for next_state in get_neighbors(state):
            if next_state not in visited:
                visited.add(next_state)
                queue.append((next_state, steps + 1))
    return -1  # No path found
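The template's `get_neighbors` is always problem-specific. As a tiny worked instance (a made-up puzzle, not any particular LeetCode problem), here the "state" is an integer and the allowed moves are +1 and ×2, bounded by the target:

```python
from collections import deque

def min_ops(start, target):
    """Fewest +1 / *2 operations to reach target from start (assumes start <= target)."""
    queue = deque([(start, 0)])          # (state, steps)
    visited = {start}
    while queue:
        state, steps = queue.popleft()
        if state == target:
            return steps
        for nxt in (state + 1, state * 2):   # the two legal "moves"
            if nxt <= target and nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, steps + 1))
    return -1

print(min_ops(2, 12))   # 3: 2 -> 3 -> 6 -> 12
```

Because BFS explores states in increasing step count, the first time `target` is dequeued the step count is guaranteed minimal.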
Pattern 2: Multi-Source BFS
Multi-source BFS is a powerful variation where you start from multiple sources simultaneously. Instead of adding one starting node to the queue, you add all source nodes at once. This is like dropping multiple stones in water—all ripples expand together.
Use when: Problems mention "nearest," "closest," or involve spreading from multiple points (like virus spread, rotting oranges, or finding distance to nearest exit).
Example: LeetCode 994 (Rotting Oranges)
from collections import deque
def orangesRotting(grid):
    """
    Find minimum time for all oranges to rot.
    Rotten oranges spread to adjacent fresh oranges each minute.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque()
    fresh_count = 0

    # Add all initially rotten oranges to queue
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:          # Rotten
                queue.append((r, c, 0))  # (row, col, time)
            elif grid[r][c] == 1:        # Fresh
                fresh_count += 1

    if fresh_count == 0:
        return 0

    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    max_time = 0

    while queue:
        r, c, time = queue.popleft()
        max_time = max(max_time, time)
        for dr, dc in directions:
            nr, nc = r + dr, c + dc
            # If adjacent orange is fresh, it becomes rotten
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                grid[nr][nc] = 2  # Mark as rotten
                fresh_count -= 1
                queue.append((nr, nc, time + 1))

    return max_time if fresh_count == 0 else -1
💡 Mental Model: Multi-source BFS is like having multiple starting points in a race. All runners start simultaneously, and we track how they all spread out together.
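The same seeding idea appears in a sketch of LeetCode 994's sibling problem, LeetCode 542 (01 Matrix): every 0-cell enters the queue before the loop starts, so one BFS wave computes each cell's distance to its nearest zero:

```python
from collections import deque

def nearest_zero(grid):
    """Multi-source BFS: distance from every cell to the nearest 0."""
    rows, cols = len(grid), len(grid[0])
    dist = [[-1] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):                 # seed the queue with ALL sources at once
        for c in range(cols):
            if grid[r][c] == 0:
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

print(nearest_zero([[0, 0, 0], [0, 1, 0], [1, 1, 1]]))
# [[0, 0, 0], [0, 1, 0], [1, 2, 1]]
```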
Pattern 3: Level-Order Traversal with Level Information
Use when: You need to perform operations based on the level number, or return results organized by level.
Examples: LeetCode 102 (Binary Tree Level Order Traversal), LeetCode 103 (Binary Tree Zigzag Level Order Traversal)
We've already seen this pattern above—the key is capturing len(queue) before processing each level.
Pattern 4: BFS with State Transformation
Use when: Each "node" represents a state in a problem, and edges represent valid state transitions.
Examples: LeetCode 752 (Open the Lock), LeetCode 773 (Sliding Puzzle)
In these problems, you're not traversing a traditional graph. Instead, you're exploring a state space where each state can transform into other states.
from collections import deque

def openLock(deadends, target):
    """
    LeetCode 752: Minimum turns to open a lock (0000 to target).
    Each turn can increment or decrement any of 4 wheels.
    """
    if "0000" in deadends:
        return -1
    if target == "0000":
        return 0

    dead = set(deadends)
    visited = set(["0000"])
    queue = deque([("0000", 0)])  # (state, turns)

    def get_neighbors(state):
        """Generate all possible next states (8 neighbors)"""
        neighbors = []
        for i in range(4):
            digit = int(state[i])
            # Turn wheel up and down
            for diff in [-1, 1]:
                new_digit = (digit + diff) % 10
                new_state = state[:i] + str(new_digit) + state[i+1:]
                neighbors.append(new_state)
        return neighbors

    while queue:
        state, turns = queue.popleft()
        for next_state in get_neighbors(state):
            # Skip deadends BEFORE the target check: an unreachable
            # target that is itself a deadend must return -1
            if next_state in dead or next_state in visited:
                continue
            if next_state == target:
                return turns + 1
            visited.add(next_state)
            queue.append((next_state, turns + 1))
    return -1
🤔 Did you know? State-space BFS problems often have enormous implicit graphs. The lock problem has 10,000 possible states (0000-9999), but we never store the entire graph—we generate neighbors on the fly!
Bidirectional BFS: A Powerful Optimization
Bidirectional BFS is an advanced optimization that can dramatically reduce search time when you know both the start and end points. Instead of searching from start to target, you search from both ends simultaneously and meet in the middle.
Why it's faster: If regular BFS explores b^d nodes (where b is branching factor and d is depth), bidirectional BFS explores approximately 2 * b^(d/2) nodes. For large branching factors, this is exponentially better!
Regular BFS: Bidirectional BFS:
Start → → → Target Start → → ← ← Target
\ /
Meet here!
Nodes explored: b^d Nodes explored: 2*b^(d/2)
💡 Real-World Example: Imagine finding a path between two people on LinkedIn. Regular BFS explores all your connections (1st degree), then their connections (2nd degree), expanding rapidly. Bidirectional BFS searches from both people simultaneously, meeting in the middle with far fewer nodes explored.
Implementation pattern:
def bidirectional_bfs(start, target, get_neighbors):
    """
    Bidirectional BFS to find shortest path.

    Args:
        start: starting node
        target: target node
        get_neighbors: function that returns neighbors of a node

    Returns:
        shortest path length, or -1 if no path exists
    """
    if start == target:
        return 0

    # Two frontiers: searching from start and from target
    front_start = {start}
    front_target = {target}
    # Visited sets for both directions
    visited_start = {start}
    visited_target = {target}
    steps = 0

    while front_start and front_target:
        steps += 1
        # Always expand the smaller frontier (optimization)
        if len(front_start) > len(front_target):
            front_start, front_target = front_target, front_start
            visited_start, visited_target = visited_target, visited_start

        # Expand the smaller frontier
        next_front = set()
        for node in front_start:
            for neighbor in get_neighbors(node):
                # Check if we've met the other search
                if neighbor in visited_target:
                    return steps
                if neighbor not in visited_start:
                    visited_start.add(neighbor)
                    next_front.add(neighbor)
        front_start = next_front

    return -1  # No path found
⚠️ Common Mistake 2: Not always expanding the smaller frontier. This optimization ensures we minimize the total number of nodes explored. ⚠️
🎯 Key Principle: Bidirectional BFS is most effective when the branching factor is high and the target is known. It's overkill for simple tree traversals but invaluable for complex state-space searches.
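To see the two frontiers meet, here is a condensed version of the pattern above, exercised on a 10-node ring graph (the graph is chosen purely for illustration):

```python
def bidirectional_bfs(start, target, get_neighbors):
    """Condensed bidirectional BFS returning shortest path length."""
    if start == target:
        return 0
    front_a, front_b = {start}, {target}
    seen_a, seen_b = {start}, {target}
    steps = 0
    while front_a and front_b:
        steps += 1
        if len(front_a) > len(front_b):      # always expand the smaller side
            front_a, front_b = front_b, front_a
            seen_a, seen_b = seen_b, seen_a
        next_front = set()
        for node in front_a:
            for neighbor in get_neighbors(node):
                if neighbor in seen_b:       # the two searches met
                    return steps
                if neighbor not in seen_a:
                    seen_a.add(neighbor)
                    next_front.add(neighbor)
        front_a = next_front
    return -1

# 10 nodes arranged in a ring: neighbors of n are n-1 and n+1 (mod 10)
ring = lambda n: [(n - 1) % 10, (n + 1) % 10]
print(bidirectional_bfs(0, 5, ring))   # 5 (target sits on the opposite side)
```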
Time and Space Complexity Analysis
Understanding BFS complexity helps you predict performance and choose the right algorithm:
📋 Quick Reference Card:
| Metric | Complexity | Explanation |
|---|---|---|
| ⏱️ Time | O(V + E) | Visit each vertex once, explore each edge once |
| 💾 Space | O(V) | Queue can contain up to all vertices in worst case |
| 🎯 Shortest Path | Guaranteed | In unweighted graphs only |
| 📊 Optimal When | Shallow solutions | BFS excels when target is close to start |
Detailed breakdown:
- V = number of vertices (nodes)
- E = number of edges
- Each node is enqueued and dequeued exactly once: O(V)
- Each edge is examined exactly once: O(E)
- Total: O(V + E)
Space complexity considerations:
🔧 Queue space: In the worst case (complete graph, exploring last level), the queue could contain O(V) nodes
🔧 Visited set: Always requires O(V) space to track all visited nodes
🔧 Level-order BFS: May require O(W) space where W is the maximum width of any level (in trees, this is at most V/2)
💡 Pro Tip: If space is critical and you only need to know if a path exists (not the path itself), you might consider iterative deepening DFS as an alternative with O(d) space where d is depth.
BFS vs DFS: When to Choose BFS
Knowing when to use BFS over DFS is crucial for LeetCode success:
✅ Choose BFS when:
- Finding shortest path in unweighted graphs
- Need level-order information
- Solution is likely shallow (close to start)
- Need minimum steps or minimum transformations
- Multiple sources spreading simultaneously
❌ Don't choose BFS when:
- Graph is very deep and solution is far from start (memory intensive)
- You need to explore all paths (DFS with backtracking is better)
- Problem involves combinations or permutations
- You're checking connectivity and path length doesn't matter (DFS is simpler)
💡 Remember: BFS uses more memory than DFS but guarantees shortest paths. DFS uses less memory but may find longer paths first.
Practical Tips for LeetCode BFS Problems
🔧 Use collections.deque: Python's deque provides O(1) append and popleft operations. Using a list with pop(0) is O(n)!
🔧 Mark visited early: Add nodes to visited when you enqueue them, not when you dequeue them. This prevents duplicates in the queue.
🔧 Handle visited smartly: For grids, you can modify the grid in-place instead of using a separate visited set (if modification is allowed).
🔧 Count steps correctly: Use either (node, distance) tuples or level-by-level processing. Don't try to track distance with a separate variable—it's error-prone.
🔧 Validate neighbors: Always check bounds, obstacles, and visited status before adding to queue.
⚠️ Common Mistake 3: Forgetting to check if start equals target before starting BFS. This simple check saves unnecessary work! ⚠️
Summary: The Power of BFS
Breadth-First Search is your go-to algorithm whenever you see keywords like "shortest," "minimum steps," or "nearest" in unweighted scenarios. Its queue-based, level-by-level exploration guarantees that the first time you reach any node, you've found the shortest path to it.
The key patterns to master are:
🧠 Basic BFS with queue and visited set 🧠 Level-order processing by capturing queue length 🧠 Multi-source BFS for simultaneous spreading 🧠 State-space BFS for transformation problems 🧠 Bidirectional BFS for optimization when target is known
With these patterns in your toolkit, you're equipped to tackle a wide range of LeetCode graph problems. The next section will show you how to recognize which pattern applies to specific problems and provide real LeetCode examples to practice on.
🧠 Mnemonic: "BFS = Breadth First = Best For Shortest" (in unweighted graphs)
Practical Implementation Patterns for LeetCode
Now that we understand the mechanics of DFS and BFS, let's transform that knowledge into practical, reusable code patterns you can deploy on LeetCode problems. The difference between knowing an algorithm and solving interview problems is having battle-tested templates at your fingertips and recognizing which pattern a problem requires. This section gives you both: the templates and the pattern recognition skills.
Building Your DFS Template Library
The key to mastering DFS problems is having a mental template you can adapt quickly. Let's build two essential DFS templates: recursive and iterative. Both accomplish the same traversal, but each has specific use cases where it shines.
Recursive DFS Template
The recursive approach is the most natural way to think about DFS because it leverages the call stack to manage our exploration path. Here's a comprehensive template that handles most graph scenarios:
def dfs_recursive(graph, start):
    visited = set()  # Track visited nodes to avoid cycles
    result = []      # Store whatever you're collecting

    def dfs(node):
        # Base case: already visited
        if node in visited:
            return
        # Mark as visited BEFORE recursive calls
        visited.add(node)
        # Process current node (pre-order position)
        result.append(node)
        # Explore all neighbors
        for neighbor in graph[node]:
            dfs(neighbor)
        # Post-processing happens here (post-order position)
        # Useful for: topological sort, finding strongly connected components

    dfs(start)
    return result
🎯 Key Principle: The placement of visited.add(node) matters enormously. Marking the node before the recursive calls prevents infinite loops in cyclic graphs; marking it after the calls would let a cycle re-enter the node before it is marked, causing infinite recursion.
⚠️ Common Mistake 1: Omitting the if node in visited base case at the top of the recursive function. On a cyclic graph this causes infinite recursion (a RecursionError); on a DAG with shared descendants it causes exponential re-processing of the same nodes. ⚠️
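Running a compact copy of the template on a small made-up graph shows the pre-order collection, and that the visited check lets it terminate even on a cycle:

```python
def dfs_recursive(graph, start):
    visited, result = set(), []

    def dfs(node):
        if node in visited:          # base case stops cycles
            return
        visited.add(node)            # mark BEFORE recursing
        result.append(node)          # pre-order: collect on the way down
        for neighbor in graph[node]:
            dfs(neighbor)

    dfs(start)
    return result

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': ['C']}
print(dfs_recursive(graph, 'A'))    # ['A', 'B', 'D', 'C']
cyclic = {'X': ['Y'], 'Y': ['X']}
print(dfs_recursive(cyclic, 'X'))   # ['X', 'Y'] - terminates despite the cycle
```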
Iterative DFS Template
The iterative approach uses an explicit stack instead of the call stack. This is crucial when dealing with deep graphs where recursion might cause stack overflow:
def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    result = []

    while stack:
        node = stack.pop()  # LIFO: Last In, First Out
        if node in visited:
            continue
        visited.add(node)
        result.append(node)
        # Add neighbors to stack (reverse order for same traversal as recursive)
        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append(neighbor)
    return result
💡 Pro Tip: Notice we check if node in visited after popping from the stack in the iterative version, but before processing in the recursive version. This subtle difference is because we might add the same node to the stack multiple times before processing it, but in recursion, we control when we make the call.
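A compact copy of the iterative template, run on the same small made-up graph used to illustrate the recursive version, produces the identical visit order thanks to the reversed() trick:

```python
def dfs_iterative(graph, start):
    visited, stack, result = set(), [start], []
    while stack:
        node = stack.pop()                   # LIFO
        if node in visited:                  # a node may be stacked twice
            continue
        visited.add(node)
        result.append(node)
        for neighbor in reversed(graph[node]):   # reverse to mimic recursion
            if neighbor not in visited:
                stack.append(neighbor)
    return result

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': ['C']}
print(dfs_iterative(graph, 'A'))   # ['A', 'B', 'D', 'C'] - same as recursive
```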
Building Your BFS Template Library
BFS shines when you need to find shortest paths or process nodes level by level. The fundamental difference from DFS is using a queue (FIFO) instead of a stack (LIFO).
Standard BFS Template with Distance Tracking
This template is your go-to for shortest path problems:
from collections import deque
def bfs_with_distance(graph, start):
    visited = set([start])
    queue = deque([(start, 0)])  # (node, distance) pairs
    distances = {start: 0}

    while queue:
        node, dist = queue.popleft()  # FIFO: First In, First Out
        # Process current node
        # You can access dist to know how far from start
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                distances[neighbor] = dist + 1
                queue.append((neighbor, dist + 1))
    return distances
🎯 Key Principle: In BFS, mark nodes as visited when you add them to the queue, not when you pop them. This prevents adding the same node multiple times and is crucial for correctness.
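A condensed copy of the distance-tracking template, run on a small diamond-shaped graph (made up for illustration), shows each node labeled with its hop count from the start:

```python
from collections import deque

def bfs_with_distance(graph, start):
    visited = {start}
    queue = deque([(start, 0)])          # (node, distance)
    distances = {start: 0}
    while queue:
        node, dist = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in visited:  # mark on enqueue, not dequeue
                visited.add(neighbor)
                distances[neighbor] = dist + 1
                queue.append((neighbor, dist + 1))
    return distances

graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(bfs_with_distance(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```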
Level-Order BFS Template
Some problems require processing all nodes at the same distance together (like tree level-order traversal):
from collections import deque

def bfs_level_order(graph, start):
    visited = set([start])
    queue = deque([start])
    levels = []

    while queue:
        level_size = len(queue)  # Snapshot of current level size
        current_level = []
        # Process entire level
        for _ in range(level_size):
            node = queue.popleft()
            current_level.append(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        levels.append(current_level)
    return levels
💡 Mental Model: Think of BFS as a wave spreading outward from the source. Each "wave" is a level, and level_size = len(queue) captures exactly one wave before the next wave starts forming.
Problem Identification: DFS vs BFS
Recognizing which algorithm to use is half the battle. Here's your decision framework:
Use DFS when:
- 🔧 You need to explore all paths or check if a path exists
- 🔧 The problem involves backtracking (trying different choices)
- 🔧 You're looking for any solution, not necessarily the shortest
- 🔧 The graph is a tree or you need topological sorting
- 🔧 You're doing cycle detection in directed graphs
Use BFS when:
- 🔧 You need the shortest path in an unweighted graph
- 🔧 You need to process nodes level by level
- 🔧 You're finding minimum steps/moves to reach a target
- 🔧 The graph might be very deep (avoid stack overflow)
- 🔧 You want to find all nodes within k distance
📋 Quick Reference Card:
| 🎯 Problem Keyword | 🔍 Algorithm | 💭 Reason |
|---|---|---|
| "shortest path" | BFS | Guarantees minimum steps |
| "minimum steps" | BFS | Same as shortest path |
| "find all paths" | DFS | Need to explore exhaustively |
| "can reach" | DFS or BFS | Either works, DFS simpler |
| "level order" | BFS | Natural level tracking |
| "topological sort" | DFS | Post-order processing |
| "cycle detection" | DFS | Track recursion stack |
| "all nodes at distance k" | BFS | Distance tracking built-in |
Graph Construction from Various Input Formats
Before applying DFS or BFS, you often need to build the graph from input data. Let's master the common formats:
Adjacency List from Edge List
## Input: edges = [[0,1], [1,2], [2,0]]
def build_adjacency_list(n, edges, directed=True):
    graph = {i: [] for i in range(n)}
    for u, v in edges:
        graph[u].append(v)
        if not directed:
            graph[v].append(u)  # Add reverse edge for undirected
    return graph
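A quick check of both modes on the sample edge list makes the directed/undirected distinction visible:

```python
def build_adjacency_list(n, edges, directed=True):
    """Build {node: [neighbors]} from an edge list."""
    graph = {i: [] for i in range(n)}
    for u, v in edges:
        graph[u].append(v)
        if not directed:
            graph[v].append(u)   # reverse edge for undirected graphs
    return graph

edges = [[0, 1], [1, 2], [2, 0]]
print(build_adjacency_list(3, edges))                  # {0: [1], 1: [2], 2: [0]}
print(build_adjacency_list(3, edges, directed=False))  # each edge stored twice
```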
Implicit Graphs from Grids
Many LeetCode problems present grids where cells are implicitly nodes:
def get_neighbors(grid, row, col):
    """Get valid neighboring cells (4-directional)"""
    rows, cols = len(grid), len(grid[0])
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # right, down, left, up
    neighbors = []
    for dr, dc in directions:
        new_row, new_col = row + dr, col + dc
        if 0 <= new_row < rows and 0 <= new_col < cols:
            neighbors.append((new_row, new_col))
    return neighbors
💡 Pro Tip: For grid problems, you can use the direction vectors [(0,1), (1,0), (0,-1), (-1,0)] to avoid writing four separate if-statements. This pattern appears in countless problems.
⚠️ Common Mistake 2: Forgetting to check bounds when accessing grid neighbors. Always validate 0 <= row < len(grid) and 0 <= col < len(grid[0]). ⚠️
Classic LeetCode Problem Walkthroughs
Let's apply our templates to real problems. These four examples cover the most common patterns you'll encounter.
Problem 1: Number of Islands (LeetCode #200)
Problem: Given a 2D grid of '1's (land) and '0's (water), count the number of islands. An island is surrounded by water and formed by connecting adjacent lands horizontally or vertically.
Pattern Recognition: This is a connected components problem on an implicit grid graph. We need to explore each island completely, making DFS or BFS both valid. DFS is slightly simpler.
Solution Approach:
- Iterate through each cell in the grid
- When we find unvisited land ('1'), increment island count
- Use DFS to mark all connected land cells as visited
- Continue until all cells are processed
def numIslands(grid):
    if not grid:
        return 0

    rows, cols = len(grid), len(grid[0])
    visited = set()
    islands = 0

    def dfs(r, c):
        # Boundary check and visited check
        if (r < 0 or r >= rows or c < 0 or c >= cols or
                (r, c) in visited or grid[r][c] == '0'):
            return
        visited.add((r, c))
        # Explore all 4 directions
        dfs(r + 1, c)  # down
        dfs(r - 1, c)  # up
        dfs(r, c + 1)  # right
        dfs(r, c - 1)  # left

    # Main loop: check every cell
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1' and (r, c) not in visited:
                islands += 1
                dfs(r, c)  # Mark entire island as visited
    return islands
🧠 Mnemonic: Think "Find one, flood fill" - find one land cell, then flood fill the entire island.
Complexity Analysis:
- Time: O(rows × cols) - we visit each cell at most once
- Space: O(rows × cols) - for the visited set and recursion stack in worst case
💡 Optimization Tip: You can modify the grid in-place (changing '1' to '0') instead of using a visited set. Note that this eliminates only the set's O(rows × cols) space; the recursion stack can still grow to O(rows × cols) in the worst case (for example, a single snake-shaped island).
Problem 2: Course Schedule (LeetCode #207)
Problem: There are numCourses courses labeled 0 to numCourses-1. You're given prerequisites array where prerequisites[i] = [ai, bi] indicates you must take course bi before course ai. Return true if you can finish all courses.
Pattern Recognition: This is a cycle detection problem in a directed graph. If there's a cycle in the prerequisite graph, it's impossible to complete all courses (circular dependency). This requires DFS with recursion stack tracking.
Solution Approach:
- Build adjacency list from prerequisites
- Use DFS with three states: unvisited, visiting (in current path), visited (fully processed)
- If we encounter a "visiting" node during DFS, we've found a cycle
- If all DFS explorations complete without cycles, return true
def canFinish(numCourses, prerequisites):
    # Build adjacency list
    graph = {i: [] for i in range(numCourses)}
    for course, prereq in prerequisites:
        graph[prereq].append(course)

    # Three states: 0 = unvisited, 1 = visiting, 2 = visited
    state = [0] * numCourses

    def has_cycle(course):
        if state[course] == 1:  # Currently visiting - cycle detected!
            return True
        if state[course] == 2:  # Already fully processed
            return False

        # Mark as visiting
        state[course] = 1
        # Check all neighbors (courses that depend on this one)
        for next_course in graph[course]:
            if has_cycle(next_course):
                return True
        # Mark as visited (fully processed)
        state[course] = 2
        return False

    # Check each course (graph might be disconnected)
    for course in range(numCourses):
        if has_cycle(course):
            return False
    return True
🎯 Key Principle: The three-state system is crucial. "Visiting" (state 1) represents nodes in the current DFS path. If we reach a "visiting" node again, we've found a cycle because we've looped back to our current path.
Visual Representation:
Course Dependencies: 1->0, 2->1, 3->2, 3->1
3 ──┐
│ │
↓ ↓
2 1
│ │
└──→0
DFS from 3:
3 (visiting)
↓
2 (visiting)
↓
1 (visiting)
↓
0 (visiting → visited)
← back to 1 (visiting → visited)
← back to 2, but 1 already visited ✓
← back to 3 (visiting → visited)
No cycle detected - return True
Complexity Analysis:
- Time: O(V + E) where V = numCourses, E = prerequisites length
- Space: O(V + E) for graph and recursion stack
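A condensed copy of the solution, run on the two simplest possible inputs, confirms both outcomes (solvable schedule vs. circular dependency):

```python
def canFinish(numCourses, prerequisites):
    graph = {i: [] for i in range(numCourses)}
    for course, prereq in prerequisites:
        graph[prereq].append(course)
    state = [0] * numCourses             # 0 = unvisited, 1 = visiting, 2 = visited

    def has_cycle(course):
        if state[course] == 1:           # back edge into the current path: cycle
            return True
        if state[course] == 2:
            return False
        state[course] = 1
        if any(has_cycle(nxt) for nxt in graph[course]):
            return True
        state[course] = 2
        return False

    return not any(has_cycle(c) for c in range(numCourses))

print(canFinish(2, [[1, 0]]))           # True: take 0, then 1
print(canFinish(2, [[1, 0], [0, 1]]))   # False: 0 and 1 require each other
```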
Problem 3: Word Ladder (LeetCode #127)
Problem: Given two words, beginWord and endWord, and a dictionary wordList, find the length of the shortest transformation sequence from beginWord to endWord, where each transformed word must exist in wordList and you can only change one letter at a time.
Pattern Recognition: The keywords "shortest" and "minimum" immediately suggest BFS. We need to find the minimum number of transformations, which is a shortest path problem in an implicit graph where words are nodes and edges connect words differing by one letter.
Solution Approach:
- Build a graph (or use implicit neighbors by checking all one-letter differences)
- Use BFS starting from beginWord
- Track distance/level to know transformation count
- Return distance when we reach endWord
from collections import deque

def ladderLength(beginWord, endWord, wordList):
    # Convert to set for O(1) lookup
    word_set = set(wordList)
    if endWord not in word_set:
        return 0

    queue = deque([(beginWord, 1)])  # (word, level)
    visited = {beginWord}

    while queue:
        word, level = queue.popleft()
        # Try changing each character
        for i in range(len(word)):
            # Try all 26 possible letters
            for c in 'abcdefghijklmnopqrstuvwxyz':
                next_word = word[:i] + c + word[i+1:]
                # Found the target!
                if next_word == endWord:
                    return level + 1
                # Valid intermediate word
                if next_word in word_set and next_word not in visited:
                    visited.add(next_word)
                    queue.append((next_word, level + 1))

    return 0  # No path found
💡 Pro Tip: Instead of pre-building the entire graph (which is expensive), we generate neighbors on-the-fly by trying all possible one-letter changes. This is an example of an implicit graph where edges are determined by a rule rather than explicitly stored.
Why BFS Works Here:
beginWord: "hit"
endWord: "cog"
wordList: ["hot","dot","dog","lot","log","cog"]
BFS Exploration:
Level 1: hit
Level 2: hot (change i→o)
Level 3: dot, lot (from hot)
Level 4: dog, log (from dot, lot)
Level 5: cog (from dog or log) ← Shortest path found!
Answer: 5 (the length of the sequence hit → hot → dot → dog → cog, i.e., 4 letter changes)
BFS guarantees we find "cog" at the earliest possible level, giving us the shortest transformation sequence.
Complexity Analysis:
- Time: O(M² × N) where M = word length, N = wordList size
- For each word, we try M positions × 26 letters = O(M × 26)
- Creating each new word string is O(M)
- We process at most N words
- Space: O(N) for visited set and queue
⚠️ Common Mistake 3: Forgetting to check if endWord exists in wordList before starting BFS. This causes unnecessary computation or wrong answers. ⚠️
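A common speed-up worth knowing (optional, not required for correctness) is to pre-bucket words by wildcard patterns such as h*t, so BFS looks up neighbors directly instead of testing 26 candidate letters per position against the set. A sketch under that assumption; the name ladderLength_buckets is illustrative:

```python
from collections import defaultdict, deque

def ladderLength_buckets(beginWord, endWord, wordList):
    word_set = set(wordList)
    if endWord not in word_set:
        return 0

    # Bucket every word under each of its wildcard patterns, e.g. "hot" -> "*ot", "h*t", "ho*"
    buckets = defaultdict(list)
    for word in word_set | {beginWord}:
        for i in range(len(word)):
            buckets[word[:i] + '*' + word[i+1:]].append(word)

    queue = deque([(beginWord, 1)])
    visited = {beginWord}
    while queue:
        word, level = queue.popleft()
        if word == endWord:
            return level
        for i in range(len(word)):
            # All neighbors share one of this word's wildcard patterns
            for nb in buckets[word[:i] + '*' + word[i+1:]]:
                if nb not in visited:
                    visited.add(nb)
                    queue.append((nb, level + 1))
    return 0

print(ladderLength_buckets("hit", "cog", ["hot","dot","dog","lot","log","cog"]))  # 5
```

The asymptotic complexity stays O(M² × N), but the bucket lookup replaces repeated set-membership probing of 26 generated strings per position.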
Problem 4: Clone Graph (LeetCode #133)
Problem: Given a reference to a node in a connected undirected graph, return a deep copy of the graph. Each node contains a value and a list of neighbors.
Pattern Recognition: This requires graph traversal (DFS or BFS) with a mapping from original nodes to cloned nodes. Either algorithm works; DFS is slightly more elegant.
Solution Approach:
- Create a hash map to store original → clone mappings
- Use DFS to traverse the graph
- For each node, create a clone and recursively clone all neighbors
- Use the hash map to avoid infinite loops and duplicate work
class Node:
    def __init__(self, val=0, neighbors=None):
        self.val = val
        self.neighbors = neighbors if neighbors is not None else []

def cloneGraph(node):
    if not node:
        return None

    # Map original nodes to their clones
    cloned = {}

    def dfs(original):
        # Already cloned this node
        if original in cloned:
            return cloned[original]
        # Create clone
        clone = Node(original.val)
        cloned[original] = clone  # Store BEFORE recursive calls!
        # Clone all neighbors
        for neighbor in original.neighbors:
            clone.neighbors.append(dfs(neighbor))
        return clone

    return dfs(node)
🎯 Key Principle: We must store the clone in the hash map before recursively cloning neighbors. This prevents infinite loops in cyclic graphs because when we encounter the same node again through a different path, we'll find it already in the map.
Visual Example:
Original Graph: Cloning Process:
1 ─── 2 1' ─── 2'
│ │ │ │
└─ 3 ─┘ └─ 3' ─┘
Step 1: Clone node 1 → 1'
Step 2: Recursively clone neighbors of 1 (2, 3)
Step 3: Clone node 2 → 2' (add to 1'.neighbors)
Step 4: Recursively clone neighbors of 2 (1, 3)
- 1 already in map → use 1'
- 3 not in map yet → clone it
Step 5: Clone node 3 → 3'
...
Complexity Analysis:
- Time: O(N + E) where N = number of nodes, E = number of edges
- Space: O(N) for the hash map and recursion stack
Template Selection Flowchart
Now that you've seen these patterns in action, here's a decision tree for quick template selection:
Start: Read problem statement
|
├─ "shortest/minimum/fewest" → BFS with distance
|
├─ "all paths/find any path" → DFS recursive
|
├─ "level by level/distance k" → BFS level-order
|
├─ "cycle detection" → DFS with 3 states
|
├─ "topological sort" → DFS post-order
|
├─ "connected components" → DFS or BFS (either works)
|
└─ "clone/copy graph" → DFS/BFS with hash map
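For the "connected components" branch, the recipe is simply: run a search from every unvisited node and count how many times you have to start over. A minimal sketch (the function name and graph shape are illustrative, assuming an undirected adjacency list):

```python
def count_components(graph):
    """Count connected components in an undirected adjacency-list graph."""
    visited = set()
    count = 0
    for start in graph:
        if start in visited:
            continue
        count += 1            # a fresh start means a new component
        stack = [start]       # iterative DFS over this component
        while stack:
            node = stack.pop()
            if node in visited:
                continue
            visited.add(node)
            stack.extend(graph[node])
    return count

print(count_components({0: [1], 1: [0], 2: [3], 3: [2], 4: []}))  # 3
```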
Advanced Pattern: Bidirectional BFS
For problems like Word Ladder where both start and end are known, bidirectional BFS can dramatically improve performance:
💡 Mental Model: Instead of one wave expanding from the start, imagine two waves expanding from both start and end simultaneously. When they meet, you've found the shortest path.
def ladderLength_bidirectional(beginWord, endWord, wordList):
    word_set = set(wordList)
    if endWord not in word_set:
        return 0

    # Two frontiers
    front = {beginWord}
    back = {endWord}
    word_set.discard(beginWord)
    word_set.discard(endWord)
    distance = 1

    while front and back:
        # Always expand the smaller frontier
        if len(front) > len(back):
            front, back = back, front

        next_front = set()
        for word in front:
            for i in range(len(word)):
                for c in 'abcdefghijklmnopqrstuvwxyz':
                    next_word = word[:i] + c + word[i+1:]
                    # Frontiers met!
                    if next_word in back:
                        return distance + 1
                    if next_word in word_set:
                        next_front.add(next_word)
                        word_set.discard(next_word)
        front = next_front
        distance += 1

    return 0
🤔 Did you know? Bidirectional BFS can reduce time complexity from O(b^d) to O(b^(d/2)) where b is the branching factor and d is the distance, because you're essentially searching two trees of depth d/2 instead of one tree of depth d.
Putting It All Together
You now have a comprehensive toolkit for tackling graph problems on LeetCode:
🧠 Reusable Templates:
- DFS recursive (for path finding, cycle detection)
- DFS iterative (for deep graphs)
- BFS with distance (for shortest paths)
- BFS level-order (for level processing)
🔧 Recognition Skills:
- Keyword-based algorithm selection
- Input format → graph construction
- Problem constraints → optimization choices
📚 Proven Solutions:
- Number of Islands (connected components)
- Course Schedule (cycle detection)
- Word Ladder (shortest path in implicit graph)
- Clone Graph (traversal with mapping)
✅ Correct thinking: When you encounter a new graph problem, first identify the pattern (shortest path? all paths? cycle?), then select your template, finally adapt it to the specific input format and requirements.
❌ Wrong thinking: Trying to write graph traversal code from scratch each time, leading to bugs in visited tracking, boundary conditions, or choosing the wrong algorithm entirely.
The next time you face a graph problem, pause and ask: "What am I really looking for?" The answer will point you to the right template, and from there, it's just a matter of careful implementation.
Common Pitfalls and Best Practices
Mastering graph algorithms isn't just about understanding the theory—it's about avoiding the subtle bugs and performance issues that can turn a correct approach into a Time Limit Exceeded (TLE) or Wrong Answer verdict. After reviewing thousands of failed submissions, patterns emerge: certain mistakes appear again and again, even among experienced developers. This section will arm you with the awareness and defensive coding practices to sidestep these common traps.
Pitfall 1: Forgetting to Mark Nodes as Visited
⚠️ Common Mistake: The Infinite Loop Disaster ⚠️
The single most frequent mistake in graph traversal is failing to track visited nodes, leading to infinite loops that cause TLE. This happens because graphs often contain cycles, and without a visited set, your algorithm will traverse the same nodes repeatedly, never terminating.
Consider this flawed DFS implementation:
# ❌ BUGGY CODE - DO NOT USE
def dfs_buggy(graph, node):
    # Missing visited tracking!
    for neighbor in graph[node]:
        dfs_buggy(graph, neighbor)

# Graph with cycle: 0 -> 1 -> 2 -> 0
graph = {0: [1], 1: [2], 2: [0]}
dfs_buggy(graph, 0)  # Infinite recursion!
This code will recurse forever: 0 → 1 → 2 → 0 → 1 → 2 → 0... until you hit a stack overflow or TLE.
✅ Correct approach with visited tracking:
def dfs_correct(graph, node, visited):
    if node in visited:
        return
    visited.add(node)  # Mark as visited BEFORE recursing
    for neighbor in graph[node]:
        dfs_correct(graph, neighbor, visited)

# Usage
visited = set()
graph = {0: [1], 1: [2], 2: [0]}
dfs_correct(graph, 0, visited)  # Terminates correctly
🎯 Key Principle: Every node should be visited exactly once. The visited set is your safeguard against cycles and redundant work.
💡 Pro Tip: For grid problems, you can sometimes mark cells as visited by modifying them in-place (e.g., changing '1' to '0' in an islands problem). This saves the memory overhead of a separate visited set, but only works when you don't need the original values later.
Pitfall 2: Incorrect Visited Tracking Timing in BFS
While forgetting visited tracking is catastrophic, marking nodes at the wrong time is a more subtle bug that can cause incorrect results or performance degradation.
⚠️ The Critical Question: Should you mark a node as visited when you add it to the queue or when you pop it from the queue?
❌ Wrong thinking: "I'll mark nodes as visited when I process them (pop from queue)"
✅ Correct thinking: "I should mark nodes as visited when I first discover them (add to queue)"
Here's why this matters:
# ❌ INEFFICIENT - Marking visited too late
from collections import deque

def bfs_inefficient(graph, start):
    queue = deque([start])
    visited = set()
    while queue:
        node = queue.popleft()
        if node in visited:  # Checking here is too late!
            continue
        visited.add(node)  # Marking after popping
        for neighbor in graph[node]:
            if neighbor not in visited:
                queue.append(neighbor)

# ✅ EFFICIENT - Marking visited at discovery time
def bfs_efficient(graph, start):
    queue = deque([start])
    visited = {start}  # Mark start as visited immediately
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)  # Mark BEFORE adding to queue
                queue.append(neighbor)
Why does timing matter? In the inefficient version, the same node can be added to the queue multiple times before it's processed. Consider a graph where node 0 connects to nodes 1 and 2, and both 1 and 2 connect to node 3:
0
/ \
1 2
\ /
3
With late marking, node 3 gets added to the queue twice (once from node 1, once from node 2). For dense graphs, this can explode your queue size and cause TLE.
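You can verify this directly. The sketch below (helper names are illustrative) counts queue pushes on that diamond graph under both strategies:

```python
from collections import deque

# Diamond graph from the text: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}

def enqueues_late_marking(start):
    """BFS that marks visited only when a node is popped; counts pushes."""
    pushes = 0
    queue = deque([start])
    visited = set()
    while queue:
        node = queue.popleft()
        if node in visited:
            continue
        visited.add(node)
        for nb in graph[node]:
            if nb not in visited:
                queue.append(nb)
                pushes += 1
    return pushes

def enqueues_early_marking(start):
    """BFS that marks visited at discovery time; counts pushes."""
    pushes = 0
    queue = deque([start])
    visited = {start}
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
                pushes += 1
    return pushes

print(enqueues_late_marking(0))   # 4 — node 3 is pushed twice
print(enqueues_early_marking(0))  # 3 — each node pushed at most once
```

On this tiny graph the difference is one push; on dense graphs the duplicates compound level by level.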
💡 Mental Model: Think of "visited" as "discovered." As soon as you discover a node exists, mark it to prevent others from discovering it again.
Pitfall 3: Off-by-One Errors in Grid Traversal
Grid problems are a staple of LeetCode graph questions, and boundary checking is where many solutions fail. The consequences range from wrong answers to runtime errors.
⚠️ Common Mistake: Inclusive vs. Exclusive Bounds ⚠️
Consider a grid with dimensions rows × cols. The valid indices are:
- Rows: `0` to `rows - 1` (inclusive)
- Columns: `0` to `cols - 1` (inclusive)
Here's a bulletproof template for 4-directional grid traversal:
def is_valid(grid, row, col):
    """Check if coordinates are within bounds"""
    rows, cols = len(grid), len(grid[0])
    return 0 <= row < rows and 0 <= col < cols

def grid_dfs(grid, row, col, visited):
    # Boundary check AND visited check
    if not is_valid(grid, row, col) or (row, col) in visited:
        return
    # Additional condition checks (e.g., grid[row][col] == '1')
    if grid[row][col] == '0':
        return

    visited.add((row, col))

    # 4-directional movement: up, down, left, right
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for dr, dc in directions:
        new_row, new_col = row + dr, col + dc
        grid_dfs(grid, new_row, new_col, visited)
🎯 Key Principle: Always check bounds BEFORE accessing grid elements to avoid index out of bounds errors.
💡 Pro Tip: Store directions as a list of tuples at the top of your function. This makes the code cleaner and easier to debug. For 8-directional movement (including diagonals), use:
# 8 directions (including diagonals)
directions = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
Common off-by-one mistakes:
- Using `row <= rows` instead of `row < rows`
- Forgetting to check lower bounds (assuming indices can't be negative)
- Using the wrong dimension (checking rows against `len(grid[0])`)
❌ Wrong: if row >= 0 and row <= rows and col >= 0 and col <= cols:
✅ Correct: if 0 <= row < rows and 0 <= col < cols:
Pitfall 4: Stack Overflow in Recursive DFS
Recursive DFS is elegant and intuitive, but for large graphs (10,000+ nodes), you'll encounter stack overflow errors. Python's default recursion limit is around 1,000 calls.
⚠️ Common Mistake: Assuming recursion always works ⚠️
🤔 Did you know? While you can increase Python's recursion limit with sys.setrecursionlimit(), this is considered a code smell in interviews and doesn't solve the underlying problem—it just delays the crash.
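You can see the limit in action with a tiny experiment (the helper names are illustrative, and the error is caught so nothing crashes):

```python
import sys

def depth(n):
    """Linear recursion: uses roughly n stack frames."""
    return 0 if n == 0 else 1 + depth(n - 1)

def exceeds_limit(n):
    """Return True if recursing n deep blows the interpreter's limit."""
    try:
        depth(n)
        return False
    except RecursionError:
        return True

limit = sys.getrecursionlimit()
print(limit)                        # typically 1000 in CPython
print(exceeds_limit(limit + 1000))  # True: deeper than the limit allows
print(exceeds_limit(limit // 2))    # False: comfortably within it
```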
The solution: Convert recursive DFS to iterative DFS using an explicit stack.
Here's a side-by-side comparison:
# Recursive DFS - Risk of stack overflow
def dfs_recursive(graph, node, visited):
    if node in visited:
        return
    visited.add(node)
    print(node)  # Process node
    for neighbor in graph[node]:
        dfs_recursive(graph, neighbor, visited)

# Iterative DFS - Safe for large graphs
def dfs_iterative(graph, start):
    visited = set()
    stack = [start]  # Use a list as stack (LIFO)
    while stack:
        node = stack.pop()  # Pop from end (LIFO behavior)
        if node in visited:
            continue
        visited.add(node)
        print(node)  # Process node
        # Add neighbors to stack
        for neighbor in graph[node]:
            if neighbor not in visited:
                stack.append(neighbor)
    return visited
Key differences:
| Aspect | Recursive | Iterative |
|---|---|---|
| 🧠 Memory | System call stack | Explicit stack in heap |
| 🎯 Safety | Limited by recursion depth | Limited by available memory |
| 📝 Code | More concise | More verbose |
| 🔧 Control | Less explicit control | More control over execution |
💡 Pro Tip: If the problem specifically requires processing nodes in a certain order (like post-order traversal), recursive DFS is often easier. For simple traversal where order doesn't matter, iterative is safer.
When to use each approach:
🧠 Use Recursive DFS when:
- The graph is guaranteed to be small (< 1000 nodes)
- You need clean, readable code for tree problems
- You're implementing backtracking algorithms
🔧 Use Iterative DFS when:
- Graph size is unknown or potentially large
- You're solving on LeetCode and unsure of test case sizes
- Memory efficiency is critical
Pitfall 5: Memory Optimization Pitfalls
Memory Limit Exceeded (MLE) is less common than TLE, but for massive graphs, memory optimization becomes critical.
In-Place Modification Trade-offs
For grid problems, you can often mark cells as visited by modifying them directly:
def numIslands(grid):
    if not grid:
        return 0

    count = 0
    rows, cols = len(grid), len(grid[0])

    def dfs(r, c):
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] == '0':
            return
        grid[r][c] = '0'  # Mark as visited by changing value
        # Explore all 4 directions
        dfs(r+1, c)
        dfs(r-1, c)
        dfs(r, c+1)
        dfs(r, c-1)

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                dfs(r, c)
                count += 1
    return count
✅ Benefits:
- Saves O(rows × cols) space
- No need for a separate visited set
- Cleaner code
❌ Drawbacks:
- Destroys original input data
- Not possible if you need original values later
- Can cause issues if the problem expects the input unchanged
💡 Pro Tip: Always check the problem statement. If it says "do not modify the input," you must use a separate visited structure. If it's silent, in-place modification is usually acceptable.
Space Complexity Awareness
Understand the space implications of your algorithm:
📋 Quick Reference Card: Space Complexity
| Component | BFS | DFS (Recursive) | DFS (Iterative) |
|---|---|---|---|
| 🎯 Main Structure | Queue: O(w) | Call stack: O(h) | Stack: O(h) |
| 🔒 Visited Set | O(V) | O(V) | O(V) |
| 📊 Total | O(V + w) | O(V + h) | O(V + h) |
Where:
- V = number of vertices
- w = maximum width (level with most nodes)
- h = maximum height (longest path)
🎯 Key Principle: For wide, shallow graphs, BFS can use more memory than DFS because the queue contains entire levels. For deep, narrow graphs, DFS can use more memory (especially recursive) due to the call stack depth.
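This is easy to measure. The sketch below (illustrative names; a star graph is maximally wide, a path graph maximally deep) tracks the peak BFS queue size in each shape:

```python
from collections import deque

def max_bfs_queue(graph, start):
    """Run BFS and report the largest queue size observed."""
    visited = {start}
    queue = deque([start])
    peak = 1
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
        peak = max(peak, len(queue))
    return peak

# Wide and shallow: one center connected to 100 leaves
star = {0: list(range(1, 101)), **{i: [] for i in range(1, 101)}}
# Deep and narrow: a single chain of 101 nodes
path = {i: [i + 1] for i in range(100)}
path[100] = []

print(max_bfs_queue(star, 0))  # 100 — an entire level sits in the queue
print(max_bfs_queue(path, 0))  # 1   — never more than one node queued
```

A recursive DFS shows the mirror image: depth 1 on the star, depth 101 on the path.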
Integer Encoding for State Compression
For problems tracking multiple states (position + some additional info), you can compress state into a single integer:
# Instead of visited = set() with tuples like (row, col, keys_collected),
# use integer encoding:
def encode_state(row, col, keys, cols, max_keys):
    """Compress (row, col, keys) into a single integer"""
    return row * cols * (1 << max_keys) + col * (1 << max_keys) + keys

def decode_state(state, cols, max_keys):
    """Decompress the integer back to (row, col, keys)"""
    keys = state % (1 << max_keys)
    state //= (1 << max_keys)
    col = state % cols
    row = state // cols
    return row, col, keys

# Usage in BFS
visited = set()
state = encode_state(start_row, start_col, 0, cols, max_keys)
visited.add(state)  # Store as integer instead of tuple
This technique can reduce memory by 2-3x for complex state problems, though it sacrifices some readability.
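A quick round-trip sanity check (the helpers are restated here so the snippet runs standalone; the dimensions are illustrative):

```python
def encode_state(row, col, keys, cols, max_keys):
    """Pack (row, col, keys) into one integer: same scheme as above."""
    return (row * cols + col) * (1 << max_keys) + keys

def decode_state(state, cols, max_keys):
    """Unpack the integer back to (row, col, keys)."""
    keys = state % (1 << max_keys)
    state //= (1 << max_keys)
    return state // cols, state % cols, keys

cols, max_keys = 10, 6
for triple in [(0, 0, 0), (3, 7, 0b101), (9, 9, (1 << max_keys) - 1)]:
    packed = encode_state(*triple, cols, max_keys)
    assert decode_state(packed, cols, max_keys) == triple
print("round-trip OK")
```

Encoding must divide out in the reverse order of packing; if the round-trip holds for the extreme values (all-zero and all-ones key masks), the scheme is wired correctly.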
Pitfall 6: Bidirectional BFS Setup Mistakes
For shortest path problems, bidirectional BFS can dramatically reduce time complexity from O(b^d) to O(b^(d/2)), where b is the branching factor and d is the depth. However, it's easy to mess up the implementation.
⚠️ Common Mistake: Not alternating the search direction ⚠️
from collections import deque

def bidirectional_bfs(graph, start, end):
    if start == end:
        return 0

    # Two frontiers: searching from start and end
    front_visited = {start}
    back_visited = {end}
    front_queue = deque([start])
    back_queue = deque([end])
    steps = 0

    while front_queue and back_queue:
        steps += 1
        # IMPORTANT: Always expand the smaller frontier
        if len(front_queue) > len(back_queue):
            front_queue, back_queue = back_queue, front_queue
            front_visited, back_visited = back_visited, front_visited

        # Expand current level of the smaller frontier
        for _ in range(len(front_queue)):
            node = front_queue.popleft()
            for neighbor in graph[node]:
                # Check if we've connected the two searches
                if neighbor in back_visited:
                    return steps
                if neighbor not in front_visited:
                    front_visited.add(neighbor)
                    front_queue.append(neighbor)

    return -1  # No path found
🎯 Key Principle: Always expand the smaller frontier to minimize the branching factor. This is what gives bidirectional BFS its speed advantage.
Defensive Coding Checklist
Before submitting your graph algorithm solution, run through this checklist:
✅ Pre-Submission Checklist:
🔒 Visited Tracking:
- Do I mark nodes as visited?
- Am I marking at the right time (discovery for BFS)?
- Do I check visited before processing?
🔒 Boundary Checks:
- Do I validate grid coordinates before accessing?
- Are my bounds exclusive of length (< n, not <= n)?
- Do I check both upper AND lower bounds?
🔒 Recursion Safety:
- Could the graph have > 1000 nodes?
- Should I use iterative instead of recursive DFS?
- Have I tested with deep graphs?
🔒 Memory Efficiency:
- Can I use in-place modification?
- Is my visited structure appropriate for the problem size?
- Do I need to store parent pointers or just track existence?
🔒 Edge Cases:
- What if the graph is empty?
- What if start == end?
- What if there's no path?
- What about disconnected components?
Debugging Strategies
When your solution fails, use these techniques to identify the issue:
1. Visualize Small Test Cases
Manually trace through a 3×3 grid or 5-node graph on paper:
Step-by-step DFS trace:
Graph: 0-1-2
| |
3-4-5
Start at 0:
Visit 0 → visited = {0}
Visit 1 → visited = {0,1}
Visit 2 → visited = {0,1,2}
Visit 5 → visited = {0,1,2,5}
...
2. Add Strategic Print Statements
def dfs(node, visited):
    print(f"Visiting {node}, visited so far: {visited}")  # Debug line
    if node in visited:
        print(f"Already visited {node}, returning")  # Debug line
        return
    visited.add(node)
    # ... rest of code
3. Verify Visited Set Contents
After traversal, check if the visited set makes sense:
visited = dfs(graph, start)
print(f"Visited {len(visited)} nodes: {visited}")
print(f"Expected: {len(graph)} nodes total")
💡 Remember: The most common bugs are the simplest ones—missing a visited check, wrong boundary condition, or marking visited at the wrong time. Master these fundamentals, and you'll avoid 90% of graph algorithm bugs.
Summary: The Path to Bug-Free Graph Code
Mastering graph algorithms isn't just about knowing DFS and BFS—it's about developing the discipline to avoid common pitfalls. The developers who succeed on LeetCode aren't necessarily smarter; they're more careful. They've internalized these patterns:
🧠 Mental Model: Every graph traversal has three critical components:
- Discovery (when you first encounter a node)
- Processing (when you examine a node's neighbors)
- Completion (when you've finished with a node)
Most bugs occur when these three phases get confused or handled in the wrong order.
🎯 The Golden Rules:
- Mark visited at discovery time, not processing time
- Check boundaries before array access, always
- Choose iterative over recursive for unknown graph sizes
- Test edge cases: empty inputs, single nodes, no paths
- When in doubt, prioritize correctness over cleverness
With these best practices internalized, you'll spend less time debugging and more time solving new problems. The goal isn't to never make mistakes—it's to recognize and fix them quickly, building robust solutions that handle edge cases gracefully and perform efficiently even on large inputs.
Next, we'll consolidate everything you've learned into a quick reference guide with decision trees and complexity cheat sheets to make algorithm selection effortless.
Summary and Quick Reference Guide
Congratulations! You've now mastered the fundamental graph algorithms that form the backbone of countless LeetCode problems. Before diving into this lesson, graph traversal might have seemed like an intimidating black box. Now you understand that DFS and BFS are simply two complementary strategies for systematically exploring connected data structures—each with distinct characteristics that make them ideal for different problem types.
Let's consolidate everything you've learned into a practical reference guide you can return to whenever you encounter a graph problem.
🎯 What You've Accomplished
You now have a clear mental model for:
🧠 DFS (Depth-First Search): Going deep before going wide, perfect for exhaustive exploration, cycle detection, and path finding when any path will do
🧠 BFS (Breadth-First Search): Exploring level by level, ideal for shortest paths in unweighted graphs and when you need to process nodes by distance from the source
🧠 Pattern Recognition: Identifying which algorithm to apply based on problem keywords and requirements
🧠 Implementation Strategies: Both recursive and iterative approaches, with their tradeoffs in readability versus stack overflow risk
Decision Framework: DFS vs BFS
The most critical skill is knowing when to use which algorithm. Here's a practical decision tree:
Graph Problem Encountered
            |
Does the problem mention "shortest"?
          /        \
       YES          NO
        |            |
    Use BFS    Do you need to explore all paths?
                   /        \
                YES          NO
                 |            |
             Use DFS     Use DFS (usually)
🎯 Key Principle: The BFS Shortest Path Guarantee
In unweighted graphs, BFS always finds the shortest path first because it explores nodes in order of their distance from the source. DFS has no such guarantee—it might stumble upon the longest path before finding the shortest.
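A small demonstration of the guarantee (helper names are illustrative): BFS reports the true shortest distance, while a DFS that returns the first path it happens to find may overshoot:

```python
from collections import deque

# Two routes from 0 to 4: the short one 0 -> 3 -> 4 and the long one 0 -> 1 -> 2 -> 4
graph = {0: [1, 3], 1: [2], 2: [4], 3: [4], 4: []}

def bfs_dist(start, target):
    """Shortest edge count from start to target via BFS."""
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == target:
            return d
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append((nb, d + 1))
    return -1

def dfs_first_path_len(start, target):
    """Length of the FIRST path a DFS finds, following neighbors in listed order."""
    stack = [(start, 0)]
    visited = set()
    while stack:
        node, d = stack.pop()
        if node == target:
            return d
        if node in visited:
            continue
        visited.add(node)
        for nb in reversed(graph[node]):  # reversed so listed order is explored first
            stack.append((nb, d + 1))
    return -1

print(bfs_dist(0, 4))            # 2  (0 -> 3 -> 4)
print(dfs_first_path_len(0, 4))  # 3  (0 -> 1 -> 2 -> 4)
```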
💡 Pro Tip: Look for these BFS trigger words in problem descriptions:
- "shortest path"
- "minimum number of steps"
- "closest" or "nearest"
- "level-order"
- "minimum depth"
💡 Pro Tip: Look for these DFS trigger words:
- "all possible paths"
- "detect cycle"
- "connected components"
- "path exists"
- "topological sort"
- "validate" (trees, graphs)
📋 Time and Space Complexity Quick Reference
Here's your complexity cheat sheet for graph algorithms:
| Algorithm | Time Complexity | Space Complexity | Notes |
|---|---|---|---|
| 🔍 DFS (Recursive) | O(V + E) | O(V) | Space is for recursion stack + visited set |
| 🔍 DFS (Iterative) | O(V + E) | O(V) | Space is for explicit stack + visited set |
| 🔍 BFS | O(V + E) | O(V) | Space is for queue + visited set |
| 🎯 DFS on Matrix | O(rows × cols) | O(rows × cols) | Worst case: all cells visited |
| 🎯 BFS on Matrix | O(rows × cols) | O(min(rows, cols)) | Queue size bounded by diagonal |
| 🔄 Cycle Detection | O(V + E) | O(V) | DFS with 3-color scheme |
| 🌳 Tree Traversal | O(n) | O(h) | h = height; worst case h = n |
Where:
- V = number of vertices (nodes)
- E = number of edges (connections)
- n = number of nodes in tree
- h = height of tree
⚠️ Common Mistake: Forgetting that in adjacency matrix representation, checking all edges takes O(V²) time, not O(V + E). Always clarify the graph representation! ⚠️
🤔 Did you know? The O(V + E) time complexity might seem confusing at first—why addition? It's because we visit each vertex once (V) and examine each edge once (E). In dense graphs where E ≈ V², this becomes O(V²), but in sparse graphs where E ≈ V, it's essentially O(V).
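To put rough numbers on that (the vertex count here is illustrative):

```python
V = 1_000
dense_E = V * (V - 1) // 2   # roughly every pair connected
sparse_E = 3 * V             # roughly constant degree per vertex

print(V + dense_E)   # 500500 — the edge term dominates, effectively O(V^2)
print(V + sparse_E)  # 4000   — both terms linear, effectively O(V)
```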
Code Template Cheat Sheet
Keep these templates handy—they form the foundation for 80% of graph problems you'll encounter.
🔧 DFS Template (Recursive)
def dfs_recursive(graph, node, visited):
    """
    Classic recursive DFS template.
    Works for: cycle detection, path finding, connected components
    """
    # Base case: already visited
    if node in visited:
        return

    # Mark as visited
    visited.add(node)

    # Process current node (problem-specific logic here)
    # print(node), collect results, check conditions, etc.

    # Recursively visit all neighbors
    for neighbor in graph[node]:
        dfs_recursive(graph, neighbor, visited)

    # Optional: backtracking step (remove from visited if needed)
    # visited.remove(node)  # Uncomment for backtracking problems

# Usage example:
graph = {
    0: [1, 2],
    1: [3, 4],
    2: [5],
    3: [],
    4: [],
    5: []
}
visited = set()
dfs_recursive(graph, 0, visited)
💡 Remember: The backtracking step (uncommenting the visited.remove()) is crucial for problems where you need to explore ALL paths, like finding all paths from source to destination.
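For a DAG input like LeetCode 797, the "un-choose" step applies to the running path itself: append before recursing, pop after. A hedged sketch (function name is illustrative):

```python
def all_paths(graph, src, dst):
    """Collect every path from src to dst in a DAG adjacency list."""
    paths = []
    path = [src]

    def dfs(node):
        if node == dst:
            paths.append(path[:])  # snapshot the current path
            return
        for nb in graph[node]:
            path.append(nb)        # choose
            dfs(nb)
            path.pop()             # un-choose before trying the next branch

    dfs(src)
    return paths

print(all_paths({0: [1, 2], 1: [3], 2: [3], 3: []}, 0, 3))
# [[0, 1, 3], [0, 2, 3]]
```

In the general (possibly cyclic) case, the same choose/un-choose pattern applies to the visited set: add before recursing, remove after, exactly as the commented line in the template shows.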
🔧 DFS Template (Iterative)
def dfs_iterative(graph, start):
    """
    Iterative DFS using explicit stack.
    Works for: same use cases as recursive, but avoids stack overflow
    """
    visited = set()
    stack = [start]

    while stack:
        node = stack.pop()  # LIFO: Last In, First Out
        if node in visited:
            continue
        visited.add(node)

        # Process current node
        # Your problem-specific logic here

        # Add neighbors to stack (reverse order for same traversal as recursive)
        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append(neighbor)

    return visited

# For matrix-based graphs (like grid problems):
def dfs_grid(grid):
    """
    DFS on 2D grid - very common in LeetCode
    """
    if not grid:
        return

    rows, cols = len(grid), len(grid[0])
    visited = set()

    def dfs(r, c):
        # Boundary checks and visited check
        if (r < 0 or r >= rows or c < 0 or c >= cols or
                (r, c) in visited or grid[r][c] == 0):  # 0 might represent an obstacle
            return

        visited.add((r, c))

        # Explore 4 directions: up, down, left, right
        directions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
        for dr, dc in directions:
            dfs(r + dr, c + dc)

    # Start DFS from (0, 0) or iterate through all cells
    dfs(0, 0)
🔧 BFS Template
from collections import deque

def bfs(graph, start):
    """
    Classic BFS template with level tracking.
    Works for: shortest path, minimum steps, level-order traversal
    """
    visited = {start}
    queue = deque([start])
    level = 0  # Track distance/level if needed

    while queue:
        # Process entire level at once (optional but useful)
        level_size = len(queue)
        for _ in range(level_size):
            node = queue.popleft()  # FIFO: First In, First Out

            # Process current node
            # If this is your target, return level for shortest path

            # Add unvisited neighbors to queue
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        level += 1

    return level  # or visited, depending on the problem

# For grid problems:
def bfs_grid(grid, start_r, start_c):
    """
    BFS on 2D grid - finds shortest path
    """
    rows, cols = len(grid), len(grid[0])
    visited = {(start_r, start_c)}
    queue = deque([(start_r, start_c, 0)])  # (row, col, distance)
    directions = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    while queue:
        r, c, dist = queue.popleft()

        # Check if we reached the target
        # if grid[r][c] == target: return dist

        for dr, dc in directions:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and
                    (nr, nc) not in visited and grid[nr][nc] != 0):
                visited.add((nr, nc))
                queue.append((nr, nc, dist + 1))

    return -1  # Target not found
🧠 Pattern Recognition Guide
Here's how to quickly identify which algorithm and pattern to use:
| Problem Type | Algorithm | Key Pattern | Example Problems |
|---|---|---|---|
| 🎯 Shortest Path (Unweighted) | BFS | Simple BFS with level tracking | Shortest Path in Binary Matrix, Word Ladder |
| 🎯 Connected Components | DFS or BFS | Run search from each unvisited node | Number of Islands, Friend Circles |
| 🎯 Cycle Detection (Undirected) | DFS | Track parent during DFS | Graph Valid Tree |
| 🎯 Cycle Detection (Directed) | DFS | 3-color scheme (white/gray/black) | Course Schedule |
| 🎯 Path Exists | DFS or BFS | Simple traversal with target check | Find if Path Exists in Graph |
| 🎯 All Paths | DFS with Backtracking | Remove from visited after exploring | All Paths From Source to Target |
| 🎯 Topological Sort | DFS or BFS (Kahn's) | Post-order DFS or in-degree BFS | Course Schedule II |
| 🎯 Island Problems | DFS on Grid | Flood fill pattern | Max Area of Island, Surrounded Regions |
| 🎯 Level-Order | BFS | Process queue level by level | Binary Tree Level Order Traversal |
| 🎯 Minimum Steps | BFS | BFS with step counter | Minimum Knight Moves |
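The undirected cycle-detection row above hinges on tracking the parent: an edge back to any visited node other than the one you just came from closes a cycle. A hedged sketch, assuming integer node labels (so -1 can serve as the root's "no parent" sentinel):

```python
def has_cycle_undirected(graph):
    """Detect a cycle in an undirected adjacency-list graph via parent-tracking DFS."""
    visited = set()

    def dfs(node, parent):
        visited.add(node)
        for nb in graph[node]:
            if nb == parent:
                continue          # the edge we just arrived along, not a cycle
            if nb in visited:
                return True       # back-edge to an earlier node -> cycle
            if dfs(nb, node):
                return True
        return False

    # Check every component (the graph might be disconnected)
    return any(node not in visited and dfs(node, -1) for node in graph)

print(has_cycle_undirected({0: [1], 1: [0, 2], 2: [1]}))        # False: a simple path
print(has_cycle_undirected({0: [1, 2], 1: [0, 2], 2: [0, 1]}))  # True: a triangle
```

Note the contrast with the directed case: here a single visited set plus the parent suffices, whereas directed graphs need the full 3-color scheme.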
📚 Curated Practice Problems
Here's your roadmap for mastering graph algorithms through deliberate practice. Start with Easy, ensure you can solve them confidently, then progress.
✅ Easy (Build Foundation)
[LeetCode 733] Flood Fill
- Tags: DFS, Grid, Flood Fill
- Pattern: Classic DFS on grid
- Why start here: Simplest possible graph problem, builds intuition

[LeetCode 200] Number of Islands
- Tags: DFS, BFS, Grid, Connected Components
- Pattern: Count connected components in a grid
- Why important: Foundation for countless variations

[LeetCode 695] Max Area of Island
- Tags: DFS, Grid
- Pattern: DFS with accumulation
- Builds on: Number of Islands

[LeetCode 1971] Find if Path Exists in Graph
- Tags: DFS, BFS, Graph Representation
- Pattern: Simple path existence check
- Why important: Reinforces basic traversal

[LeetCode 463] Island Perimeter
- Tags: DFS, Grid, Counting
- Pattern: DFS with boundary detection
- Twist: Doesn't need a visited set
🟨 Medium (Core Skills)
[LeetCode 994] Rotting Oranges
- Tags: BFS, Multi-source, Grid
- Pattern: Multi-source BFS with time tracking
- Why important: Introduces the multi-source concept

[LeetCode 207] Course Schedule
- Tags: DFS, Cycle Detection, Directed Graph
- Pattern: Cycle detection in a directed graph
- Why important: Classic interview problem, teaches the 3-color scheme

[LeetCode 210] Course Schedule II
- Tags: DFS, BFS, Topological Sort
- Pattern: Topological sort with DFS or Kahn's algorithm
- Builds on: Course Schedule

[LeetCode 133] Clone Graph
- Tags: DFS, BFS, Graph Construction
- Pattern: Traversal with node copying
- Why important: Tests deep understanding of graph structure

[LeetCode 417] Pacific Atlantic Water Flow
- Tags: DFS, Grid, Reverse Thinking
- Pattern: Two DFS searches, find the intersection
- Why important: Teaches the reverse-search strategy

[LeetCode 785] Is Graph Bipartite?
- Tags: BFS, DFS, Graph Coloring
- Pattern: 2-coloring with BFS/DFS
- Why important: Classic graph theory problem

[LeetCode 130] Surrounded Regions
- Tags: DFS, Grid, Boundary DFS
- Pattern: DFS from the boundary inward
- Twist: Requires reverse thinking

[LeetCode 797] All Paths From Source to Target
- Tags: DFS, Backtracking
- Pattern: DFS with backtracking for all paths
- Why important: Master backtracking in graphs

[LeetCode 1091] Shortest Path in Binary Matrix
- Tags: BFS, Grid, Shortest Path
- Pattern: BFS with 8-directional movement
- Why important: Classic shortest path application
🔴 Hard (Challenge Yourself)
[LeetCode 127] Word Ladder
- Tags: BFS, String, Shortest Path
- Pattern: BFS on an implicit graph
- Why important: The graph isn't given explicitly
[LeetCode 126] Word Ladder II
- Tags: BFS, DFS, Backtracking
- Pattern: BFS to find distances, DFS to construct paths
- Difficulty spike: Combines multiple techniques
[LeetCode 1162] As Far from Land as Possible
- Tags: BFS, Multi-source, Grid
- Pattern: Multi-source BFS for maximum distance
- Builds on: Rotting Oranges
[LeetCode 815] Bus Routes
- Tags: BFS, Graph Modeling
- Pattern: BFS on a meta-graph (routes as nodes)
- Why important: Tests abstraction skills
[LeetCode 329] Longest Increasing Path in a Matrix
- Tags: DFS, Memoization, DP
- Pattern: DFS with memoization
- Why important: Introduces optimization over naive DFS
[LeetCode 1293] Shortest Path in Grid with Obstacles Elimination
- Tags: BFS, Grid, State Tracking
- Pattern: BFS with expanded state (position + remaining eliminations)
- Why hard: Requires a 3D visited array
💡 Pro Tip: Don't just solve these problems once. The real learning happens when you:
- ✅ Solve it without looking at solutions
- ✅ Analyze your solution's complexity
- ✅ Review optimal solutions and understand why they work
- ✅ Solve it again 1 week later to cement the pattern
- ✅ Explain it to someone else (or write it out)
⚠️ Critical Points to Remember
⚠️ Always mark nodes as visited WHEN YOU ADD THEM to the queue/stack, not when you pop them. Otherwise, you'll add the same node multiple times, causing exponential time complexity.
⚠️ For grid problems, always check boundaries before accessing grid[r][c]. Use the pattern: if 0 <= r < rows and 0 <= c < cols
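The two rules above fit together in one template. Here is a minimal sketch of a grid BFS that checks bounds before indexing and marks cells visited at enqueue time; the function name and the `1 = open cell` grid encoding are illustrative:

```python
from collections import deque

def bfs_grid(grid, start):
    """Return a dict mapping each reachable open cell (value 1) to its distance from start."""
    rows, cols = len(grid), len(grid[0])
    visited = {start}                 # mark the start immediately
    queue = deque([(start, 0)])
    dist = {}
    while queue:
        (r, c), steps = queue.popleft()
        dist[(r, c)] = steps
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            # Boundary check BEFORE touching grid[nr][nc]
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 1 and (nr, nc) not in visited:
                visited.add((nr, nc))  # mark when enqueued, not when popped
                queue.append(((nr, nc), steps + 1))
    return dist
```

Moving the `visited.add` to the pop would let the same cell enter the queue once per neighbor, which is exactly the blowup the warning describes.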
⚠️ BFS only guarantees shortest path in unweighted graphs. If edges have weights, you need Dijkstra's algorithm.
⚠️ In DFS backtracking problems (finding all paths), you must remove nodes from visited after exploring. Otherwise, you'll miss valid paths.
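A minimal sketch of that un-mark step, assuming a hypothetical adjacency-list graph of the form `{node: [neighbors]}`:

```python
def all_paths(graph, start, target):
    """Return every simple path from start to target."""
    paths = []

    def dfs(node, path, visited):
        if node == target:
            paths.append(path[:])     # snapshot the current path
            return
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                dfs(nxt, path, visited)
                path.pop()            # backtrack the path...
                visited.discard(nxt)  # ...and un-mark, so other paths may reuse nxt

    dfs(start, [start], {start})
    return paths
```

Dropping the `visited.discard` line would silently discard every path that shares a node with an earlier one.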
⚠️ Watch out for directed vs undirected graphs. Cycle detection logic differs significantly:
- Undirected: cycle exists if you visit a node that's not your parent
- Directed: cycle exists if you visit a node currently in your recursion stack (gray node)
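A minimal sketch of the directed case using the white/gray/black scheme (the color names and `{node: [neighbors]}` input format are illustrative):

```python
WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on recursion stack / fully explored

def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [neighbors]}."""
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY                       # node is on the current recursion stack
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                return True                      # back edge to a gray node -> cycle
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK                      # fully explored, safe to revisit via other paths
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)
```

Note that reaching a black node is fine (it just means two paths converge); only a gray node signals a cycle.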
🚀 Next Steps: Advanced Graph Algorithms
You've mastered the fundamentals. Here's your roadmap for advanced topics:
1️⃣ Dijkstra's Algorithm (Shortest Path in Weighted Graphs)
When BFS isn't enough because edges have different weights:
import heapq

def dijkstra(graph, start):
    """
    Finds shortest distances from start in a weighted graph.
    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Time: O((V + E) log V) with a min-heap.
    """
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    pq = [(0, start)]  # (distance, node)
    while pq:
        curr_dist, node = heapq.heappop(pq)
        if curr_dist > distances[node]:
            continue  # Already found a better path; skip stale entry
        for neighbor, weight in graph[node]:
            distance = curr_dist + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                heapq.heappush(pq, (distance, neighbor))
    return distances
Practice with: [LeetCode 743] Network Delay Time, [LeetCode 787] Cheapest Flights Within K Stops
2️⃣ Union-Find (Disjoint Set Union)
Perfect for dynamic connectivity and detecting cycles efficiently:
- Use cases: Kruskal's MST, dynamic connectivity, detecting cycles in undirected graphs
- Time complexity: Nearly O(1) per operation with path compression and union by rank
- Key insight: Groups elements into sets and can quickly answer "are these two elements connected?"
Practice with: [LeetCode 547] Number of Provinces, [LeetCode 684] Redundant Connection, [LeetCode 200] Number of Islands (alternative approach)
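To make the bullets above concrete, here is a minimal Union-Find sketch with both optimizations; the class and method names are illustrative:

```python
class UnionFind:
    """Disjoint Set Union with path compression and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # already connected: adding (a, b) would form a cycle
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra               # union by rank: attach the shorter tree
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

    def connected(self, a, b):
        return self.find(a) == self.find(b)
```

The `union` return value is what makes cycle detection free: a `False` means the edge connects two nodes already in the same set.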
3️⃣ Minimum Spanning Tree (MST)
Finding the minimum cost to connect all nodes:
- Kruskal's Algorithm: Sort edges, use Union-Find to avoid cycles
- Prim's Algorithm: Similar to Dijkstra, grow tree from starting node
Practice with: [LeetCode 1584] Min Cost to Connect All Points, [LeetCode 1135] Connecting Cities With Minimum Cost
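A compact sketch of Kruskal's recipe above (sort edges, use Union-Find to skip cycle-forming edges); the edge format `(weight, u, v)` and the inline find helper are assumptions for illustration:

```python
def kruskal(n, edges):
    """Return the total weight of an MST over nodes 0..n-1, edges as (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total = used = 0
    for w, u, v in sorted(edges):          # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                       # skip edges that would form a cycle
            parent[ru] = rv
            total += w
            used += 1
            if used == n - 1:              # MST complete with n-1 edges
                break
    return total
```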
4️⃣ Advanced Patterns
🎯 Bidirectional BFS: Start from both source and target, meet in the middle (huge speedup for large graphs)
🎯 A* Search: BFS with a heuristic that guides the search toward the target
🎯 Tarjan's Algorithm: Finding strongly connected components in directed graphs
🎯 Floyd-Warshall: All-pairs shortest path in O(V³)
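Of the patterns above, bidirectional BFS is the easiest to sketch. The idea: grow a frontier from each end, always expand the smaller one, and stop when the frontiers touch. A minimal version for an undirected `{node: [neighbors]}` graph (names are illustrative):

```python
def bidirectional_bfs(graph, source, target):
    """Shortest unweighted distance from source to target, or -1 if unreachable."""
    if source == target:
        return 0
    front, back = {source}, {target}
    seen = {source, target}
    steps = 0
    while front and back:
        if len(front) > len(back):
            front, back = back, front     # always expand the smaller frontier
        steps += 1
        nxt = set()
        for node in front:
            for nb in graph.get(node, []):
                if nb in back:
                    return steps          # the two frontiers met in the middle
                if nb not in seen:
                    seen.add(nb)
                    nxt.add(nb)
        front = nxt
    return -1
```

The speedup comes from searching two trees of depth d/2 instead of one of depth d, which for branching factor b is roughly b^(d/2) + b^(d/2) work versus b^d.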
💡 Real-World Applications
Understand the practical impact of what you've learned:
🌐 Social Networks: Finding degrees of separation (BFS), detecting communities (connected components), friend recommendations (graph distance)
🗺️ Maps & Navigation: GPS routing (Dijkstra's), finding nearby locations (BFS), traffic optimization (minimum spanning trees)
🔐 Dependency Resolution: Package managers use topological sort to determine installation order, build systems use it for compilation order
🎮 Game Development: Pathfinding for AI (A*), procedural generation (graph algorithms for dungeon layouts), collision detection (spatial graphs)
🧬 Computational Biology: Analyzing protein interactions, genome assembly, phylogenetic trees
Final Summary Table
| 📊 Aspect | DFS | BFS |
|---|---|---|
| Data Structure | Stack (implicit via recursion or explicit) | Queue |
| Exploration Strategy | Go deep, then backtrack | Level by level |
| Shortest Path | ❌ No guarantee | ✅ Yes (unweighted) |
| Space Complexity | O(h) best, O(V) worst | O(w) where w is max width |
| Best For | Cycle detection, topological sort, path existence, backtracking | Shortest path, level-order, minimum steps |
| Implementation | Usually simpler with recursion | Requires explicit queue |
| Stack Overflow Risk | ⚠️ Yes (if recursive) | ❌ No |
🎓 You're Now Ready
You've journeyed from graph fundamentals to mastery. You can:
✅ Recognize graph problems instantly from problem descriptions
✅ Choose between DFS and BFS based on problem requirements
✅ Implement both algorithms confidently in multiple contexts (graphs, trees, grids)
✅ Avoid common pitfalls that cause TLE and wrong answers
✅ Tackle the majority of graph problems in technical interviews
The path to mastery is through practice. Start with the Easy problems, build confidence, and progressively challenge yourself. Remember: every expert was once a beginner who didn't give up.
💡 Remember: The goal isn't to memorize solutions—it's to internalize patterns. When you see a new problem, you should instinctively think "This looks like connected components" or "This needs shortest path, so BFS." That pattern recognition comes from deliberate practice.
Happy coding, and good luck on your LeetCode journey! 🚀