Algorithm Patterns
Four reusable skeletons power almost every interview problem.
Learn the pattern once — recognize it instantly — apply it everywhere.
Which Pattern Do I Need?
Master Decision Flowchart
Start here. Ask these questions about every new problem:
New Problem →
1. Need O(1) lookup? YES → Hash Map / Set
2. Linear scan with conditions? YES → Two Pointers / Window
3. Tree / graph level-by-level? YES → BFS + Queue
4. Bottom-up value / explore all? YES → DFS / Recursion
Signal Phrases → Pattern
When you see these words in a problem statement, reach for the corresponding pattern:
| You hear… | You think… | You need… |
| --- | --- | --- |
| "find pair", "duplicate", "frequency", "group by" | O(1) lookup or counting | Hash Map |
| "sorted array", "palindrome", "subarray sum", "substring" | Converge or slide through data | Two Pointers |
| "level order", "shortest path", "flood fill", "components" | Explore neighbors layer by layer | BFS |
| "max depth", "validate tree", "all paths", "combinations" | Recurse, aggregate, or backtrack | DFS |
Pattern 1: Hash Map / Frequency Counter
When to use O(1) lookup
Whenever you need to check existence, count occurrences, or group items without nested loops.
Signals: find complement · count frequency · detect duplicate · group by key · two values that sum to
Data structures: dict, set, defaultdict, Counter
Core Skeleton
def solve(nums, target):
Step 1 — Identify the problem type.
Are you searching for a pair, counting, or grouping? If you need to turn O(n²) into O(n), a hash map is almost certainly the answer.
seen = {} # val → index (or count)
Step 2 — Choose your map type.
dict — complement lookup (Two Sum)
set — existence check (Contains Duplicate)
Counter — frequency counting (Valid Anagram)
defaultdict(list) — grouping (Group Anagrams)
for i, num in enumerate(nums):
Step 3 — Single pass.
Process each element exactly once. The hash map gives you O(1) lookup for anything you've already seen, eliminating the inner loop.
complement = target - num
The key insight.
Instead of asking
"which other element pairs with this?" (O(n) scan), compute the complement and check the map (O(1)).
# Two Sum: complement = target - num
# Contains Dup: complement = num (just check set)
# Anagram: decrement count for each char in t
if complement in seen:
O(1) lookup — the whole point.
in dict is O(1) average. This replaces the O(n) inner loop of the brute-force approach.
return [seen[complement], i]
Found it — return early.
For Two Sum you return indices. For Contains Duplicate you return True. For frequency problems you continue counting.
seen[num] = i # build map as you go
Interview tip: Always build the map as you iterate, not in a separate pass.
This handles edge cases (like not pairing an element with itself) and keeps the code to one loop.
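Assembled without the annotations, the skeleton above is a complete Two Sum solution. A minimal sketch (the function name and the empty-list fallback for "no pair" are illustrative choices, not part of the original skeleton):

```python
def two_sum(nums, target):
    # Map each value to its index as we scan; check complements in O(1).
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i  # build the map as you go
    return []  # no valid pair found
```

Note that `seen[num] = i` runs after the lookup, so an element is never paired with itself.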
Variations
Complement Lookup
complement = target - num
if complement in seen:
Two Sum, Two Sum II
Frequency Count
count[c] += 1 # for s
count[c] -= 1 # for t
Valid Anagram, Top K
Group by Key
key = tuple(sorted(s))
groups[key].append(s)
Group Anagrams
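The group-by-key variation in full, as one possible sketch using `defaultdict` (the function name is illustrative):

```python
from collections import defaultdict

def group_anagrams(words):
    # Anagrams share the same sorted-character key, so they land in one bucket.
    groups = defaultdict(list)
    for w in words:
        groups[tuple(sorted(w))].append(w)
    return list(groups.values())
```

`tuple(sorted(w))` is used because dict keys must be hashable; a sorted string `''.join(sorted(w))` works equally well.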
Pattern 2: Two Pointers / Sliding Window
When to use linear scan
When the data is sorted or sequential and you need to find a pair, shrink/expand a range, or process a contiguous subarray.
Signals: sorted array · palindrome · subarray sum · substring · minimum window · container / area
Data structures: left/right indices, set (window), dict (freq)
Core Skeleton — Converging Pointers
def solve(nums, target):
Step 1 — Is the input sorted?
Converging pointers require sorted data. If it's not sorted, consider sorting first (O(n log n)) or using a hash map instead.
left, right = 0, len(nums) - 1
Step 2 — Place pointers at extremes.
Left starts at the beginning, right at the end. They converge toward each other, eliminating candidates based on the comparison.
while left < right:
Step 3 — The convergence loop.
Each iteration moves at least one pointer inward, guaranteeing O(n) time. The loop ends when the pointers meet.
total = nums[left] + nums[right]
Step 4 — Compute and compare.
The comparison result tells you which pointer to move:
- total == target → found it
- total < target → need a larger sum → move left rightward
- total > target → need a smaller sum → move right leftward
if total == target: return [left, right]
elif total < target: left += 1
else: right -= 1
Interview tip: When asked "can you do better than O(n²)?", sorting + two pointers gives O(n log n). If input is already sorted, it's O(n).
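Put together, the converging-pointer skeleton solves Two Sum II directly. A sketch under the assumption that the input is sorted ascending (the function name and 0-indexed return are illustrative; the LeetCode version returns 1-indexed positions):

```python
def two_sum_sorted(nums, target):
    # Pointers start at the extremes and converge; each step rules out one end.
    left, right = 0, len(nums) - 1
    while left < right:
        total = nums[left] + nums[right]
        if total == target:
            return [left, right]
        elif total < target:
            left += 1    # need a larger sum
        else:
            right -= 1   # need a smaller sum
    return []  # no pair sums to target
```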
Core Skeleton — Sliding Window
def solve(s):
Step 1 — Recognize the window pattern.
You need a contiguous subarray or substring that satisfies some condition. The window expands right and shrinks left.
left = 0; result = 0; window = set()  # or dict for freq
Step 2 — Choose your window state.
set — track unique chars (Longest Substring Without Repeating)
dict — track frequencies (Minimum Window Substring, Character Replacement)
int — running sum (Min Size Subarray Sum, Best Time to Buy)
for right in range(len(s)):
Step 3 — Expand right (always).
The right pointer advances every iteration, adding the new element to the window state.
window.add(s[right]) # expand
Add the new element to your window state. For frequency problems: freq[s[right]] += 1. For sum problems: total += nums[right].
while window_invalid: # shrink
Step 4 — Shrink left (conditionally).
When the window violates the constraint, remove from the left until valid again. This inner while loop is still O(n) total because left only moves right.
# Longest Substring: while s[right] in window
# Min Subarray Sum: while total >= target
# Min Window: while have == need
window.remove(s[left]); left += 1
result = max(result, right - left + 1)
Interview tip: The expand-right / shrink-left pattern is O(n) even with the inner while loop, because each element enters and leaves the window at most once.
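As a concrete instance of the expand-right / shrink-left skeleton, here is one sketch of Longest Substring Without Repeating Characters (the function name is an illustrative choice):

```python
def longest_unique_substring(s):
    left, best = 0, 0
    window = set()  # chars currently in the window
    for right in range(len(s)):
        # Shrink until the incoming char is no longer a duplicate.
        while s[right] in window:
            window.remove(s[left])
            left += 1
        window.add(s[right])          # expand
        best = max(best, right - left + 1)
    return best
```

Each character enters and leaves `window` at most once, so the total work is O(n) despite the nested loop.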
Variations
Converging
L, R = 0, n-1
while L < R:
Two Sum II, Valid Palindrome, 3Sum, Container
Fast / Slow
slow = fast = head
fast = fast.next.next
Cycle Detection, Middle Node, Reorder List
Sliding Window
for R in range(n):
while invalid: L += 1
Longest Substring, Min Window, Best Time
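The fast/slow variation deserves its own sketch, since it is the only one that runs on linked lists. A minimal cycle-detection example (Floyd's algorithm; the `ListNode` class is the usual minimal definition assumed by these problems):

```python
class ListNode:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    # Floyd's tortoise and hare: fast advances 2 nodes per step, slow 1.
    # If there is a cycle, fast eventually laps slow and they meet.
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False  # fast fell off the end: no cycle
```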
Pattern 3: BFS (Breadth-First Search)
When to use queue
When you need to explore level by level, find the shortest path, or process all nodes at the same distance before moving deeper.
Signals: level order · shortest path · minimum depth · flood fill · connected components · nearest neighbor
Data structures: deque (queue), visited set, level list
Core Skeleton — Tree BFS
from collections import deque
Always use deque, never a list.
list.pop(0) is O(n) — it shifts every element. deque.popleft() is O(1). This matters when the queue holds thousands of nodes.
def bfs(root):
Step 1 — Is this a BFS problem?
Ask: "Do I need to process nodes layer by layer?" If yes, BFS. If you need a bottom-up aggregate (depth, diameter), prefer DFS.
if not root: return []
Step 2 — Guard the empty case.
Every tree solution starts with a null check. For BFS, return an empty result ([], 0, False — depends on the problem).
queue = deque([root]); result = []
Step 3 — Seed the queue.
Start with the root. For graph problems, seed with the starting node and mark it visited. For multi-source BFS (e.g., all land cells), seed with all starting points.
while queue:
Step 4 — The BFS loop.
Process until the queue is empty. Each iteration of the outer while loop handles one level. The inner for loop processes all nodes at that level.
level_size = len(queue) # snapshot current level
The level-size trick.
Capture len(queue) before the inner loop. New children added during the loop belong to the next level. This is what separates level-order BFS from a simple queue drain.
current_level = []
for _ in range(level_size):
node = queue.popleft()
current_level.append(node.val)
if node.left: queue.append(node.left)
if node.right: queue.append(node.right)
result.append(current_level)
return result
Interview tip: The level-size snapshot (len(queue)) is the single line that separates "I know BFS" from "I understand BFS." Always mention it.
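The skeleton assembles into a runnable level-order traversal. A sketch with a minimal `TreeNode` definition (the class is the standard one these problems assume, not part of the original skeleton):

```python
from collections import deque

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    if not root:
        return []
    queue, result = deque([root]), []
    while queue:
        level_size = len(queue)  # snapshot BEFORE children are enqueued
        current_level = []
        for _ in range(level_size):
            node = queue.popleft()
            current_level.append(node.val)
            if node.left: queue.append(node.left)
            if node.right: queue.append(node.right)
        result.append(current_level)
    return result
```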
Core Skeleton — Graph BFS (Flood Fill)
def bfs_grid(grid, sr, sc):
Graph BFS vs Tree BFS.
Trees have no cycles, so you never revisit. Grids/graphs can have cycles, so you need a visited set (or mark cells in-place).
rows, cols = len(grid), len(grid[0])
queue = deque([(sr, sc)]); visited = {(sr, sc)}
Mark visited on enqueue, not on dequeue.
If you wait until dequeue to mark, the same cell can be added to the queue multiple times by different neighbors, causing O(n²) behavior.
while queue:
r, c = queue.popleft()
for dr, dc in [(-1,0),(1,0),(0,-1),(0,1)]:
nr, nc = r + dr, c + dc
if 0 <= nr < rows and 0 <= nc < cols \
and (nr,nc) not in visited:
visited.add((nr, nc))
queue.append((nr, nc))
Interview tip: The 4-direction array [(-1,0),(1,0),(0,-1),(0,1)] is a reusable constant. Name it DIRS for clarity.
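A complete flood-fill sketch based on the skeleton, using in-place recoloring instead of a separate visited set (one of the two options mentioned above; the function name is illustrative):

```python
from collections import deque

DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def flood_fill(grid, sr, sc, new_color):
    rows, cols = len(grid), len(grid[0])
    old = grid[sr][sc]
    if old == new_color:
        return grid  # nothing to do; avoids an infinite re-fill
    queue = deque([(sr, sc)])
    grid[sr][sc] = new_color  # recoloring on enqueue doubles as "visited"
    while queue:
        r, c = queue.popleft()
        for dr, dc in DIRS:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old:
                grid[nr][nc] = new_color
                queue.append((nr, nc))
    return grid
```

The `old == new_color` guard matters: without it, already-recolored cells still match the fill condition and the loop never terminates.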
Pattern 4: DFS / Recursion
When to use recursion
When you need to explore to the deepest point first, compute a bottom-up value from subtrees, or explore all possible paths.
Signals: max depth · validate tree · invert / mirror · all paths · bottom-up aggregate · memoization
Data structures: call stack (implicit), explicit stack, memo dict
Core Skeleton — Post-Order Aggregation
def solve(root):
Step 1 — Identify the aggregation.
You're computing a value that depends on both subtrees. The answer at each node combines the answers from its children. This is post-order: left → right → combine.
if not root: return 0 # base case
Step 2 — The null base case.
Every recursive tree solution starts here. The return value depends on the problem:
- Depth problems → return 0
- Boolean problems → return True
- Collection problems → return []
left = solve(root.left)
Step 3 — Recurse into subtrees.
Trust that the recursive call returns the correct answer for the subtree. Don't try to trace every call — focus on one node and assume children are solved.
right = solve(root.right)
return 1 + max(left, right) # combine
Step 4 — Combine results.
This one line is the only thing that changes between problems:
# Max Depth: return 1 + max(left, right)
# Min Depth: return 1 + min(left, right) + guards
# Same Tree: return p.val == q.val and left and right
# Invert Tree: swap children, return root
Interview tip: Say "I'll use post-order because the answer at each node depends on both subtrees." This shows you understand why, not just how.
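The post-order skeleton assembled as Max Depth, runnable end to end (the `TreeNode` class is the standard minimal definition assumed by these problems):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_depth(root):
    if not root:
        return 0                     # base case: empty subtree
    left = max_depth(root.left)      # trust the recursion on each subtree
    right = max_depth(root.right)
    return 1 + max(left, right)      # combine: this node adds one level
```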
Core Skeleton — DFS with Bounds (Validation)
def isValid(node, lo, hi):
Passing constraints down the tree.
Unlike post-order (bottom-up), validation passes bounds top-down. Each node must satisfy lo < node.val < hi.
Initial call: isValid(root, float('-inf'), float('inf')).
if not node: return True
Empty subtree is always valid. This is the base case that stops the recursion.
if node.val <= lo or node.val >= hi:
The constraint check.
If the current value violates the bounds, the entire subtree is invalid. Return False immediately.
return False
return isValid(node.left, lo, node.val) \
and isValid(node.right, node.val, hi)
Interview tip: The bounds approach avoids the common mistake of only checking parent-child relationships. A node must be valid against all ancestors, not just its parent.
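The validation skeleton in full, as one sketch with the bounds supplied as default arguments so the initial call needs no extra parameters (a stylistic choice, not part of the original skeleton):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_valid_bst(node, lo=float('-inf'), hi=float('inf')):
    if not node:
        return True                  # empty subtree is always valid
    if node.val <= lo or node.val >= hi:
        return False                 # violates an ancestor's bound
    # Left subtree inherits the upper bound; right subtree the lower bound.
    return (is_valid_bst(node.left, lo, node.val)
            and is_valid_bst(node.right, node.val, hi))
```

The classic failure case `[5, 1, 4, null, null, 3, 6]` is caught here: the node 3 sits in 5's right subtree, so its bounds are (5, ∞) and it fails even though it is a valid child of 4.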
Core Skeleton — DP with Memoization
def solve(n, memo={}):
When DFS meets overlapping subproblems.
If the same recursive call happens multiple times with the same arguments, cache the result. This turns exponential O(2^n) into linear O(n).
if n in memo: return memo[n]
Check cache first.
Before computing, check if you've already solved this subproblem. This is the memoization guard — it prevents recomputation.
if n <= 1: return 1 # base case
Base cases stop the recursion.
For Climbing Stairs: ways(0) = 1, ways(1) = 1. For other DP problems, identify the smallest subproblem that has a known answer.
memo[n] = solve(n-1, memo) + solve(n-2, memo) # recurrence
return memo[n]
Interview tip: Start recursive, then add memo, then offer to convert to iterative DP. This progression shows depth: "I can also do this bottom-up with O(1) space using two variables."
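Both stages of that progression, sketched for Climbing Stairs. The recursive version passes the memo explicitly (`memo=None` plus a guard) rather than relying on Python's shared-mutable-default behavior, which is a common interview gotcha; function names are illustrative:

```python
def climb_stairs(n, memo=None):
    # Explicit memo avoids the mutable-default-argument pitfall.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]               # cache hit: no recomputation
    if n <= 1:
        return 1                     # base cases: ways(0) = ways(1) = 1
    memo[n] = climb_stairs(n - 1, memo) + climb_stairs(n - 2, memo)
    return memo[n]

def climb_stairs_iterative(n):
    # Bottom-up with O(1) space: two rolling variables replace the memo.
    a, b = 1, 1                      # ways(i-1), ways(i)
    for _ in range(n - 1):
        a, b = b, a + b
    return b
```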
AlgoVision — Algorithm Patterns • See the code. Trace the logic. Ace the interview.