DSA Blog Series - Part 5: Advanced DP & Problem-Solving Patterns
🧠 Dynamic Programming (Continued)
41. DP on Trees
Tree DP combines tree traversal with dynamic programming, solving problems where the answer depends on subtree solutions. It's powerful for problems involving hierarchical structures.
Key insight: Process nodes in post-order (children before parents) so parent nodes can use children's DP values.
Classic problem—Maximum Path Sum:
Problem: Find the maximum path sum in a binary tree. A path can start and end at any node.
Approach: For each node, calculate:
- Maximum path sum that goes through this node as the "peak"
- Maximum path sum that extends from this node upward (for parent to use)
State: For each node, track the best path extending downward from it.
Transition:
def maxPathDown(node):
    if node is None:
        return 0
    left = max(0, maxPathDown(node.left))    # treat negative paths as 0
    right = max(0, maxPathDown(node.right))
    # Path through this node as the "peak" (uses both children)
    currentMax = node.val + left + right
    globalMax = max(globalMax, currentMax)   # globalMax tracked outside the recursion
    # Return best path extending upward (may use only one child)
    return node.val + max(left, right)
Other tree DP problems:
1. House Robber III: Houses in a binary tree. Can't rob adjacent nodes. Maximum profit?
State: For each node, track two values:
- rob[node] = max profit if we rob this node
- notRob[node] = max profit if we don't rob this node
Transition:
- rob[node] = node.val + notRob[left] + notRob[right]
- notRob[node] = max(rob[left], notRob[left]) + max(rob[right], notRob[right])
2. Diameter of Tree: Longest path between any two nodes.
3. Minimum cameras: Minimum cameras to monitor all nodes (cameras can see parent, self, children).
4. Tree coloring: Count ways to color tree nodes given constraints.
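As an illustration, here is a minimal Python sketch of the House Robber III recursion (the `TreeNode` class and function names here are mine, not from the problem statement):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def rob_tree(root):
    # dfs returns (rob, notRob): best profit if we rob / skip this node.
    def dfs(node):
        if node is None:
            return (0, 0)
        rob_l, skip_l = dfs(node.left)
        rob_r, skip_r = dfs(node.right)
        rob = node.val + skip_l + skip_r                   # rob this node, must skip children
        not_rob = max(rob_l, skip_l) + max(rob_r, skip_r)  # children free to choose
        return (rob, not_rob)
    return max(dfs(root))
```

Post-order falls out naturally: each node's answer is computed only after both recursive calls return.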
General pattern:
- Define what information each node needs from its children
- Process tree recursively (usually post-order)
- Combine children's results at each node
- Return information for parent to use
Applications:
- File system optimisation (cache placement)
- Network design (server placement)
- Organisational hierarchy problems
- Compiler optimisation (expression trees)
Complexity: Typically O(n) time where n is number of nodes—each node processed once.
Tips for tree DP:
- Draw small examples to visualise state flow
- Clearly separate what a node returns vs what it computes internally
- Handle null nodes carefully (base case)
- Consider whether you need one or multiple states per node
Tree DP elegantly handles problems where structure matters. Master it, and hierarchical problems become much more approachable.
42. Optimising Space in DP
Many DP problems use O(n²) or O(n·m) space, but often you only need a fraction of that. Space optimisation makes solutions more efficient and shows deep understanding.
Observation: In many DP problems, computing row i only requires row i-1, not all previous rows.
Example—Longest Common Subsequence (LCS):
Standard 2D approach:
dp[i][j] = LCS length of first i chars of s1, first j chars of s2
Space: O(n × m)
Optimised to 1D:
Since dp[i][j] only depends on dp[i-1][j], dp[i][j-1], and dp[i-1][j-1], we only need two rows:
prev = [0] * (m+1)
curr = [0] * (m+1)
for i in range(1, n+1):
    for j in range(1, m+1):
        if s1[i-1] == s2[j-1]:
            curr[j] = prev[j-1] + 1
        else:
            curr[j] = max(prev[j], curr[j-1])
    prev, curr = curr, [0] * (m+1)
Space: O(m)—reduced from O(n × m)!
Further optimisation: Since we only need prev[j-1], we can use a single array with careful iteration.
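A sketch of that single-array version (the `diag` temporary preserves the dp[i-1][j-1] value before it is overwritten; variable names are mine):

```python
def lcs_length(s1, s2):
    n, m = len(s1), len(s2)
    dp = [0] * (m + 1)
    for i in range(1, n + 1):
        diag = 0                      # holds dp[i-1][j-1] for the current j
        for j in range(1, m + 1):
            tmp = dp[j]               # save dp[i-1][j] before overwriting
            if s1[i-1] == s2[j-1]:
                dp[j] = diag + 1
            else:
                dp[j] = max(dp[j], dp[j-1])
            diag = tmp
    return dp[m]
```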
Fibonacci optimisation revisited:
O(n) space:
dp[i] = dp[i-1] + dp[i-2]
O(1) space:
a, b = 0, 1
for _ in range(n):
    a, b = b, a + b
Only track the last two values.
Optimisation patterns:
1. 2D → 1D (row by row): When dp[i] only depends on dp[i-1], use two arrays or alternate between them.
2. 1D → constant space: When only last k values matter, use k variables instead of an array.
3. Diagonal dependencies: Problems where dp[i][j] depends on dp[i-1][j-1] (like LCS) can sometimes be optimised with careful value preservation.
Trade-offs:
Advantages:
- Reduced memory usage (crucial for large inputs)
- Better cache performance
- Shows optimisation skills in interviews
Disadvantages:
- Harder to debug (can't inspect full table)
- Can't easily reconstruct solution (need to store additional info)
- More complex code
When to optimise:
- In interviews: Mention the optimisation, implement if time permits
- Large inputs: Memory constraints force optimisation
- Production code: When profiling shows memory is a bottleneck
Approach to space optimisation:
- Implement working 2D solution first
- Identify dependencies—what previous states does each state need?
- Keep only necessary previous states
- Test thoroughly (space optimisation introduces subtle bugs)
Common mistakes:
- Overwriting values still needed
- Wrong iteration order
- Forgetting to handle edge cases
Space optimisation demonstrates mastery beyond basic DP. It's not always necessary, but knowing when and how to apply it sets you apart.
🧩 Problem-Solving Patterns (Very Blog-Friendly)
43. Two Pointers Technique
The Two Pointers technique uses two indices moving through data to solve problems in O(n) time that might otherwise require O(n²). It's elegant, efficient, and appears everywhere.
Core idea: Instead of nested loops, use two pointers that move intelligently based on conditions.
Common patterns:
1. Opposite direction pointers (start and end):
Problem: Check if a string is a palindrome.
Approach:
left, right = 0, len(s) - 1
while left < right:
    if s[left] != s[right]:
        return False
    left += 1
    right -= 1
return True
2. Same direction pointers (slow and fast):
Problem: Remove duplicates from sorted array in-place.
Approach:
slow = 0
for fast in range(1, len(arr)):
    if arr[fast] != arr[slow]:
        slow += 1
        arr[slow] = arr[fast]
return slow + 1
The slow pointer tracks position for unique elements, fast pointer scans ahead.
3. Multiple arrays (merge sorted arrays):
Problem: Merge two sorted arrays.
Approach:
i, j, result = 0, 0, []
while i < len(arr1) and j < len(arr2):
    if arr1[i] < arr2[j]:
        result.append(arr1[i])
        i += 1
    else:
        result.append(arr2[j])
        j += 1
# append remaining elements (at most one of these is non-empty)
result.extend(arr1[i:])
result.extend(arr2[j:])
Classic problems:
Container With Most Water: Given heights, find two lines that contain most water.
- Use left and right pointers at ends
- Move pointer with smaller height inward
- Track maximum area
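A quick sketch of this approach (function name is mine):

```python
def max_area(heights):
    left, right = 0, len(heights) - 1
    best = 0
    while left < right:
        # Area is limited by the shorter of the two lines
        best = max(best, min(heights[left], heights[right]) * (right - left))
        # Only moving the shorter line inward can possibly improve the area
        if heights[left] < heights[right]:
            left += 1
        else:
            right -= 1
    return best
```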
3Sum: Find all triplets that sum to zero.
- Sort array
- For each element, use two pointers on remaining array
- O(n²) instead of O(n³)
Linked List Cycle Detection (Floyd's Algorithm):
- Slow pointer moves one step, fast moves two
- If they meet, cycle exists
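A minimal sketch of Floyd's algorithm on a singly linked list (the `ListNode` class here is an assumption, not part of the original):

```python
class ListNode:
    def __init__(self, val=0):
        self.val = val
        self.next = None

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # one step
        fast = fast.next.next     # two steps
        if slow is fast:          # pointers meet only if a cycle exists
            return True
    return False
```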
When to use two pointers:
- Sorted or partially sorted data
- Array/string problems with linear conditions
- Need to avoid O(n²) brute force
- Problems involving pairs, triplets, or subarrays
Why it works: Two pointers eliminate one dimension of iteration. Instead of checking all pairs (i, j) in nested loops O(n²), intelligent pointer movement checks each element once or a few times—O(n).
Interview tip: When you see a brute force O(n²) solution on sorted data, ask: "Can two pointers reduce this to O(n)?"
Two pointers is simple yet powerful. It's often the difference between TLE (Time Limit Exceeded) and Accepted.
44. Sliding Window Pattern
Sliding Window solves subarray/substring problems by maintaining a "window" that expands and contracts as it moves through data. It transforms O(n²) or O(n·k) solutions into O(n).
Core idea: Instead of recalculating for every subarray, maintain a window and update incrementally as it slides.
Fixed-size window:
Problem: Maximum sum of subarray of size k.
Naive approach: O(n·k)—calculate sum for each window.
Sliding window: O(n)
windowSum = sum(arr[:k])
maxSum = windowSum
for i in range(k, len(arr)):
    windowSum = windowSum - arr[i-k] + arr[i]  # slide: drop left element, add right
    maxSum = max(maxSum, windowSum)
Variable-size window:
Problem: Longest substring without repeating characters.
Approach:
left, maxLen = 0, 0
charSet = set()
for right in range(len(s)):
    while s[right] in charSet:
        charSet.remove(s[left])
        left += 1
    charSet.add(s[right])
    maxLen = max(maxLen, right - left + 1)
Window expands by moving right, contracts by moving left when constraint is violated.
Classic problems:
1. Minimum Window Substring: Smallest substring containing all characters of target.
- Expand window until valid, then contract to minimise
- Track character frequencies
2. Longest Subarray with Sum ≤ K:
- Expand while sum ≤ K
- Contract when sum exceeds K
3. Fruits into Baskets: Maximum fruits in two baskets (at most 2 types).
- Window with at most 2 distinct elements
4. Permutation in String: Does s1's permutation exist as substring in s2?
- Fixed window size = len(s1)
- Compare character frequencies
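Problem 2 above can be sketched as a variable-size window (note: this assumes non-negative numbers, since otherwise contracting the window does not reliably shrink the sum):

```python
def longest_subarray_sum_at_most_k(arr, k):
    # Assumes all numbers are non-negative.
    left = window_sum = best = 0
    for right in range(len(arr)):
        window_sum += arr[right]          # expand
        while window_sum > k:             # contract when constraint is violated
            window_sum -= arr[left]
            left += 1
        best = max(best, right - left + 1)
    return best
```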
Pattern recognition:
Use sliding window when:
- Problem involves contiguous sequence (subarray/substring)
- Need to find optimal subarray (max/min length, max/min sum)
- Constraint can be maintained incrementally
- Brute force is O(n²) or worse
Two types:
- Fixed size: Window size is given or implied
- Variable size: Find optimal window size meeting constraints
Template for variable window:
left = 0
for right in range(len(data)):
    # add data[right] to window
    while window_violates_constraint():
        # remove data[left] from window
        left += 1
    # update result using current window
Why it's powerful:
- Reduces complexity by an entire dimension
- Natural and intuitive once you see the pattern
- Appears in 20%+ of array/string problems
Interview approach:
- Identify if problem involves contiguous sequences
- Determine if window size is fixed or variable
- Define what the window tracks (sum, frequencies, etc.)
- Write expand and contract conditions
Sliding Window is one of the highest-value patterns to master. It turns complex problems into elegant O(n) solutions.
45. Prefix Sum Technique
Prefix Sum preprocesses an array so that range sum queries become O(1). It's a simple but incredibly useful technique for problems involving subarrays.
Core idea: Store cumulative sums so any range sum can be calculated with subtraction.
Setup:
prefix = [0] * (n + 1)
for i in range(1, n + 1):
    prefix[i] = prefix[i-1] + arr[i-1]
Now prefix[i] = sum of first i elements.
Range sum [L, R]:
sum(L, R) = prefix[R+1] - prefix[L]
Example: arr = [3, 1, 4, 1, 5]
- prefix = [0, 3, 4, 8, 9, 14]
- sum(1, 3) = prefix[4] - prefix[1] = 9 - 3 = 6
- Verify: arr[1] + arr[2] + arr[3] = 1 + 4 + 1 = 6 ✓
Time complexity:
- Preprocessing: O(n)
- Each query: O(1)
- Total for k queries: O(n + k) instead of O(n·k)
Applications:
1. Subarray Sum Equals K:
Problem: Count subarrays with sum = K.
Approach: Use prefix sums with hash map.
- If prefix[j] - prefix[i] = K, then subarray [i, j) has sum K
- Rearrange: prefix[j] = prefix[i] + K
- For each position j, count how many positions i have prefix[i] = prefix[j] - K
count, prefixSum = 0, 0
prefixCount = {0: 1}  # empty prefix
for num in arr:
    prefixSum += num
    count += prefixCount.get(prefixSum - K, 0)
    prefixCount[prefixSum] = prefixCount.get(prefixSum, 0) + 1
2. Range Sum Query 2D:
Extend to 2D matrices for rectangle sum queries.
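A sketch of the 2D extension, using inclusion-exclusion over an (n+1)×(m+1) prefix table (function and variable names are mine):

```python
def build_prefix_2d(matrix):
    n, m = len(matrix), len(matrix[0])
    P = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Inclusion-exclusion: add the two strips, subtract the overlap
            P[i][j] = matrix[i-1][j-1] + P[i-1][j] + P[i][j-1] - P[i-1][j-1]
    return P

def rect_sum(P, r1, c1, r2, c2):
    # Sum over rows r1..r2 and columns c1..c2 (inclusive, 0-indexed)
    return P[r2+1][c2+1] - P[r1][c2+1] - P[r2+1][c1] + P[r1][c1]
```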
3. Product of Array Except Self:
Similar concept with prefix products instead of sums.
Variations:
XOR prefix: For XOR range queries.
- XOR has property: a ⊕ a = 0
- xor(L, R) = prefixXOR[R+1] ⊕ prefixXOR[L]
Difference array: Inverse of prefix sum.
- Useful for range update queries
- Add value to range [L, R] in O(1)
When to use:
- Multiple range sum queries
- Subarray sum problems
- Need to avoid recalculating sums repeatedly
Why it works: Mathematics: sum(L, R) = sum(0, R) - sum(0, L-1). Prefix sum stores sum(0, i) for all i, making the subtraction trivial.
Common mistakes:
- Off-by-one errors with indices
- Forgetting to initialise prefix[0] = 0
- Not handling negative numbers correctly in hash map approach
Prefix Sum is a fundamental preprocessing technique. It's not just about array sums—the principle of storing cumulative information applies to many problems. Master this pattern, and you'll solve subarray problems with ease.
46. Monotonic Stack
Monotonic Stack maintains elements in increasing or decreasing order, efficiently solving "next greater/smaller element" problems that would otherwise require O(n²) nested loops.
Core idea: Stack where elements are always in monotonic order (strictly increasing or decreasing). When a new element violates monotonicity, pop elements until order is restored.
Classic problem—Next Greater Element:
Problem: For each element, find the next element to the right that is greater.
Example: [2, 1, 2, 4, 3] → [4, 2, 4, -1, -1]
Naive approach: O(n²)—for each element, scan right until finding greater.
Monotonic stack approach: O(n)
result = [-1] * n
stack = []  # stores indices
for i in range(n):
    while stack and arr[stack[-1]] < arr[i]:
        idx = stack.pop()
        result[idx] = arr[i]  # arr[i] is next greater for arr[idx]
    stack.append(i)
Why it works: When we encounter arr[i], all previous elements still in the stack are smaller. arr[i] is the next greater element for all of them.
Variations:
1. Next Smaller Element: Keep the stack in increasing order instead (pop while the top is greater than the current element).
2. Previous Greater Element: Iterate right to left instead of left to right.
3. Largest Rectangle in Histogram:
Problem: Find largest rectangle in histogram bars.
Approach: For each bar, find how far left and right it can extend (bars must be ≥ current height).
- Use monotonic stack to find previous/next smaller elements
- Rectangle width = right - left - 1
- Rectangle area = height × width
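One common way to code this, using a sentinel bar of height 0 to flush the stack at the end (a sketch, not the only formulation):

```python
def largest_rectangle(heights):
    heights = heights + [0]   # sentinel: forces every bar to be popped
    stack = []                # indices of bars, heights strictly increasing
    best = 0
    for i, h in enumerate(heights):
        while stack and heights[stack[-1]] > h:
            height = heights[stack.pop()]
            # Width spans from the previous smaller bar (exclusive) to i (exclusive)
            left = stack[-1] if stack else -1
            best = max(best, height * (i - left - 1))
        stack.append(i)
    return best
```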
4. Trapping Rain Water:
Calculate water trapped between bars after rain.
- Use monotonic stack to find "containers"
- When current bar > stack top, we have a container
Other applications:
Stock Span Problem: For each day, count the consecutive preceding days with price ≤ today's price.
Sliding Window Maximum: Maximum in each window of size k.
- Use monotonic deque (double-ended queue)
- Maintain decreasing order
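A sketch of the deque approach for Sliding Window Maximum:

```python
from collections import deque

def sliding_window_max(arr, k):
    dq = deque()   # indices; corresponding values decrease from front to back
    result = []
    for i, x in enumerate(arr):
        while dq and arr[dq[-1]] <= x:   # drop smaller elements from the back
            dq.pop()
        dq.append(i)
        if dq[0] == i - k:               # front index slid out of the window
            dq.popleft()
        if i >= k - 1:
            result.append(arr[dq[0]])    # front always holds the window maximum
    return result
```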
Pattern recognition:
Use monotonic stack when:
- Need to find next/previous greater/smaller element
- Need to maintain order while processing sequence
- Histogram or bar-related problems
- See keywords: "next", "previous", "greater", "smaller"
Templates:
Next Greater (stack kept in decreasing order):
stack = []
for i in range(n):
    while stack and arr[stack[-1]] < arr[i]:
        # arr[i] is next greater for arr[stack[-1]]
        process(stack.pop(), i)
    stack.append(i)
Next Smaller (stack kept in increasing order):
stack = []
for i in range(n):
    while stack and arr[stack[-1]] > arr[i]:
        # arr[i] is next smaller for arr[stack[-1]]
        process(stack.pop(), i)
    stack.append(i)
Why it's powerful:
- Converts O(n²) to O(n)—each element pushed and popped at most once
- Elegant solution to otherwise complex problems
- Common in interviews and competitive programming
Monotonic Stack is a specialised but highly valuable technique. Once you recognise the pattern, certain problems become almost trivial.
47. Meet in the Middle
Meet in the Middle is an optimisation technique that reduces exponential complexity by splitting the problem into two halves, solving each independently, and combining results. It's like binary search applied to exponential problems.
Core idea: Instead of O(2^n), achieve O(2^(n/2)) by splitting into two subproblems of size n/2.
Classic problem—Subset Sum:
Problem: Given n numbers and target T, can any subset sum to T?
Naive approach: Check all 2^n subsets—O(2^n)
Meet in the middle: O(2^(n/2))
- Split array into two halves
- Generate all subset sums for each half
- For each sum S1 in first half, check if (T - S1) exists in second half
Example: arr = [1, 2, 3, 4], T = 5
- First half [1, 2]: sums = {0, 1, 2, 3}
- Second half [3, 4]: sums = {0, 3, 4, 7}
- For S1 = 2, check if 5-2=3 exists in second → Yes! ✓
Implementation:
from bisect import bisect_left

def generateAllSums(arr):
    sums = [0]
    for x in arr:
        sums += [s + x for s in sums]   # each element doubles the list of sums
    return sums

def binarySearch(sortedList, target):
    i = bisect_left(sortedList, target)
    return i < len(sortedList) and sortedList[i] == target

def meetInMiddle(arr, target):
    mid = len(arr) // 2
    # Generate all subset sums for each half
    leftSums = generateAllSums(arr[:mid])
    rightSums = generateAllSums(arr[mid:])
    # Sort second half for binary search
    rightSums.sort()
    # Check combinations
    for s1 in leftSums:
        if binarySearch(rightSums, target - s1):
            return True
    return False
Complexity:
- Generate sums: O(2^(n/2))
- Sort: O(2^(n/2) log(2^(n/2)))
- Search: O(2^(n/2) log(2^(n/2)))
- Total: O(2^(n/2) · n)
For n=40: 2^40 ≈ 1 trillion operations (impossible), but 2^20 ≈ 1 million (feasible).
Other applications:
1. Closest Subset Sum: Find subset sum closest to target.
- Generate and sort both halves
- Use two pointers to find closest pair
2. 4Sum: Find four numbers summing to target.
- Split into two pairs
- Generate all pair sums, find matching pairs
3. Knapsack with large capacity:
- Split items into two groups
- Generate all possible weights for each
- Combine optimally
When to use:
Use meet in the middle when:
- Problem has exponential complexity (typically 2^n)
- n is small-to-medium (typically 20 ≤ n ≤ 40)
- Problem can be split into independent subproblems
- Combining solutions is efficient
Limitations:
- Requires problem to be splittable
- Needs extra space for storing intermediate results
- Still exponential, just more feasible
Why it's powerful: Meet in the Middle transforms infeasible problems into solvable ones. It's the difference between "impossible" and "runs in seconds."
Interview tip: When you see small n (≤ 40) with exponential constraints, consider meet in the middle. It's a rare but impressive technique that shows advanced thinking.
48. Bit Manipulation Tricks
Bit manipulation leverages binary operations to solve problems efficiently. It's fast, space-efficient, and elegant—though sometimes cryptic.
Fundamental operations:
AND (&): Sets bit to 1 only if both bits are 1.
- Use: Check if bit is set, clear bits, extract rightmost bit
OR (|): Sets bit to 1 if either bit is 1.
- Use: Set bits, combine flags
XOR (^): Sets bit to 1 if bits differ.
- Use: Toggle bits, find unique elements, swap without temp
NOT (~): Flips all bits.
- Use: Get complement
Left shift (<<): Multiplies by 2^k.
- x << k = x × 2^k
Right shift (>>): Divides by 2^k.
- x >> k = x / 2^k (integer division)
Essential tricks:
1. Check if k-th bit is set:
(n & (1 << k)) != 0
2. Set k-th bit:
n | (1 << k)
3. Clear k-th bit:
n & ~(1 << k)
4. Toggle k-th bit:
n ^ (1 << k)
5. Check if power of 2:
n > 0 and (n & (n-1)) == 0
Why: Powers of 2 have exactly one bit set. Subtracting 1 flips that bit and every bit below it, so n & (n-1) is 0 only for powers of 2.
6. Count set bits (Brian Kernighan's algorithm):
count = 0
while n:
    n &= (n - 1)  # removes the rightmost set bit
    count += 1
7. Get rightmost set bit:
n & -n
Why: In two's complement, -n flips the bits and adds 1, so n & -n isolates the rightmost set bit.
8. XOR properties:
- a ^ a = 0
- a ^ 0 = a
- XOR is commutative and associative
Classic problems:
Single Number: All elements appear twice except one. Find it.
result = 0
for num in arr:
    result ^= num
return result  # all pairs cancel out
Missing Number: Array contains 0 to n except one. Find missing.
xor = 0
for i in range(n + 1):
    xor ^= i
for num in arr:
    xor ^= num
return xor
Power Set: Generate all subsets.
for mask in range(1 << n):  # 2^n subsets
    subset = []
    for i in range(n):
        if mask & (1 << i):
            subset.append(arr[i])
Swap without temp variable:
a ^= b
b ^= a
a ^= b
Maximum XOR of two numbers: Use trie of binary representations.
When to use:
Use bit manipulation for:
- Problems mentioning XOR, AND, OR
- Problems with 2^n states (subsets)
- Space optimisation (use bits as flags)
- Fast arithmetic operations
- Unique element problems
Advantages:
- Fast—operations are O(1)
- Space-efficient—pack multiple booleans into integers
- Sometimes the only feasible approach
Disadvantages:
- Can be hard to read/understand
- Easy to introduce subtle bugs
- Language-dependent behaviour (signed vs unsigned)
Interview tips:
- Draw binary representations for small examples
- Know the fundamental operations cold
- XOR is your friend for finding unique elements
- Powers of 2 appear frequently
Bit manipulation is like a secret weapon. It's not needed often, but when it is, nothing else works as elegantly.
49. Advanced Problem-Solving Strategies
Beyond specific patterns, developing meta-strategies for approaching problems is crucial for consistent success.
Strategy 1: Problem decomposition
Break complex problems into smaller, manageable pieces.
Approach:
- Identify core operations needed
- Solve each operation independently
- Combine solutions
Example: Design a data structure supporting insert, delete, and getRandom in O(1).
- Insert/delete: Hash map
- getRandom: Array for random access
- Solution: Combine both—array + hash map of value → index
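One possible sketch of that combined structure (method names follow the usual LeetCode formulation; this is an illustration, not the only design):

```python
import random

class RandomizedSet:
    def __init__(self):
        self.items = []      # list gives O(1) random access
        self.index = {}      # value -> position in items

    def insert(self, val):
        if val in self.index:
            return False
        self.index[val] = len(self.items)
        self.items.append(val)
        return True

    def remove(self, val):
        if val not in self.index:
            return False
        # Swap the last element into the removed slot, then pop the tail
        pos, last = self.index[val], self.items[-1]
        self.items[pos] = last
        self.index[last] = pos
        self.items.pop()
        del self.index[val]
        return True

    def getRandom(self):
        return random.choice(self.items)
```

The swap-with-last trick in `remove` is what keeps deletion O(1): deleting from the middle of a list would be O(n).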
Strategy 2: Invariant thinking
Identify properties that remain true throughout algorithm execution.
Example: Binary search maintains: target, if it exists, is always in [left, right].
Example: Quick select maintains: k-th element is in the current partition being processed.
Thinking in invariants helps verify correctness and avoid subtle bugs.
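For instance, a standard binary search with its invariant written out as comments:

```python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    # Invariant: if target is in arr, its index lies in [left, right]
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1    # target cannot be in [left, mid]
        else:
            right = mid - 1   # target cannot be in [mid, right]
    return -1                 # interval empty: invariant says target is absent
```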
Strategy 3: Problem transformation
Convert problems into equivalent but easier forms.
Example: Longest palindromic substring → Manacher's algorithm (transform string to avoid even/odd cases).
Example: Interval scheduling → Sort by end time, then greedy.
Strategy 4: Symmetry exploitation
Use symmetry to reduce work.
Example: Palindrome checking—only check half the string.
Example: 2D matrix problems—sometimes can optimise by processing symmetrically.
Strategy 5: Constraint analysis
Extract insights from problem constraints.
Small n (n ≤ 20): Exponential solutions like backtracking or meet-in-middle are acceptable.
Large n (n ≤ 10^6): Need O(n) or O(n log n).
Multiple queries: Preprocessing is worthwhile.
Online vs offline: Can you process all queries together or must answer immediately?
Strategy 6: Work backwards
Some problems are easier to solve in reverse.
Example: Finding shortest path from A to all nodes? Run Dijkstra from A. Finding shortest path from all nodes to A? Run Dijkstra from A on reversed graph.
Example: Dynamic programming often works backwards from goal state.
Strategy 7: Relaxation and tightening
Solve a simpler version first, then add constraints.
Example: Start with unsorted version, then optimise for sorted input.
Example: Solve for positive integers, then extend to handle negatives.
Strategy 8: Pattern matching across domains
Recognise when problems are isomorphic to known problems.
Example: Course prerequisites → Topological sort (graph problem).
Example: Maximum profit with cooldown → State machine DP.
Example: Find duplicates in array → Cycle detection (treat array as linked list).
Interview problem-solving framework:
- Understand: Restate problem, clarify constraints, try examples
- Match: Does this fit a known pattern? (two pointers, DP, graph, etc.)
- Plan: Outline approach before coding
- Implement: Write clean code with clear variable names
- Test: Walk through examples, consider edge cases
- Optimise: Can you improve time/space? Mention trade-offs
Meta-learning:
The more problems you solve, the stronger your pattern recognition becomes. Keep a "pattern journal" noting:
- Problem type
- Key insight that unlocked solution
- Mistakes made
- Similar problems
Over time, you'll develop intuition that guides you to solutions faster.
50. Pattern Recognition in Problem Solving
The ability to recognise patterns is what separates good problem-solvers from great ones. Patterns let you quickly map unfamiliar problems to known solutions.
How to build pattern recognition:
1. Categorise problems as you solve them
Maintain mental (or actual) categories:
- "This is a graph shortest path problem"
- "This uses sliding window"
- "This needs dynamic programming"
2. Identify trigger keywords
Certain phrases signal specific approaches:
| Keywords | Likely Pattern |
| --- | --- |
| "Maximum/minimum", "optimal" | Greedy or DP |
| "Count number of ways" | DP or combinatorics |
| "All possible" | Backtracking |
| "Shortest path" | BFS (unweighted) or Dijkstra |
| "Subarray/substring" | Sliding window or prefix sum |
| "Two elements sum to" | Two pointers or hash map |
| "In-place", "constant space" | Swapping or bit manipulation |
| "Next greater/smaller" | Monotonic stack |
| "Cycle detection" | Floyd's algorithm or graph DFS |
3. Study problem structure
Linear structure (array/string):
- Two pointers
- Sliding window
- Prefix sum
- Binary search (if sorted)
Hierarchical (tree):
- DFS/BFS traversal
- Tree DP
- Level-order processing
Relational (graph):
- Graph traversal (BFS/DFS)
- Shortest path
- Topological sort
- Union-Find
Recursive subproblems:
- DP (overlapping subproblems)
- Divide and conquer (independent subproblems)
4. Recognise composite patterns
Complex problems often combine patterns.
Example: "Find longest substring with at most k distinct characters"
- Sliding window (for substring)
- Hash map (to track character counts)
- Two pointers (window boundaries)
5. Anti-patterns (what NOT to do)
Don't immediately jump to complex solutions. Try:
- Brute force first (clarifies problem)
- Identify bottleneck
- Apply pattern to optimise
Don't memorise solutions blindly. Understand:
- Why the pattern works
- When to apply it
- How to adapt it
6. Practice deliberately
Solve problems by category:
- Week 1: Two pointers and sliding window
- Week 2: Binary search variants
- Week 3: Graph problems
- Week 4: Dynamic programming
This builds pattern strength faster than random practice.
7. Review and reflect
After solving a problem:
- What pattern did it use?
- What was the key insight?
- What similar problems have I seen?
- Could I explain this to someone else?
Common pattern combinations:
Binary search + DP: Search over possible answers, use DP to check feasibility.
Graph + DP: DP on DAGs, tree DP.
Greedy + sorting: Many greedy algorithms require sorting first.
Sliding window + hash map: Variable window with character/element tracking.