
Data Structures & Algorithms

Sorting, searching, trees, linked lists, matrices — visual explanations with Python code and complexity analysis.

7 sorting algorithms · 4 search methods · 5 tree types · 6 linked-list operations · 5 matrix patterns

Sorting Algorithms

From simple O(n²) to efficient O(n log n) — know when to use each

| Algorithm | Best | Average | Worst | Space | Stable? | When to Use |
|---|---|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | Yes | Teaching only |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | No | Minimal swaps needed |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | Yes | Small/nearly sorted arrays |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | Guaranteed O(n log n), stability needed |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No | General purpose, best average performance |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | No | In-place + guaranteed O(n log n) |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) | O(k) | Yes | Small integer range k |
Bubble Sort
Easy · Stable · In-place
Repeatedly swap adjacent elements if they're in the wrong order. Largest "bubbles" to the end.
Pass 1: [5, 3, 8, 1, 2]
  5>3 swap → [3, 5, 8, 1, 2]
  5<8 ok   → [3, 5, 8, 1, 2]
  8>1 swap → [3, 5, 1, 8, 2]
  8>2 swap → [3, 5, 1, 2, 8]   ← 8 bubbled to end
Pass 2: [3, 5, 1, 2 | 8]
  3<5 ok   → [3, 5, 1, 2 | 8]
  5>1 swap → [3, 1, 5, 2 | 8]
  5>2 swap → [3, 1, 2, 5 | 8]
Pass 3: [3, 1, 2 | 5, 8] → [1, 2, 3 | 5, 8]  ✓ Sorted!
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:     # optimization: stop if no swaps
            break
    return arr
Best: O(n) · Avg/Worst: O(n²) · Space: O(1)
Key: The swapped flag optimization gives O(n) best case for already-sorted arrays. Never use in production — teaching only.
Selection Sort
Easy · Unstable · In-place
Find the minimum element, put it in the correct position. Repeat for remaining.
Array: [64, 25, 12, 22, 11]
Step 1: min(64,25,12,22,11) = 11 → swap with 64 → [11, 25, 12, 22, 64]
Step 2: min(25,12,22,64)    = 12 → swap with 25 → [11, 12, 25, 22, 64]
Step 3: min(25,22,64)       = 22 → swap with 25 → [11, 12, 22, 25, 64]
Step 4: min(25,64)          = 25 → already in place → [11, 12, 22, 25, 64]  ✓ Sorted!
def selection_sort(arr):
    n = len(arr)
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
All cases: O(n²) · Space: O(1) · Swaps: O(n)
Key: Always O(n²) — no best-case optimization. But only O(n) swaps, useful when writes are expensive.
Insertion Sort
Easy · Stable · In-place
Build sorted array one element at a time. Pick next element, insert into correct position.
Array: [5, 2, 4, 6, 1, 3]
Step 1: key=2 → [2, 5, 4, 6, 1, 3]   sorted: [2, 5]
Step 2: key=4 → [2, 4, 5, 6, 1, 3]   sorted: [2, 4, 5]
Step 3: key=6 → already in place → [2, 4, 5, 6, 1, 3]   sorted: [2, 4, 5, 6]
Step 4: key=1 → shift all right → [1, 2, 4, 5, 6, 3]
Step 5: key=3 → shift 4, 5, 6 right → [1, 2, 3, 4, 5, 6]  ✓ Sorted!
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]    # shift right
            j -= 1
        arr[j + 1] = key           # insert
    return arr
Best: O(n) · Avg/Worst: O(n²) · Space: O(1)
Key: Best for small arrays (fewer than ~20 elements) and nearly sorted data. Python's sorted() uses Timsort, a hybrid of merge sort and insertion sort.
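The hybrid idea is easy to sketch: a merge sort that hands small slices to insertion sort, which has low constant factors on tiny inputs. This is a simplified illustration of the principle, not Timsort itself (the cutoff of 16 is an arbitrary choice):

```python
def hybrid_sort(arr, cutoff=16):
    # Small slices: insertion sort a copy (cheap constants on tiny inputs)
    if len(arr) <= cutoff:
        result = list(arr)
        for i in range(1, len(result)):
            key = result[i]
            j = i - 1
            while j >= 0 and result[j] > key:
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = key
        return result
    # Large slices: standard merge sort recursion
    mid = len(arr) // 2
    left = hybrid_sort(arr[:mid], cutoff)
    right = hybrid_sort(arr[mid:], cutoff)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```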
Merge Sort
Medium · Stable · Divide & Conquer
Divide array in half recursively, then merge sorted halves back together.
Split:            [38, 27, 43, 3, 9, 82, 10]
                  /                        \
        [38, 27, 43, 3]                [9, 82, 10]
         /          \                  /         \
    [38, 27]      [43, 3]         [9, 82]       [10]
     /    \        /    \          /    \         |
  [38]   [27]   [43]   [3]       [9]   [82]     [10]
     \    /        \    /          \    /         |
Merge: [27, 38]   [3, 43]         [9, 82]       [10]
         \          /                  \         /
        [3, 27, 38, 43]                [9, 10, 82]
                  \                        /
                  [3, 9, 10, 27, 38, 43, 82]  ✓
def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
All cases: O(n log n) · Space: O(n)
Key: Guaranteed O(n log n) — no worst case degradation. Stable. Extra O(n) space. Used for sorting linked lists (no random access needed).
Interview tip: If asked to "sort a linked list" → merge sort is the answer. Splitting and splicing nodes needs no O(n) auxiliary array (unlike array merge sort); only the O(log n) recursion stack remains.
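That tip can be sketched end to end: split with fast/slow pointers, recurse on each half, then splice the sorted halves together with a dummy node. A minimal sketch (ListNode matches the definition in the Linked Lists section):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def sort_list(head):
    if not head or not head.next:
        return head
    # Split: fast/slow pointers find the middle, then cut the list
    slow, fast = head, head.next
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    mid = slow.next
    slow.next = None
    left, right = sort_list(head), sort_list(mid)
    # Merge: splice existing nodes, no new list allocated
    dummy = tail = ListNode(0)
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next
```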
Quick Sort
Medium · Unstable · Divide & Conquer
Pick a pivot, partition array so smaller elements go left, larger go right. Recurse on each half.
Array: [10, 7, 8, 9, 1, 5]   pivot = 5
Partition: elements ≤ 5 go left, > 5 go right
  [1, 5 | 8, 9, 10, 7]

Recurse left:  [1, 5] → already sorted
Recurse right: [8, 9, 10, 7] → pivot = 7 → [7 | 9, 10, 8] → recurse → [7, 8, 9, 10]

Result: [1, 5, 7, 8, 9, 10]  ✓
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)

# In-place version (interview standard)
def quick_sort_inplace(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quick_sort_inplace(arr, low, pi - 1)
        quick_sort_inplace(arr, pi + 1, high)

def partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1
Best/Avg: O(n log n) · Worst: O(n²) · Space: O(log n)
Key: Fastest in practice due to cache locality. Worst case O(n²) when pivot is always min/max — fix with random pivot.
Interview tip: Know both the clean list comprehension version AND the in-place partition version.
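The random-pivot fix from the key note is a two-line change to the in-place version: swap a randomly chosen element into the pivot slot before partitioning. A sketch:

```python
import random

def quick_sort_random(arr, low=0, high=None):
    # Randomized pivot avoids the O(n^2) worst case on
    # already-sorted or reverse-sorted input
    if high is None:
        high = len(arr) - 1
    if low < high:
        # Swap a random element into the pivot slot (arr[high])
        r = random.randint(low, high)
        arr[r], arr[high] = arr[high], arr[r]
        # Standard Lomuto partition
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        quick_sort_random(arr, low, i)        # left of pivot (index i+1)
        quick_sort_random(arr, i + 2, high)   # right of pivot
    return arr
```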
Heap Sort
Medium · Unstable · In-place
Build a max-heap, then repeatedly extract the maximum to build sorted array from the end.
Array [4, 10, 3, 5, 1, 8] as a tree:     After build-max-heap:
        4                                       10
      /   \                                    /  \
    10     3                                  5    8
    / \   /                                  / \  /
   5   1 8                                  4  1 3

Extract max: swap the root with the last element, shrink the heap
by one, then heapify the root. Repeat until sorted.

Parent(i) = (i-1)//2    Left(i) = 2*i + 1    Right(i) = 2*i + 2
def heap_sort(arr):
    n = len(arr)
    # Build max-heap (start from last non-leaf)
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Extract max one by one
    for i in range(n - 1, 0, -1):
        arr[0], arr[i] = arr[i], arr[0]  # swap max to end
        heapify(arr, i, 0)
    return arr

def heapify(arr, n, i):
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)
All cases: O(n log n) · Space: O(1)
Key: Guaranteed O(n log n) AND O(1) space. Use Python's heapq module for min-heap. For max-heap, negate values.
Python shortcut: heapq.nlargest(k, arr) and heapq.nsmallest(k, arr) for Top-K problems.
Counting Sort
Easy · Stable · Non-comparison
Count occurrences of each value, then reconstruct sorted array. Only for integers with known range.
Array: [4, 2, 2, 8, 3, 3, 1]   range: 0–8

Count:  idx: 0 1 2 3 4 5 6 7 8
        val: 0 1 2 2 1 0 0 0 1

Output: [1, 2, 2, 3, 3, 4, 8]  ✓
def counting_sort(arr):
    if not arr:
        return arr
    max_val = max(arr)
    count = [0] * (max_val + 1)
    for num in arr:
        count[num] += 1
    result = []
    for i, c in enumerate(count):
        result.extend([i] * c)
    return result
Time: O(n + k) · Space: O(k)
Key: Beats O(n log n) when k (range) is small. Not comparison-based. Great for ages, scores, small enums.
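The version above rebuilds the output from counts, which is fine for bare integers but discards any payload attached to them. The stability the comparison table claims comes from the classic prefix-sum form: prefix sums assign each key a block of output slots, and a reverse pass keeps equal keys in their original order. A sketch (the key parameter is an illustrative addition; keys must be non-negative ints):

```python
def counting_sort_stable(items, key=lambda x: x):
    # Stable counting sort via prefix sums; works on records,
    # not just bare integers
    if not items:
        return []
    k = max(key(x) for x in items)
    count = [0] * (k + 1)
    for x in items:
        count[key(x)] += 1
    for i in range(1, k + 1):
        count[i] += count[i - 1]   # count[v] = # of items with key <= v
    out = [None] * len(items)
    for x in reversed(items):      # reverse pass preserves ties' order
        count[key(x)] -= 1
        out[count[key(x)]] = x
    return out
```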

Search Algorithms

Finding elements efficiently — from linear to binary and beyond

Linear Search
Easy · O(n)
Check every element one by one. Works on unsorted arrays.
def linear_search(arr, target):
    for i, val in enumerate(arr):
        if val == target:
            return i
    return -1
Best: O(1) · Worst: O(n) · Space: O(1)
Key: Only option for unsorted data. In Python, prefer the built-in membership test: target in arr.
Binary Search
Easy · O(log n)
Sorted array required. Compare middle element, eliminate half each time.
Find 23 in [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
Step 1: left=0, right=9, mid=4 → arr[4]=16 < 23 → left=5
Step 2: left=5, right=9, mid=7 → arr[7]=56 > 23 → right=6
Step 3: left=5, right=6, mid=5 → arr[5]=23      → Found at index 5!
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

# Python built-in
import bisect
idx = bisect.bisect_left(arr, target)  # insertion point
Time: O(log n) · Space: O(1)
Key: Use left <= right (not <). Use // for integer division. Array MUST be sorted.
Binary Search Variations
Medium · Must Know
First/last occurrence, search in rotated array, find peak element.
Find First Occurrence
def first_occurrence(arr, target):
    left, right = 0, len(arr) - 1
    result = -1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            result = mid       # save and keep searching left
            right = mid - 1
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return result
Search in Rotated Sorted Array
def search_rotated(nums, target):
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        if nums[left] <= nums[mid]:     # left half sorted
            if nums[left] <= target < nums[mid]:
                right = mid - 1
            else:
                left = mid + 1
        else:                            # right half sorted
            if nums[mid] < target <= nums[right]:
                left = mid + 1
            else:
                right = mid - 1
    return -1
Find Peak Element
def find_peak(nums):
    left, right = 0, len(nums) - 1
    while left < right:
        mid = (left + right) // 2
        if nums[mid] < nums[mid + 1]:
            left = mid + 1       # peak is to the right
        else:
            right = mid           # peak is mid or left
    return left
Key: All O(log n). The trick is identifying which half is sorted/valid and narrowing accordingly.
DFS vs BFS (Graph/Tree Search)
Medium · Traversal
DFS goes deep (stack/recursion), BFS goes wide (queue). Choose based on problem.
        1          DFS order: 1, 2, 4, 5, 3, 6, 7   (go deep first)
       / \
      2   3        BFS order: 1, 2, 3, 4, 5, 6, 7   (level by level)
     / \ / \
    4  5 6  7
DFS (Stack / Recursion)
# Recursive
def dfs(node, visited):
    if not node or node in visited:
        return
    visited.add(node)
    print(node.val)
    dfs(node.left, visited)
    dfs(node.right, visited)

# Iterative (stack)
def dfs_iter(root):
    if not root:                 # guard against an empty tree
        return
    stack = [root]
    while stack:
        node = stack.pop()
        print(node.val)
        if node.right:
            stack.append(node.right)
        if node.left:
            stack.append(node.left)
BFS (Queue)
from collections import deque

def bfs(root):
    if not root:                 # guard against an empty tree
        return
    queue = deque([root])
    while queue:
        node = queue.popleft()
        print(node.val)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)

# BFS with levels
def bfs_levels(root):
    if not root:                 # guard against an empty tree
        return
    queue = deque([root])
    while queue:
        level_size = len(queue)
        for _ in range(level_size):
            node = queue.popleft()
            # process node
            if node.left: queue.append(node.left)
            if node.right: queue.append(node.right)
| Feature | DFS | BFS |
|---|---|---|
| Data structure | Stack / recursion | Queue (deque) |
| Space | O(h) — height | O(w) — width |
| Use for | Path finding, cycle detection, topological sort | Shortest path, level-order, nearest neighbor |
| Memory | Better for deep, narrow graphs | Better for shallow, wide graphs |
Key: BFS finds shortest path in unweighted graphs. DFS is simpler to implement recursively. Use deque for BFS (O(1) popleft).
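The shortest-path claim in the key note can be made concrete: because BFS visits nodes in order of distance, the first time it reaches the goal is along a fewest-edges path. A sketch on an adjacency-dict graph (the graph layout in the test is made up for the example):

```python
from collections import deque

def shortest_path_len(graph, start, goal):
    # BFS on an unweighted graph: nodes are dequeued in order of
    # distance, so reaching `goal` first means fewest edges
    if start == goal:
        return 0
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr == goal:
                return dist + 1
            if nbr not in visited:
                visited.add(nbr)
                queue.append((nbr, dist + 1))
    return -1   # goal unreachable
```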

Linked Lists

Node-based data structures — master the pointer manipulation patterns

Node Definition & Traversal
Easy · Foundation
Singly and doubly linked list node classes, creation, and traversal.
Singly linked:  [1] → [2] → [3] → [4] → None
                 ↑
               head

Doubly linked:  None ← [1] ⇄ [2] ⇄ [3] ⇄ [4] → None
                        ↑                 ↑
                      head              tail
Singly Linked List
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

# Create: 1 → 2 → 3
head = ListNode(1, ListNode(2, ListNode(3)))

# Traverse
current = head
while current:
    print(current.val)
    current = current.next
Doubly Linked List
class DListNode:
    def __init__(self, val=0):
        self.val = val
        self.prev = None
        self.next = None

# Insert after a node
def insert_after(node, val):
    new = DListNode(val)
    new.next = node.next
    new.prev = node
    if node.next:
        node.next.prev = new
    node.next = new
| Operation | Array | Linked List |
|---|---|---|
| Access by index | O(1) | O(n) |
| Insert at head | O(n) | O(1) |
| Insert at tail | O(1)* | O(1)** |
| Delete by value | O(n) | O(n) |
| Search | O(n) | O(n) |
Key: * amortized with dynamic array. ** O(1) only if you maintain a tail pointer.
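The tail-pointer trick behind the ** note, sketched as a tiny wrapper class (the class and method names here are illustrative, not from the original):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

class LinkedList:
    # Keeping a tail pointer makes append O(1) instead of the
    # O(n) walk-to-the-end a bare head pointer forces
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, val):
        node = ListNode(val)
        if self.tail:
            self.tail.next = node   # O(1): attach at known tail
        else:
            self.head = node        # first node: head and tail coincide
        self.tail = node

    def to_list(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.val)
            cur = cur.next
        return out
```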
Reverse Linked List
Easy · 3-Pointer
The #1 most asked linked list question. Know both iterative and recursive.
Before: [1] → [2] → [3] → [4] → None
After:  None ← [1] ← [2] ← [3] ← [4]
                                  ↑ new head

Step-by-step (iterative):
  prev=None, curr=1 → 1.next = None, prev=1, curr=2
  prev=1,    curr=2 → 2.next = 1,    prev=2, curr=3
  prev=2,    curr=3 → 3.next = 2,    prev=3, curr=4
  prev=3,    curr=4 → 4.next = 3,    prev=4, curr=None → return prev (4)
Iterative (preferred)
def reverse(head):
    prev = None
    curr = head
    while curr:
        nxt = curr.next
        curr.next = prev
        prev = curr
        curr = nxt
    return prev
Recursive
def reverse_rec(head):
    if not head or not head.next:
        return head
    new_head = reverse_rec(head.next)
    head.next.next = head
    head.next = None
    return new_head
Key: Save curr.next BEFORE overwriting it. Return prev (new head). Iterative is O(1) space.
Detect Cycle (Floyd's Algorithm)
Medium · Fast & Slow Pointers
Two pointers — slow moves 1 step, fast moves 2 steps. If they meet, there's a cycle.
[1] → [2] → [3] → [4]
             ↑      ↓
            [7] ← [6] ← [5]

slow: 1 → 2 → 3 → 4 → 5 → 6
fast: 1 → 3 → 5 → 7 → 4 → 6   ← they meet at 6!
def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            return True
    return False

# Find WHERE the cycle starts
def find_cycle_start(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            # Reset slow to head, move both 1 step
            slow = head
            while slow != fast:
                slow = slow.next
                fast = fast.next
            return slow  # cycle start
    return None
Time: O(n) · Space: O(1)
Key: while fast and fast.next — check both to avoid null pointer. To find cycle start: reset slow to head after meeting.
Find Middle Node
Easy · Fast & Slow
Slow pointer moves 1 step, fast moves 2. When fast reaches end, slow is at middle.
def find_middle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow   # middle node
Key: Same pattern as cycle detection. Used in merge sort for linked lists to split the list in half.
Merge Two Sorted Lists
Easy · Dummy Node
Use a dummy node to simplify head handling. Compare and attach smaller node.
def merge(l1, l2):
    dummy = ListNode(0)
    curr = dummy
    while l1 and l2:
        if l1.val <= l2.val:
            curr.next = l1
            l1 = l1.next
        else:
            curr.next = l2
            l2 = l2.next
        curr = curr.next
    curr.next = l1 or l2   # attach remaining
    return dummy.next
Key: Dummy node avoids special-casing the head. l1 or l2 picks whichever is not None.
Remove Nth Node from End
Medium · Two Pointers
Move fast pointer n steps ahead, then move both. When fast hits end, slow is at the target.
Remove 2nd from end (n=2):  dummy → [1] → [2] → [3] → [4] → [5]
1. Advance fast by n+1 = 3 steps:  slow→dummy, fast→[3]
2. Move both until fast is None:   slow→[3], fast→None
3. Delete slow.next ([4]):  [1] → [2] → [3] → [5]
def remove_nth_from_end(head, n):
    dummy = ListNode(0)
    dummy.next = head
    fast = slow = dummy
    # Advance fast by n+1 steps
    for _ in range(n + 1):
        fast = fast.next
    # Move both until fast reaches end
    while fast:
        slow = slow.next
        fast = fast.next
    # Delete the node
    slow.next = slow.next.next
    return dummy.next
Key: Use dummy node to handle edge case where head is removed. Gap of n between pointers.

Trees

Binary trees, BST, traversals, and common patterns

Binary Tree Basics
Easy · Foundation
Node definition, terminology, and properties.
        1        ← root (depth 0)
       / \
      2   3      ← depth 1
     / \   \
    4   5   6    ← depth 2 (leaves)

Height = 2 (longest root-to-leaf path)
Nodes at depth d: up to 2^d
Total nodes in a perfect tree of height h: 2^(h+1) - 1
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

# Height of tree
def height(root):
    if not root:
        return -1
    return 1 + max(height(root.left), height(root.right))

# Count nodes
def count(root):
    if not root:
        return 0
    return 1 + count(root.left) + count(root.right)
Key: Most tree problems use recursion. Base case is always if not root. Think: "What does this node need from its children?"
Tree Traversals (4 Types)
Medium · Must Know All 4
Inorder (L-Root-R), Preorder (Root-L-R), Postorder (L-R-Root), Level-order (BFS).
        1            Inorder   (L-Root-R): 4, 2, 5, 1, 6, 3, 7
       / \
      2   3          Preorder  (Root-L-R): 1, 2, 4, 5, 3, 6, 7
     / \ / \
    4  5 6  7        Postorder (L-R-Root): 4, 5, 2, 6, 7, 3, 1

                     Level-order (BFS):    1, 2, 3, 4, 5, 6, 7
# Inorder: Left → Root → Right (gives SORTED order for BST!)
def inorder(root):
    if not root: return []
    return inorder(root.left) + [root.val] + inorder(root.right)

# Preorder: Root → Left → Right (used to serialize/copy tree)
def preorder(root):
    if not root: return []
    return [root.val] + preorder(root.left) + preorder(root.right)

# Postorder: Left → Right → Root (used for deletion, eval expression)
def postorder(root):
    if not root: return []
    return postorder(root.left) + postorder(root.right) + [root.val]

# Level-order (BFS): level by level
from collections import deque
def level_order(root):
    if not root: return []
    result, queue = [], deque([root])
    while queue:
        level = []
        for _ in range(len(queue)):
            node = queue.popleft()
            level.append(node.val)
            if node.left: queue.append(node.left)
            if node.right: queue.append(node.right)
        result.append(level)
    return result
Mnemonic: Inorder = In sorted order (for BST). Preorder = root comes first. Postorder = root comes last.
Binary Search Tree (BST)
Medium · BST Property
Left subtree < root < right subtree. Search, insert, validate.
BST property: all left < root < all right

Valid BST:            Invalid BST:
       8                    8
      / \                  / \
     3   10               3   10
    / \    \             / \    \
   1   6    14          1   6    14
      / \                  / \
     4   7                4   12   ← 12 > 8 in the left subtree: invalid!
# Search in BST - O(log n) average
def search(root, target):
    if not root or root.val == target:
        return root
    if target < root.val:
        return search(root.left, target)
    return search(root.right, target)

# Insert in BST
def insert(root, val):
    if not root:
        return TreeNode(val)
    if val < root.val:
        root.left = insert(root.left, val)
    else:
        root.right = insert(root.right, val)
    return root

# Validate BST (tricky! must pass min/max bounds)
def is_valid_bst(root, lo=float('-inf'), hi=float('inf')):
    if not root:
        return True
    if root.val <= lo or root.val >= hi:
        return False
    return (is_valid_bst(root.left, lo, root.val) and
            is_valid_bst(root.right, root.val, hi))
Search/Insert: O(log n) average · Worst (skewed tree): O(n)
Key: Validate BST needs min/max bounds, not just comparing parent-child. Inorder traversal of BST gives sorted order.
Classic Tree Problems
Medium · Top Interview Questions
Max depth, symmetric tree, invert tree, LCA, path sum — all follow the same recursive pattern.
# Max Depth
def max_depth(root):
    if not root: return 0
    return 1 + max(max_depth(root.left), max_depth(root.right))

# Invert Tree (mirror)
def invert(root):
    if not root: return None
    root.left, root.right = invert(root.right), invert(root.left)
    return root

# Is Symmetric
def is_symmetric(root):
    def mirror(l, r):
        if not l and not r: return True
        if not l or not r: return False
        return l.val == r.val and mirror(l.left, r.right) and mirror(l.right, r.left)
    return mirror(root, root)

# Lowest Common Ancestor
def lca(root, p, q):
    if not root or root == p or root == q:
        return root
    left = lca(root.left, p, q)
    right = lca(root.right, p, q)
    if left and right: return root   # p and q on different sides
    return left or right

# Path Sum (root to leaf == target)
def has_path_sum(root, target):
    if not root: return False
    if not root.left and not root.right:  # leaf
        return root.val == target
    return (has_path_sum(root.left, target - root.val) or
            has_path_sum(root.right, target - root.val))
Key: Tree recursion pattern: (1) base case if not root, (2) process current node, (3) recurse on children, (4) combine results.
Heap / Priority Queue
Medium · Top-K Pattern
Complete binary tree with heap property. Python's heapq is a min-heap.
Min-heap:             Max-heap:
      1                    9
     / \                  / \
    3   5                7   8
   / \                  / \
  7   8                3   5

Parent ≤ children     Parent ≥ children
import heapq

# Min-Heap operations
h = []
heapq.heappush(h, 5)       # push
heapq.heappush(h, 3)
heapq.heappush(h, 7)
smallest = heapq.heappop(h) # pop smallest → 3
peek = h[0]                 # peek without removing → 5

# Max-Heap trick: negate values
heapq.heappush(h, -val)     # push negative
max_val = -heapq.heappop(h) # pop and negate

# Top K Largest — O(n log k)
def top_k_largest(nums, k):
    return heapq.nlargest(k, nums)

# Kth Largest Element
def kth_largest(nums, k):
    min_heap = nums[:k]
    heapq.heapify(min_heap)    # O(k)
    for num in nums[k:]:
        if num > min_heap[0]:
            heapq.heapreplace(min_heap, num)  # pop+push
    return min_heap[0]
Push/Pop: O(log n) · Peek: O(1) · Heapify: O(n)
Key: Use min-heap of size k for "Top K largest". Use max-heap (negated) for "Top K smallest". heapq.heapify() is O(n), not O(n log n).

Matrices (2D Arrays)

Grid traversal, rotation, spiral, and search patterns

Matrix Basics & Traversal
Easy · Foundation
Create, access, traverse, and get neighbors in a 2D grid.
Matrix[row][col]:                 4-way directions from (r, c):
        col→  0   1   2                    (-1, 0) up
  row  ┌───┬───┬───┐
   0   │ 1 │ 2 │ 3 │         (0, -1) ←   (r, c)   → (0, +1)
       ├───┼───┼───┤
   1   │ 4 │ 5 │ 6 │                     (+1, 0) down
       ├───┼───┼───┤
   2   │ 7 │ 8 │ 9 │         8-way adds the diagonals:
       └───┴───┴───┘         (-1,-1) (-1,+1) (+1,-1) (+1,+1)
# Create m×n matrix
rows, cols = 3, 4
matrix = [[0] * cols for _ in range(rows)]
# WRONG: [[0]*cols]*rows  ← all rows share same reference!

# Get dimensions
rows = len(matrix)
cols = len(matrix[0])

# 4-directional neighbors
directions = [(-1,0), (1,0), (0,-1), (0,1)]  # up,down,left,right
for dr, dc in directions:
    nr, nc = r + dr, c + dc
    if 0 <= nr < rows and 0 <= nc < cols:
        pass    # valid neighbor at (nr, nc) — process it here

# Traverse all cells
for r in range(rows):
    for c in range(cols):
        val = matrix[r][c]
Gotcha: [[0]*n]*m creates m references to the SAME list. Always use list comprehension for 2D arrays.
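The gotcha is easy to demonstrate:

```python
# Shared-reference pitfall: every "row" is the SAME list object
bad = [[0] * 3] * 2
bad[0][0] = 1          # mutates what looks like both rows

# Correct: list comprehension builds an independent list per row
good = [[0] * 3 for _ in range(2)]
good[0][0] = 1         # mutates only the first row
```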
Grid DFS / BFS (Number of Islands)
Medium · Flood Fill
Connected component counting on a grid. DFS to "sink" visited cells.
Grid:          After DFS from (0,0):
1 1 0 0 0      0 0 0 0 0   ← island 1 sunk
1 1 0 0 0      0 0 0 0 0
0 0 1 0 0      0 0 1 0 0   ← island 2 found next
0 0 0 1 1      0 0 0 1 1   ← island 3 found last

Answer: 3 islands
def num_islands(grid):
    if not grid: return 0
    rows, cols = len(grid), len(grid[0])
    count = 0

    def dfs(r, c):
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != '1':
            return
        grid[r][c] = '0'       # mark visited
        dfs(r-1, c)            # up
        dfs(r+1, c)            # down
        dfs(r, c-1)            # left
        dfs(r, c+1)            # right

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                count += 1
                dfs(r, c)
    return count
Key: Mark visited by modifying grid (no extra space). Bounds check FIRST in DFS. Same pattern for flood fill, max area, perimeter.
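The max-area variant mentioned in the key note is the same flood fill, with the DFS returning the size of the island it sinks. A sketch assuming an int grid (1 = land, 0 = water), unlike the string grid above; note it mutates the input:

```python
def max_area_island(grid):
    # Flood-fill DFS where each call returns its island's cell count
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])

    def dfs(r, c):
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != 1:
            return 0
        grid[r][c] = 0   # sink the cell so it is counted exactly once
        return 1 + dfs(r-1, c) + dfs(r+1, c) + dfs(r, c-1) + dfs(r, c+1)

    return max((dfs(r, c) for r in range(rows) for c in range(cols)),
               default=0)
```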
Rotate Matrix 90°
Medium · Transpose + Reverse
Clockwise: transpose then reverse each row. Counter-clockwise: reverse then transpose.
Original:      Transpose:     Reverse rows:
1 2 3          1 4 7          7 4 1
4 5 6    →     2 5 8    →     8 5 2    ← 90° clockwise!
7 8 9          3 6 9          9 6 3
# Rotate 90° clockwise (in-place)
def rotate(matrix):
    n = len(matrix)
    # Step 1: Transpose (swap rows and columns)
    for i in range(n):
        for j in range(i + 1, n):
            matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
    # Step 2: Reverse each row
    for row in matrix:
        row.reverse()

# One-liner with zip (creates new matrix)
rotated = [list(row) for row in zip(*matrix[::-1])]
Key: Transpose: swap matrix[i][j] with matrix[j][i]. Only iterate upper triangle (j from i+1).
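The counter-clockwise direction mentioned above (reverse each row, then transpose) follows the same pattern with the steps swapped; a sketch:

```python
def rotate_ccw(matrix):
    # Rotate 90° counter-clockwise in place:
    # reverse each row FIRST, then transpose
    n = len(matrix)
    for row in matrix:
        row.reverse()
    for i in range(n):
        for j in range(i + 1, n):   # upper triangle only
            matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
    return matrix
```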
Spiral Matrix Traversal
Medium · Boundary Shrinking
Traverse in spiral order: right → down → left → up, shrinking boundaries after each pass.
 1  2  3  4      Output: 1, 2, 3, 4, 8, 12, 11, 10, 9, 5, 6, 7
 5  6  7  8
 9 10 11 12      Boundaries: top=0, bottom=2, left=0, right=3

Step 1: → top row    [1, 2, 3, 4]   then top++
Step 2: ↓ right col  [8, 12]        then right--
Step 3: ← bottom row [11, 10, 9]    then bottom--
Step 4: ↑ left col   [5]            then left++
Repeat on the inner ring: [6, 7]
def spiral_order(matrix):
    result = []
    top, bottom = 0, len(matrix) - 1
    left, right = 0, len(matrix[0]) - 1

    while top <= bottom and left <= right:
        # → traverse top row
        for c in range(left, right + 1):
            result.append(matrix[top][c])
        top += 1

        # ↓ traverse right column
        for r in range(top, bottom + 1):
            result.append(matrix[r][right])
        right -= 1

        # ← traverse bottom row
        if top <= bottom:
            for c in range(right, left - 1, -1):
                result.append(matrix[bottom][c])
            bottom -= 1

        # ↑ traverse left column
        if left <= right:
            for r in range(bottom, top - 1, -1):
                result.append(matrix[r][left])
            left += 1

    return result
Key: Four boundaries: top, bottom, left, right. Shrink after each direction. Check top <= bottom before left/up passes to avoid duplicates.
Search in Sorted Matrix
Medium · Staircase Search
Start from top-right corner. Go left if too large, down if too small.
Matrix (sorted rows & cols):      Search for 14:
 1  4  7 11 15       Start at 15 (top-right)
 2  5  8 12 19       15 > 14 → go left → 11
 3  6  9 16 22       11 < 14 → go down → 12
10 13 14 17 24       12 < 14 → go down → 16
                     16 > 14 → go left →  9
                      9 < 14 → go down → 14 → Found!
# Staircase search - O(m + n)
def search_matrix(matrix, target):
    if not matrix: return False
    rows, cols = len(matrix), len(matrix[0])
    r, c = 0, cols - 1     # start top-right

    while r < rows and c >= 0:
        if matrix[r][c] == target:
            return True
        elif matrix[r][c] > target:
            c -= 1              # too big, go left
        else:
            r += 1              # too small, go down
    return False

# Binary search approach (each row sorted, first element > prev row's last)
def search_matrix_bs(matrix, target):
    rows, cols = len(matrix), len(matrix[0])
    left, right = 0, rows * cols - 1
    while left <= right:
        mid = (left + right) // 2
        val = matrix[mid // cols][mid % cols]  # convert 1D→2D
        if val == target: return True
        elif val < target: left = mid + 1
        else: right = mid - 1
    return False
Staircase: O(m + n) · Binary search: O(log(m·n))
Key: Staircase works when rows AND cols are sorted. Binary search works when matrix is fully sorted (row-major). Convert index: row = mid // cols, col = mid % cols.

Big-O Complexity Cheat Sheet

Know these cold — interviewers always ask about complexity

| Data Structure | Access | Search | Insert | Delete | Space |
|---|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) | O(n) |
| Hash Table | N/A | O(1)* | O(1)* | O(1)* | O(n) |
| Linked List | O(n) | O(n) | O(1)† | O(1)† | O(n) |
| Stack / Queue | O(n) | O(n) | O(1) | O(1) | O(n) |
| BST (balanced) | O(log n) | O(log n) | O(log n) | O(log n) | O(n) |
| Heap | O(n) | O(n) | O(log n) | O(log n) | O(n) |

* average case, O(n) worst with collisions   † O(1) if you have the node reference

| Complexity | Name | Example | n = 1000 | n = 1M |
|---|---|---|---|---|
| O(1) | Constant | Hash lookup, array index | 1 | 1 |
| O(log n) | Logarithmic | Binary search | 10 | 20 |
| O(n) | Linear | Single loop, hash map build | 1K | 1M |
| O(n log n) | Linearithmic | Merge sort, quick sort | 10K | 20M |
| O(n²) | Quadratic | Nested loops, bubble sort | 1M | 1T |
| O(2ⁿ) | Exponential | Recursion w/o memoization | Huge | Heat death |
Rule of thumb: If n ≤ 10² → O(n²) OK. If n ≤ 10⁴ → need O(n log n). If n ≤ 10⁶ → need O(n). If n ≤ 10⁹ → need O(log n) or O(1).