Foundations · Lesson 3

Big O Notation — The Basics

The universal language for measuring how fast or slow an operation is as data grows. O(1), O(n), O(n²) and when each matters.

What Is It?

Big O notation answers one question: as the input gets bigger, how much slower does the algorithm get?

We don't measure exact time because exact time depends on your CPU, your programming language, what else is running, and a dozen other factors. Instead, we measure growth rate — how the number of steps scales as the input size grows.

If an algorithm takes the same number of steps no matter how big the input is, it's O(1) — constant time. If it takes roughly one step per item, it's O(n) — linear time. If it takes roughly n × n steps, it's O(n²) — quadratic time.

This matters a lot. An O(n) algorithm on 1 million items might take 1 second. An O(n²) algorithm on the same input could take over 11 days.
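You can sanity-check that "over 11 days" figure with quick arithmetic. The sketch below assumes the machine performs 1,000,000 simple operations per second, which is the rate implied by "O(n) on 1 million items takes 1 second":

```python
# Assumption: the machine does 1,000,000 simple operations per second,
# the rate implied by "O(n) on 1 million items takes about 1 second".
ops_per_second = 1_000_000
n = 1_000_000

linear_seconds = n / ops_per_second            # 10^6 ops -> 1.0 second
quadratic_ops = n ** 2                         # 10^12 ops
quadratic_seconds = quadratic_ops / ops_per_second
quadratic_days = quadratic_seconds / 86_400    # 86,400 seconds per day

print(linear_seconds)   # 1.0
print(quadratic_days)   # about 11.57
```

So the same input that a linear algorithm finishes in one second keeps a quadratic one busy for roughly eleven and a half days.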

Analogy

The Roll Call Analogy

Imagine a teacher with a classroom of n students:

  • O(1) — Constant: "Is the light switch on?" The teacher glances at the switch. Doesn't matter if there are 10 students or 10,000 — checking the switch takes one look.
  • O(n) — Linear: "Call every student's name for attendance." With 30 students, 30 name calls. With 300 students, 300 name calls. Double the students, double the work.
  • O(n²) — Quadratic: "Every student must shake hands with every other student." With 5 students, that's about 25 handshakes. With 50 students, about 2,500 handshakes. Double the students, quadruple the work.
  • O(log n) — Logarithmic: "Find a student in an alphabetically sorted roster by opening to the middle, picking which half, then repeating." With 1,000 students, about 10 checks. With 1,000,000 students, about 20 checks. Double the students, add just one more step.
  • O(n log n) — Linearithmic: "Sort the class by height using a smart sorting method." Better than everyone comparing with everyone (n²), but more than a single pass (n).

How It Works

The Five Complexity Classes You Must Know

[Chart: number of operations vs. input size n for O(1), O(log n), O(n), O(n log n), and O(n²)]

O(1) — Constant Time

O(1) — constant time — means the number of steps never changes, no matter how many items you have.

pseudocode
function getFirst(array):
    return array[0]  // always 1 step

Whether the array has 10 elements or 10 million, reading index 0 is one step.

n = 10        → 1 op
n = 100       → 1 op
n = 1,000     → 1 op
n = 1,000,000 → 1 op
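The same idea as runnable Python, assuming a plain list (the function name `get_first` is illustrative, not from the lesson):

```python
def get_first(items):
    """O(1): a single index lookup, regardless of len(items)."""
    return items[0]

print(get_first([7, 8, 9]))               # 7
print(get_first(list(range(1_000_000))))  # 0 -- same single step
```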

O(n) — Linear Time

O(n) — linear time — means the work grows in proportion to the input. If the input doubles, the work doubles.

pseudocode
function findMax(array):
    max = array[0]
    for each item in array:
        if item > max:
            max = item
    return max

One pass through all n elements = O(n).

n = 10        → 10 ops
n = 100       → 100 ops
n = 1,000     → 1,000 ops
n = 1,000,000 → 1,000,000 ops
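A runnable Python version of the findMax pseudocode (names are my own; `max` is avoided because it shadows a Python built-in):

```python
def find_max(items):
    """O(n): one pass through the list, one comparison per element."""
    largest = items[0]
    for item in items:
        if item > largest:
            largest = item
    return largest

print(find_max([3, 1, 4, 1, 5, 9, 2]))  # 9
```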

O(n²) — Quadratic Time

O(n²) — quadratic time — usually comes from a loop inside a loop. If the input doubles, the work quadruples.

pseudocode
function hasDuplicate(array):
    for each item A in array:
        for each item B in array (after A):
            if A == B:
                return true
    return false

For every element, you check every other element. That's n × n steps.

n = 10        → 100 ops
n = 100       → 10,000 ops
n = 1,000     → 1,000,000 ops
n = 1,000,000 → 1 trillion ops
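Here is a Python sketch of the hasDuplicate pseudocode, plus a set-based alternative to show that the same problem can often be solved in linear time (both function names are illustrative):

```python
def has_duplicate(items):
    """O(n^2): compare every element with every later element."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_fast(items):
    """O(n) on average: a set remembers what we've already seen."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicate([1, 2, 3, 2]))       # True
print(has_duplicate_fast([1, 2, 3]))     # False
```

The second version trades O(n) extra memory for a dramatically better running time, a common pattern for escaping O(n²).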

O(log n) — Logarithmic Time

O(log n) — logarithmic time — means the input is cut in half at each step. Even with a million items, you only need about 20 steps.

pseudocode
function binarySearch(sortedArray, target):
    low = 0
    high = length(sortedArray) - 1
    while low <= high:
        mid = floor((low + high) / 2)   // integer division: indices must be whole numbers
        if sortedArray[mid] == target:
            return mid
        else if sortedArray[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

Each step cuts the remaining search space in half. You can only halve a million items about 20 times before you're down to 1.

n = 10        → ~3 ops
n = 100       → ~7 ops
n = 1,000     → ~10 ops
n = 1,000,000 → ~20 ops
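The binarySearch pseudocode translates almost line for line into Python (`//` is Python's floor division, which keeps the midpoint a whole-number index):

```python
def binary_search(sorted_items, target):
    """O(log n): halve the remaining search space on every iteration."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # floor division -> valid index
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1                # target is in the upper half
        else:
            high = mid - 1               # target is in the lower half
    return -1                            # not found

evens = list(range(0, 100, 2))           # [0, 2, 4, ..., 98] -- already sorted
print(binary_search(evens, 42))          # 21
print(binary_search(evens, 43))          # -1
```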

O(n log n) — Linearithmic Time

O(n log n) is what you get from efficient sorting algorithms like merge sort. It's much better than O(n²) and only slightly worse than O(n).

n = 10        → ~33 ops
n = 100       → ~664 ops
n = 1,000     → ~9,966 ops
n = 1,000,000 → ~20 million ops
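The lesson doesn't show code for this class, so here is a minimal Python sketch of merge sort: log n levels of halving, with an O(n) merge at each level, giving O(n log n) overall.

```python
def merge_sort(items):
    """O(n log n): split in half (log n levels), merge each level in O(n)."""
    if len(items) <= 1:
        return items                     # base case: already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # sort each half recursively
    right = merge_sort(items[mid:])

    # Merge the two sorted halves in one linear pass.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])              # append whatever remains
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5]))       # [1, 2, 5, 5, 9]
```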

Rules for Calculating Big O

Rule 1: Drop constants. O(3n) is just O(n). We care about the shape of the growth, not the exact multiplier.

pseudocode
// Two separate loops — still O(n), not O(2n)
for i = 0 to n:
    doSomething(i)
for j = 0 to n:
    doSomethingElse(j)

Rule 2: Drop smaller terms. O(n² + n) is just O(n²). As n grows large, n² completely dominates n.

Rule 3: One loop = O(n).

pseudocode
for i = 0 to n:  // O(n)
    doWork()

Rule 4: Nested loops multiply.

pseudocode
for i = 0 to n:      // outer: n times
    for j = 0 to n:  // inner: n times each
        doWork()     // total: n × n = O(n²)

Rule 5: Halving = O(log n).

pseudocode
while n > 1:  // O(log n)
    n = n / 2
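You can verify the halving rule empirically with a step counter (a sketch; `halving_steps` is my own name):

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 1: about log2(n)."""
    steps = 0
    while n > 1:
        n //= 2        # integer halving, as in the pseudocode
        steps += 1
    return steps

print(halving_steps(1024))       # 10, since 2^10 = 1024
print(halving_steps(1_000_000))  # 19, close to log2(1,000,000) ~ 19.9
```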

Rule 6: Sequential sections — take the biggest.

pseudocode
for i = 0 to n:  // O(n)
    doA()
for i = 0 to n:  // O(n²) for this section
    for j = 0 to n:
        doB()
// Overall: O(n) + O(n²) = O(n²)

Comparison Table

Total operations at n = 1,000,000:

O(1)       → 1
O(log n)   → ~20
O(n)       → 1 million
O(n log n) → ~20 million
O(n²)      → 1 trillion

Examples

Example 1: Counting the operations

pseudocode
function sumAndProduct(array):
    sum = 0
    for each item in array:  // n steps
        sum = sum + item
    product = 1
    for each item in array:  // n steps
        product = product * item
    return (sum, product)
// Two loops, each n steps = 2n → drop constant → O(n)

Example 2: A loop that looks quadratic but isn't

pseudocode
function printPairs(array):
    for i = 0 to length(array) - 1:
        for j = i + 1 to length(array) - 1:
            print(array[i], array[j])
// The inner loop gets shorter each time, but it still runs
// roughly n²/2 times total — drop the constant → O(n²)
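You can confirm the n²/2 count by replacing the print with a counter (a sketch; the function name is my own):

```python
def count_pair_prints(n):
    """Count how many times printPairs' inner body runs for n items."""
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1
    return count          # exactly n * (n - 1) / 2

print(count_pair_prints(10))   # 45  -- 10 * 9 / 2
print(count_pair_prints(100))  # 4950 -- about 100^2 / 2
```

Halving the constant doesn't change the shape: double n and the count still roughly quadruples, so it's O(n²).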

Common Mistakes

  1. Thinking O(n) means exactly n operations. O(n) means the work grows linearly. An algorithm doing 5n + 100 operations is still O(n).
  2. Forgetting that Big O describes the worst case by default. When we say searching an unsorted array is O(n), we mean in the worst case you check every element. The best case might be O(1) if the item happens to be first.
  3. Multiplying when you should add. Two sequential loops are O(n) + O(n) = O(n). Two nested loops are O(n) × O(n) = O(n²). Sequential = add. Nested = multiply.
  4. Thinking O(log n) means anything with math. O(log n) specifically means the algorithm cuts the problem down by a constant fraction at each step. Binary search does this. Random math doesn't make something O(log n).
  5. Only thinking about time. Big O applies to memory usage too. Merge sort, for example, runs in O(n log n) time but uses O(n) extra space for merging.

Best Practices

  • Always identify what n is before analyzing: array length? Number of nodes? String length? Be explicit.
  • Count your loops: one loop over n = O(n); loops inside loops multiply; loops one after another add (then keep the biggest).
  • When you see something get cut in half each step, that's O(log n).
  • Use the comparison table to sanity-check your design: if your algorithm is O(n²) and n could be 100,000, it's probably too slow.