Big O Notation: What It Is and Why It Matters
Hack United is an organization that empowers hackers and builders! Join us for a hackathon in your free time!
When you write code, there are many ways to solve a problem and bring your ideas to life. But there's one thing to note: We are programmers! We are LAZY! That's why, whenever we write code, we should double-check that we've chosen the simplest and most efficient approach.
This is where Big O notation comes in: it gives you a language for comparing approaches. Big O is a way to measure how an algorithm scales. Instead of thinking in seconds or milliseconds, you think about how the work grows as the data grows.
If you have an array with ten elements, almost any algorithm will look fine. But scale it to ten million, and the difference between a loop and a binary search becomes massive. Big O notation is what lets you compare these tradeoffs.
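To make that loop-vs-binary-search comparison concrete, here is a minimal Python sketch of both. The function names are my own; the point is that the linear scan may touch all n elements, while binary search halves a sorted range each step, touching only about log n of them.

```python
def linear_search(arr, target):
    # O(n): in the worst case, check every element.
    for i, x in enumerate(arr):
        if x == target:
            return i
    return -1

def binary_search(arr, target):
    # O(log n): arr must be sorted; halve the range each step.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

On ten million sorted elements, the linear scan may do ten million comparisons, while binary search needs at most around 24.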
Bubble Sort
Bubble sort is one of the simpler sorting algorithms. You compare each pair of elements and swap them if they are out of order. You keep repeating this until the list is sorted.
[3, 1, 4, 2] -> [1, 3, 4, 2] -> [1, 3, 2, 4] -> [1, 2, 3, 4]
Its complexity is O(n²). For each element, you may need to compare it with all others. If n, the number of elements, equals 10,000, you end up with 100 million comparisons. That is why bubble sort is only practical for arrays of a smaller size.
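The swap-until-sorted idea above can be sketched in a few lines of Python (the function name and early-exit flag are my own additions; stopping after a pass with no swaps is a common optimization):

```python
def bubble_sort(arr):
    # Repeatedly sweep the list, swapping adjacent out-of-order
    # pairs. Each pass bubbles the largest remaining value to the end.
    arr = list(arr)  # work on a copy
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break  # no swaps in a full pass means we're done
    return arr

print(bubble_sort([3, 1, 4, 2]))  # [1, 2, 3, 4]
```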
Merge Sort
Merge sort follows a divide-and-conquer approach (we are lazy programmers, remember)—you split the array into halves until each piece has one element. Then you merge the pieces back together in sorted order.
[1, 3, 2, 4] -> [1, 3] [2, 4] -> [1] [3] [2] [4] -> [1, 2, 3, 4]
Its complexity is O(n log n). Splitting the array creates log n levels, and merging each level takes n steps. Multiply them together and you get n log n. For large inputs, this is far more efficient than O(n²): sorting 10,000 elements takes about 133,000 operations compared to bubble sort's 100 million. As a lazy programmer, I would rather have my machine do 133,000 operations than 100 million. Like me, my machine is lazy too.
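The split-then-merge idea can be sketched like this (a recursive version with my own function name; production implementations are usually iterative and merge in place):

```python
def merge_sort(arr):
    # Split until each piece has at most one element.
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge: repeatedly take the smaller front element.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])  # one of these is already empty
    merged.extend(right[j:])
    return merged
```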
Quick Sort
Quick sort also uses the divide and conquer approach (once again, laziness for the win)—you pick a pivot element, partition the array into values smaller and larger than the pivot, then repeat on each side.
[1, 3, 2, 4], pivot = 2 -> [1] [2] [3, 4] -> [1, 2, 3, 4]
Its average complexity is O(n log n). Like merge sort, the partitioning creates log n levels, and each level processes n items. In the worst case, if you pick poor pivots, quick sort becomes O(n²). Randomized pivot selection helps keep performance near O(n log n) in practice.
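Here is a short sketch of that pivot-and-partition idea, using the randomized pivot mentioned above (names are mine; real implementations partition in place rather than building new lists):

```python
import random

def quick_sort(arr):
    if len(arr) <= 1:
        return list(arr)
    # Random pivot keeps the expected depth near log n.
    pivot = random.choice(arr)
    smaller = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    larger = [x for x in arr if x > pivot]
    return quick_sort(smaller) + equal + quick_sort(larger)
```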
Insertion Sort
Insertion sort builds a sorted list one item at a time. For each new element, you compare it with the sorted part of the array and insert it in the right place.
[1, 2, 4], insert 3 -> [1, 2, 4, 3] -> [1, 2, 3, 4]
Its complexity is O(n²) in the average case. For each element, you may need to shift many others to make space. For small arrays, insertion sort can still perform well, because its overhead is low. That is why some libraries use insertion sort for small chunks inside more advanced algorithms.
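The shift-and-insert step can be sketched as follows (function name is my own; this is the classic in-place form):

```python
def insertion_sort(arr):
    arr = list(arr)  # work on a copy
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger sorted elements right to open a slot for key.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
```

Because the inner loop exits as soon as the right slot is found, insertion sort runs in nearly O(n) on already-sorted or small inputs, which is exactly why hybrid sorts hand small chunks to it.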
Why This Matters
You rarely write your own sorting algorithm from scratch, but you do choose how you process data. Knowing the difference between O(n²) and O(n log n) helps you predict performance. When data is small, almost anything works. When data is large, only efficient algorithms scale.
The point of Big O is not to memorize numbers, but to recognize patterns. Nested loops often mean O(n²). Divide and conquer often means O(n log n). With that awareness, you make better choices before your program slows down.

