The Big O

The problem with measuring functions with timers is that different machines record different times; even the same machine records different times from run to run. Furthermore, as algorithms get faster, speed measurements often aren't precise enough to tell them apart.


So rather than counting time, it's better to count the number of simple operations that the computer has to perform. This is also why loops can get expensive: each time a for loop iterates, it adds more simple operations to the count. Welcome to Big O – the formalized art of fuzzy counting!


An algorithm is O(f(n)) if the number of simple operations the computer has to do is eventually less than a constant times f(n) as n increases. f(n) could be linear (f(n) = n), quadratic (f(n) = n²), constant (f(n) = 1) or something else. When discussing Big O, we're always talking about the worst-case scenario (aka the upper bound for runtime).
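
To make that concrete, here's a minimal sketch (the function names are mine, not from anywhere official) of two ways to add up the numbers from 1 to n. The loop version does roughly n simple operations, so it's O(n); the formula version always does three operations no matter how big n gets, so it's O(1).

```javascript
// O(n): one addition and one assignment per iteration, n times.
function addUpToLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// O(1): one multiplication, one addition, one division – always 3 operations.
function addUpToFormula(n) {
  return (n * (n + 1)) / 2;
}

console.log(addUpToLoop(100)); // 5050
console.log(addUpToFormula(100)); // 5050, but in constant time
```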


As a rule of thumb, arithmetic operations, variable assignments and accessing elements in an array or object are constant O(1). A loop is O(n); more precisely, its time complexity is the length of the loop times the complexity of whatever happens inside it. Nested loops are O(n²).
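
Here's a quick sketch of those rules of thumb (again, made-up function names):

```javascript
// One loop over n elements; the O(1) array access inside repeats n times: O(n).
function logAll(arr) {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
  }
}

// A loop inside a loop: the O(n) inner loop runs n times, so n * n = O(n²).
function logAllPairs(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i], arr[j]);
    }
  }
}
```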


In addition to time complexity (where we analyze the runtime of an algorithm as the size of the inputs increases), it's also important to formalize the art of fuzzy counting for memory usage. Generally, when we talk about space complexity we mean the space required by the algorithm itself, not the space taken up by the inputs (this is called auxiliary space complexity).


As a rule of thumb, most primitive data types including booleans, numbers, undefined and null take O(1) constant space. Interestingly, the time complexity of a loop's iterations has no bearing on its space complexity, so many for loops use only O(1) space. However, if a new array is being created from an old array, that takes O(n) space where n is the length of the input array. Strings likewise require O(n) space where n is the string's length. Reference types are generally O(n) space, where n is the length for arrays or the number of keys for objects.
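
Here's a sketch of those space rules in action (function names are mine):

```javascript
// O(1) auxiliary space: only ever allocates two numbers (total and i),
// no matter how long the input array is.
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

// O(n) auxiliary space: builds a brand-new array with one entry per
// input element, so the extra memory grows with the input.
function double(arr) {
  const doubled = [];
  for (let i = 0; i < arr.length; i++) {
    doubled.push(2 * arr[i]);
  }
  return doubled;
}
```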


Last but not least, Big O notation often involves logarithms. As a review, log₂(8) = 3 because 2³ = 8. In Big O, the log₂ is generally shortened to just log. Here is the long, boring and confusing mathematical definition of a logarithm (lol): the logarithm of a number roughly measures the number of times you can divide that number by 2 before you get a value that's less than or equal to one. I wrote out this mathematical definition because it's humorous, nonsensical and generally why most of us hate math. Say it out loud 10 times or write it out 10 times and it still doesn't make much sense! In non-nerd lingo: take 8, it can be divided by 2 (4), divided by 2 (2), and divided by 2 (1). We divided by 2 three times, so the logarithm is 3. The main point is that an O(log n) time or space complexity is super fast. Why does this even matter? Well, most searching algorithms and efficient sorting algorithms have logarithmic time complexity. Oh yeah, and our favorite recursion sometimes has logarithmic space complexity.
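
And here's the "keep dividing by 2" idea as a little sketch (countHalvings is a name I made up):

```javascript
// Counts how many times n can be halved before reaching 1.
// For powers of 2, the result is exactly log2(n).
function countHalvings(n) {
  let count = 0;
  while (n > 1) {
    n = n / 2; // e.g. 8 -> 4 -> 2 -> 1
    count++;
  }
  return count;
}

console.log(countHalvings(8)); // 3, because log2(8) = 3
```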


The classic Big O chart compares how these time and space complexities perform as n grows. O(1) and O(log n) are clearly the best. O(n) sits in the middle. O(n log n) and O(n²) are clearly the worst.
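
To put some of my own numbers behind that: for n = 1,000, log n is about 10, n log n is about 10,000, and n² is 1,000,000. The gap between the curves isn't subtle; it explodes as n grows.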