What is the O?
As developers working with real world systems, it is not always enough to solve the problem. We need to solve it as efficiently as possible. That means reducing the cost of our calculations. In order to think about this, we need some vocabulary, so let’s define our terms.
Cost is measured as the number of calculations the computer must perform in a worst case scenario. This is another way of expressing Time Complexity in computer science. For example, if we have a function that looks at every letter of a string, the cost of the function is equal to the length of the string in the worst case, i.e., when the function must go through the entire string before concluding.
That cost in the worst case scenario is called the “Big O”. In the example above, the Big O is represented by O(n) where ‘n’ is the length of the string.
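To make the string example concrete, here’s a minimal sketch of such a function (the name containsLetter and its exact behavior are illustrative assumptions, not from any particular library):

```javascript
// Checks whether a string contains a given letter.
// Worst case: the letter is absent (or is the last character),
// so we examine all n characters of the string: O(n).
function containsLetter(str, letter) {
  for (const ch of str) {
    if (ch === letter) {
      return true;
    }
  }
  return false;
}
```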
How do we calculate it?
As our inputs grow, how many more calculations does the computer have to go through to complete the function in the worst case scenario? That is what we’re looking to calculate. When calculating this, we’re only interested in the dominant growth rate: constant factors and lower-order terms are dropped, because they don’t change how the cost scales as the input gets large.
Mostly, we’re looking for how many times our function will need to loop through our inputs, whether they are arrays, linked lists, hashes or anything else.
Let’s take a couple of simple functions and talk through the Big O.
What are some common examples?
Big O(n)
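A simpleSearch function fitting this description might be sketched like so (returning the target’s index, or -1 when it’s missing, is an assumption):

```javascript
// Linear search: scan the array from left to right until we find the target.
// Worst case: the target is the last element (or absent), so we touch all
// n elements. Big O: O(n).
function simpleSearch(array, target) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) {
      return i; // found it: return the index
    }
  }
  return -1; // target not in the array
}
```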
In the example above, in the worst case scenario our target is the last element in the array. If that were the case, our simpleSearch function would have to go through the entire array one time in order to complete its task. Because of that, the number of potential computations grows proportionally with the length of the array. Hence the Big O is ‘n’.
Big O(n²)
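One way to write the nested-loop check described here (the name twoSum and its return value, a pair of indices or null, are assumptions):

```javascript
// For each element, check every element to see if the two add up to the
// target. The loop nested inside the loop means roughly n * n comparisons
// in the worst case. Big O: O(n²).
function twoSum(array, target) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      if (i !== j && array[i] + array[j] === target) {
        return [i, j]; // indices of a pair that sums to the target
      }
    }
  }
  return null; // no pair adds up to the target
}
```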
In the example above, our function needs to go through the entire array every time it is called. For each number in the array, it then needs to check each number in the array to see if they add up to the target. That nested loop within our loop means that the number of calculations necessary to complete the function grows at a rate of ‘n’ * ‘n’ with the growth of the length of the array.
Let’s end with another popular one, a binary search. Keep in mind that one requirement for a binary search is that the array is already sorted.
Big O(log(n))
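A binary search along these lines might look like this (again, returning the index or -1 is an assumption; the input array must already be sorted):

```javascript
// Binary search on a sorted array: compare against the middle element, then
// discard the half that cannot contain the target. Each check halves the
// remaining range. Big O: O(log(n)).
function binarySearch(array, target) {
  let low = 0;
  let high = array.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (array[mid] === target) {
      return mid; // found it
    } else if (array[mid] < target) {
      low = mid + 1; // target can only be in the upper half
    } else {
      high = mid - 1; // target can only be in the lower half
    }
  }
  return -1; // target not in the array
}
```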
In this example, we’re cutting the search range in half every time we check a value. So if we have an array with 10 elements, we’re only looking at 4 elements in the worst case scenario. Which means that we’re performing fewer calculations than the total number of elements in the array. That’s pretty cool!
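We can sanity-check that halving intuition with a tiny helper (halvingSteps is a hypothetical name, here purely for illustration):

```javascript
// Count how many times we can halve n elements before only one remains.
// This approximates the worst-case number of checks in a binary search.
function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.ceil(n / 2);
    steps++;
  }
  return steps;
}
```

For 10 elements this gives 4 steps, matching the worst case above, and for 1,000,000 elements it’s only about 20.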
Conclusion
There are lots of ways to solve a problem with code and they are not all equal. After solving the problem you’re working on, spend the time working out the Big O of the method and see if you can come up with something better.
Good luck and happy coding!