Time complexity is one of the most interesting concepts you can learn from computer science, and you don’t need a degree to understand it!
It’s interesting because it helps you see why a particular algorithm or program may be slow & what you can do to make it faster.
You can apply this to your own code.
This goes beyond all the fancy algorithms you find in computer science books, as I will demonstrate later in this article.
But first we need to talk about what is slow & what is fast.
Slow vs Fast
Is sorting 1 million numbers in 150ms (milliseconds) slow or fast?
I think that’s pretty fast, but this is not the question that time complexity is trying to answer.
We want to know how an algorithm is going to perform as the “input size” grows.
When we talk about “input size” we are talking about the arguments for the function or method. If a method takes one string as an argument then that’s the input, and the string’s length is the input size.
Big O Notation
With Big O notation we can classify algorithms according to their performance.
It looks like this: O(1)
In this case O(1) represents a “constant time” algorithm.
That means that the algorithm will always take the same time to finish its work, regardless of how much work it has to do.
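For example, looking up an element by its index is constant time: the work doesn't grow with the size of the collection. This is my own illustration, not code from the original article:

```ruby
# Constant time O(1): indexing into an array takes the same amount of
# work whether the array has 3 elements or 1 million elements.
def first_element(array)
  array[0]
end

first_element([1, 2, 3])           # => 1
first_element((1..1_000_000).to_a) # => 1, same amount of work
```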
Another is “linear time”: O(n)

Here “n” represents the size of the input (string size, array size, etc). This means that the algorithm will finish its work in a 1:1 relation to input size, so doubling the input size will also double the time it takes to complete the work.
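A simple illustration of linear time (again, my own example): summing an array requires visiting every element once, so doubling the array doubles the work.

```ruby
# Linear time O(n): we have to visit every element exactly once,
# so the work grows in a 1:1 relation to the array size.
def sum(numbers)
  total = 0
  numbers.each { |n| total += n }
  total
end

sum([1, 2, 3, 4]) # => 10
```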
Here’s a table:

O(1): Always takes the same time.
O(log n): Amount of work is divided by 2 after every loop iteration (binary search).
O(n): Time to complete the work grows in a 1 to 1 relation to input size.
O(n log n): A nested loop, where the inner loop runs in log n time. Examples include quicksort, mergesort & heapsort.
O(n^2): Time to complete the work grows in an input size ^ 2 relation. You can recognize this whenever you have a loop that goes over all the elements & every iteration of the loop also goes over every element: a nested loop.
O(n^3): Same as n^2, but time grows in an n^3 relation to input size. Look for a triple-nested loop to recognize this one.
O(2^n): Time to complete the work grows in a 2 ^ input size relation. This is a lot slower than n^2, so don’t confuse them! One example is the recursive fibonacci algorithm.
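The recursive fibonacci algorithm mentioned above can be sketched like this (my own illustration). Every call branches into two more recursive calls, which is where the roughly 2 ^ n growth comes from:

```ruby
# Exponential time O(2^n): each call spawns two more recursive calls,
# so the call tree roughly doubles in size every time n grows by 1.
def fib(n)
  return n if n < 2
  fib(n - 1) + fib(n - 2)
end

fib(10) # => 55
```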
You can start developing your intuition for this if you think about the input as an array of “n” elements.
Imagine that you want to find all the elements that are even numbers, the only way to do this is to read every element once to see if it matches the condition.
You can’t skip numbers because you may miss some of the numbers you are looking for, so that makes this a linear time algorithm O(n).
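In Ruby that idea might look like this (a minimal sketch of my own):

```ruby
# We can't know which numbers are even without checking every single
# element, so this is a linear time O(n) algorithm.
def even_numbers(numbers)
  numbers.select { |n| n.even? }
end

even_numbers([1, 2, 3, 4, 5, 6]) # => [2, 4, 6]
```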
3 Different Solutions To The Same Problem
Analyzing individual examples is interesting, but what really helps you understand this idea is seeing how we can solve the same problem using different solutions.
We are going to explore 3 code examples for a Scrabble solution checker.
Scrabble is a game that, given a list of characters (like ollhe), asks you to form words (like hello) using those characters.
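The article's original code isn't reproduced here, but a count-based checker might look something like this sketch. For every character in the word we call `String#count`, which scans the whole string of available characters, so this is an O(n^2) algorithm:

```ruby
# Count-based Scrabble checker (a sketch, not the article's exact code).
# For each character in the word, String#count scans the full string of
# available characters, giving us a loop inside a loop: O(n^2).
def can_form_word?(word, available)
  word.chars.all? { |char| word.count(char) <= available.count(char) }
end

can_form_word?("hello", "ollhe") # => true
can_form_word?("hello", "olhe")  # => false, only one "l" available
```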
This one is a little more interesting. This will be a lot faster than the count version in the best case, where the characters we are looking for are near the start of the string.

But when the characters are all at the end, in reverse order of how they are found in word, the performance will be similar to that of the count version, so we can say that this is still an O(n^2) algorithm.
We don’t have to worry about the case where none of the characters in word are available, because the all? method will stop as soon as sub! returns false. But you would only know this if you have studied Ruby extensively, way beyond what regular Ruby courses offer you.
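The exact code for this version isn't shown here either, but the `sub!`-based approach described above might look like this sketch. Note how `sub!` returns `nil` (a falsy value) when it can't find the character, which is what lets `all?` stop early:

```ruby
# Destructive Scrabble checker (a sketch, not the article's exact code).
# We remove each matched character from a copy of the available letters.
# String#sub! returns nil when the character isn't found, so all?
# short-circuits as soon as one character is missing.
def can_form_word?(word, available)
  letters = available.dup
  word.chars.all? { |char| letters.sub!(char, "") }
end

can_form_word?("hello", "ollhe") # => true
can_form_word?("hello", "olhe")  # => false
```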
Let’s Try With Another Approach
What if we remove any kind of looping inside the block? We can do that with a hash.
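One way to do this (a sketch under my own assumptions, since the original code isn't shown here) is to count the available characters once into a hash, then check the word against those counts. There is no loop inside the block that scans a string, so the whole thing is O(n):

```ruby
# Hash-based Scrabble checker (a sketch, not the article's exact code).
# We build the character counts once, then each block iteration is a
# constant-time hash lookup, making the algorithm O(n) overall.
def can_form_word?(word, available)
  counts = Hash.new(0)
  available.each_char { |char| counts[char] += 1 }

  word.chars.all? do |char|
    counts[char] -= 1
    counts[char] >= 0
  end
end

can_form_word?("hello", "ollhe") # => true
can_form_word?("hello", "olhe")  # => false
```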
The Y axis (vertical) represents how many seconds it took to complete the work, while the X axis (horizontal) represents the input size.
This is a logarithmic chart, which compresses the values in a certain range, but in reality the count line is a lot steeper.
The reason I made this a logarithmic chart is so you can appreciate an interesting effect: with a very small input size, count is actually faster!
You learned about algorithmic time complexity, big O notation & time complexity analysis. You also saw a few examples where we compare different solutions to the same problem & analyzed their performance.
Hope you learned something new & interesting!
If you did, please share this post on social media & subscribe to the newsletter if you haven’t yet, so I can send you more content like this.