The Definitive Guide To Time Complexity For Ruby Developers

Time complexity is one of the most interesting concepts you can learn from computer science, and you don’t need a degree to understand it!

It’s interesting because it helps you see why a particular algorithm or program may be slow & what you can do to make it faster.

You can apply this to your own code.

This goes beyond the fancy algorithms you find in computer science books, as I will demonstrate later in this article.

But first we need to talk about what is slow & what is fast.

Slow vs Fast

Is sorting 1 million numbers in 150ms (milliseconds) slow or fast?

I think that’s pretty fast, but this is not the question that time complexity is trying to answer.

We want to know how an algorithm is going to perform as the “input size” grows.

When we talk about “input size” we are talking about the arguments for the function or method. If a method takes one string as an argument then that’s the input, and the string’s length is the input size.

Big O Notation

With Big O notation we can classify algorithms according to their performance.

It looks like this:

O(1)

In this case O(1) represents a “constant time” algorithm.

That means that the algorithm will always take the same amount of time to finish its work, regardless of the input size.
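For example, reading an array element by its index takes the same time whether the array has 10 elements or 10 million. Here’s a small illustration of my own:

numbers = [2, 4, 6, 8]

numbers[3]   # => 8, O(1): indexing jumps straight to the element
numbers.size # => 4, O(1): Ruby stores the length, so nothing is counted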

Another is “linear time”:

O(n)

Where “n” represents the size of the input (string size, array size, etc). This means that the algorithm will finish its work in a 1:1 relation to input size, so doubling the input size will also double the time it takes to complete the work.
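A simple example of linear time is searching an unsorted array (again, a small illustration of my own):

# include? scans from the start, so in the worst case it
# has to look at every element once before it can answer
[1, 2, 3, 4, 5].include?(5) # => true, after checking all 5 elements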

Here’s a table:

Notation | Name | Description
O(1) | Constant | Always takes the same time.
O(log n) | Logarithmic | The amount of work is divided by 2 after every iteration (binary search is the classic example).
O(n) | Linear | Time to complete the work grows in a 1:1 relation to input size.
O(n log n) | Linearithmic | A loop over all n elements, where every iteration does log n worth of work. Examples include quicksort (on average), mergesort & heapsort.
O(n^2) | Quadratic | Time grows in an input-size-squared relation. You can recognize this whenever a loop over all the elements contains another loop over all the elements: a nested loop.
O(n^3) | Cubic | Same idea as n^2, but time grows in an n^3 relation to input size. Look for a triple-nested loop to recognize this one.
O(2^n) | Exponential | Time roughly doubles with every extra element of input. This is a lot slower than n^2, so don’t confuse them! One example is the naive recursive Fibonacci algorithm.
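To make the logarithmic entry concrete, here’s a minimal binary search sketch of my own: every iteration halves the remaining range, so even a sorted array with 1 million elements needs at most around 20 comparisons.

def binary_search(sorted, target)
  low, high = 0, sorted.size - 1

  while low <= high
    # Check the middle element & discard half of the remaining range
    mid = (low + high) / 2

    return mid if sorted[mid] == target

    if sorted[mid] < target
      low = mid + 1
    else
      high = mid - 1
    end
  end

  nil
end

binary_search([1, 3, 5, 7, 9, 11], 7) # => 3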

Algorithm Analysis

You can start developing your intuition for this if you think about the input as an array of “n” elements.

Imagine that you want to find all the elements that are even numbers. The only way to do this is to read every element once to see if it matches the condition.

[1,2,3,4,5,6,7,8,9,10].select(&:even?)

You can’t skip numbers because you may miss some of the numbers you are looking for, so that makes this a linear time algorithm O(n).

3 Different Solutions To The Same Problem

Analyzing individual examples is interesting, but what really helps you understand this idea is seeing how we can solve the same problem with different solutions.

We are going to explore 3 code examples for a Scrabble solution checker.

Scrabble is a game where, given a list of characters (like ollhe), you have to form words (like hello) using those characters.

Here’s the first solution:

def scramble(characters, word)
  # For every character in word, check that characters contains
  # at least as many copies of it as word does
  word.each_char.all? { |c| characters.count(c) >= word.count(c) }
end

Do you know what the time complexity of this code is? The key is in the count method, so make sure you understand what that method does.

The answer is: O(n^2).

The reason is that with each_char we go over all the characters in the string word. That alone would be a linear time algorithm, O(n), but inside the block we have the count method.

This method is effectively another loop which goes over all the characters in the string again to count them.
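To make that hidden loop visible, here’s a rough pure-Ruby sketch of what characters.count(c) has to do for a single character (the real String#count is implemented in C, but it still scans the whole string):

def count_char(string, char)
  total = 0

  # This inner scan is the second loop that makes the solution O(n^2)
  string.each_char { |other| total += 1 if other == char }

  total
end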

The Second Solution

Ok.

How can we do better? Is there a way to save us some looping?

Instead of counting the characters we could remove them. That way we have fewer characters to go through.

Here’s the second solution:

def scramble(characters, word)
  # dup so we don't mutate the string the caller passed in
  characters = characters.dup
  # sub! deletes one occurrence of c & returns nil if there is no match
  word.each_char.all? { |c| characters.sub!(c, "") }
end

This one is a little more interesting. This will be a lot faster than the count version in the best case, where the characters we are looking for are near the start of the string.

But when the characters are all at the end, in reverse order of how they appear in word, the performance will be similar to that of the count version, so we can say that this is still an O(n^2) algorithm.
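Here’s a pair of inputs that shows the difference (the exact strings are my own, for illustration):

# Best case: every character of word sits near the start of
# characters, so each sub! finds its match right away
scramble("hello" + "x" * 10_000, "hello")

# Worst case: the matches sit at the far end & in reverse order,
# so every sub! scans almost the whole string before matching
scramble("x" * 10_000 + "olleh", "hello")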

We don’t have to worry about the case where none of the characters in word are available, because the all? method will stop as soon as sub! returns nil, which counts as false in Ruby. But you would only know this if you have studied Ruby extensively, way beyond what regular Ruby courses offer you.
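You can see both behaviors for yourself (a quick illustration):

s = "hello"
s.sub!("z", "") # => nil, no match & s is unchanged
s.sub!("h", "") # => "ello", the modified string

# all? treats nil as false & stops as soon as it sees one
[1, nil, 2].all? { |x| x } # => false, 2 is never checked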

Let’s Try With Another Approach

What if we remove any kind of looping inside the block? We can do that with a hash.

def scramble(characters, word)
  # A default value of 1 means we can check available[c] > 0 after
  # removing 1, even for characters that never appear in characters
  available = Hash.new(1)

  characters.each_char { |c| available[c] += 1 }
  word.each_char.all? { |c| available[c] -= 1; available[c] > 0 }
end

Reading & writing hash values is an O(1) operation, which means it’s pretty fast, especially compared to looping.

We still have two loops:

One to build the hash table, and one to check it. Because these loops aren’t nested, this is an O(n) algorithm.

If we benchmark these 3 solutions we can see that our analysis matches the results:

            user     system      total        real
hash    1.216667   0.000000   1.216667 (  1.216007)
sub     6.240000   1.023333   7.263333 (  7.268173)
count 222.866644   0.073333 222.939977 (223.440862)
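Here’s a minimal sketch of how you could run a benchmark like this yourself, assuming the three versions above are saved as scramble_count, scramble_sub & scramble_hash (those names & input sizes are mine, not from the original benchmark):

require "benchmark"

characters = "helloworld" * 20_000
word       = "dlrowolleh" * 20_000

Benchmark.bm(5) do |x|
  x.report("hash")  { scramble_hash(characters, word) }
  x.report("sub")   { scramble_sub(characters, word) }
  x.report("count") { scramble_count(characters, word) }
end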

Here’s a line chart:

[Line chart: time in seconds (Y axis, log scale) vs input size (X axis) for the hash, sub & count solutions]

Notice a few things:

  1. The Y axis (vertical) represents how many seconds it took to complete the work, while the X axis (horizontal) represents the input size.
  2. This is a logarithmic chart, which compresses the bigger values, but in reality the count line is a lot steeper.
  3. The reason I made this a logarithmic chart is so you can appreciate an interesting effect: with a very small input size, count is actually faster!

Summary

You learned about algorithmic time complexity, big O notation & time complexity analysis. You also saw a few examples where we compare different solutions to the same problem & analyzed their performance.

Hope you learned something new & interesting!

If you did, please share this post on social media & subscribe to the newsletter if you haven’t yet, so I can send you more content like this.

Thanks for reading.

8 thoughts on “The Definitive Guide To Time Complexity For Ruby Developers”

  1. Hi!

    Thank you for the great articles.

    Small question: why did you use Hash.new(1) and not Hash.new(0) for chars counting?

    • Hi Yura, that’s a great question!

      The reason is that I’m checking for available[c] > 0. For this to work with Hash.new(0) you would have to check for available[c] > -1, because we are removing 1 (available[c] -= 1) before the check.

      It has to be in this order because any? expects a boolean value (true / false) to be returned from the block.

  2. Hi Jesus,

    In the text you keep referring to any?, while the code uses all?.

    But more interesting, in your last solution of scramble you use each_char in 2 different ways.

    When applied to characters, each_char is used combined with a block, so it iterates over each character directly.
    But when applied to word, there is no direct iteration over each character; the enumerator is passed to all?, which in turn has a block that finally performs the iteration.

    If you are not looking carefully, you might think that each_char combined with all? is actually a loop within a loop.
