Time Complexity of Algorithms — Big O Notation Explained In Plain English


General Discussion

In the previous sections, we learned what time complexity means and how we can use Big O notation to denote it. More specifically, we studied two realistic examples, one for O(n) and one for O(1). As shown in the figure below, the green line shows the O(n) time complexity and the purple line shows the O(1) time complexity.

Time Complexity — Big O Notation (Source: Wikipedia)

The figure also shows other kinds of time complexity, such as O(n*n) and O(sqrt(n)). As you can tell, the steeper a curve is, the higher the time complexity it represents. For example, O(1) has a slope of 0, while O(n) has a slope coefficient of 1 (or some other fixed constant in practice). When the time complexity is O(n!) (where 3! = 3 * 2 * 1 and 4! = 4 * 3 * 2 * 1), the time needed for an input size of 100 or 200 is already enormous!
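To get a feel for how quickly these growth rates diverge, here is a small illustrative sketch (not taken from the figure itself) that prints the number of basic steps each class implies for a few input sizes:

import math

# Illustrative only: number of basic steps implied by each complexity
# class for a few input sizes.
for n in (5, 10, 20):
    print(f"n={n:>2}  O(1)=1  O(n)={n}  O(n*n)={n * n}  O(n!)={math.factorial(n)}")

# Even at n=20, n! is already 2,432,902,008,176,640,000 steps, which is
# why factorial-time algorithms become impractical almost immediately.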

One outstanding question is how to analyze time complexity in the first place. One simple, practical approach is to test your algorithm with a small dataset, then gradually add more data and observe how the running time changes.
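As a rough sketch of that idea, the snippet below times a stand-in algorithm (Python's built-in sum, used here purely for illustration) on progressively larger inputs:

import time

# Time a stand-in algorithm on inputs of increasing size and watch
# how the runtime grows.
for size in (10_000, 100_000, 1_000_000):
    data = list(range(size))
    start = time.perf_counter()
    sum(data)  # placeholder for the algorithm you want to measure
    elapsed = time.perf_counter() - start
    print(f"input size {size:>9,}: {elapsed:.6f} seconds")

# If each tenfold increase in input size makes the runtime roughly ten
# times longer, the algorithm is behaving like O(n).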

This empirical approach works in practice, but you can also analyze your code abstractly. In essence, you simply estimate the number of basic steps your algorithm needs to run to produce the desired output.

Take the algorithm that calculates the total sale amount: if we have X records, we need to run the addition X times. In other words, the algorithm needs X steps to produce the output for input data of size X. In this case, we say that the algorithm has a time complexity of O(n).
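A minimal sketch of such an algorithm might look like the following (the function name total_sale_amount and the sample values are my own, used only to illustrate the step counting):

def total_sale_amount(sale_records):
    # One addition per record: X records take X steps, so the running
    # time grows linearly with the input size, i.e. O(n).
    total = 0
    for amount in sale_records:
        total += amount
    return total

print(total_sale_amount([20, 5, 42]))  # 67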

Consider the following hypothetical example. In the code below, we have a nested loop (one iteration loop nested inside another). As you can estimate, the total number of iterations is n * n. For example, if n=4, we have to run the addition 16 times; if n=5, the number of additions is 25.

With these simple calculations, do you know what the time complexity of this algorithm is?

def another_algorithm(n):
    # The outer loop runs n times, and the inner loop runs n times for
    # each outer iteration, so the addition executes n * n times.
    running_total = 0
    for number_i in range(n):
        for number_j in range(n):
            running_total += number_i + number_j
    return running_total

Yes, I guess you got the right answer — the time complexity is O(n*n). It’s not that hard, right?

The present article uses some simplified examples, and in reality, analyzing complex algorithms can be a lot harder. Nevertheless, the purpose of this piece is simply to provide a 30,000-foot view of the time complexity of algorithms. I hope you've learned how to use Big O notation to indicate an algorithm's time complexity by conducting a very basic analysis.

The key takeaway is that, if at all possible, you always want to minimize the time complexity of your algorithms.

To learn more about time complexity and Big O notation, here’s an excellent reference that you can take a look at.