Big O Notation

Hatice Surumlu
Jan 15, 2024
[Cover image: Grokking Algorithms: An Illustrated Guide for Programmers and Other Curious People]

Big O Notation is a measure used to describe the efficiency of an algorithm; it represents an upper bound on the algorithm’s running time. In other words, it provides an estimate of the worst-case complexity of an algorithm.
Big-O notation predicts how the running time of your code changes as the input changes. (Note: Big-O does not express the speed of an algorithm in seconds; instead, it describes the number of operations and how quickly that number grows with the input size.)

You can consider Big-O notation as a tool to measure how efficient your code is as the input size increases.

Let’s say we have a sorted list of 100 numbers, and we want to find the position of the number 37 in that list. Which method should we choose: binary search or simple search?

(By the way, if you’re not familiar with these two methods, there’s no need to worry. Our current focus is not on how they work.)

[Image: penjee.com]

The algorithm we choose needs to be both fast and correct. Simple search is slower because it checks each item one by one, while binary search is faster. On the other hand, simple search is easier to implement and leaves less room for mistakes.

Let’s examine our performance on a 100-element list by trying out both algorithms:

Assume that checking a single number takes 1 millisecond.

With the Simple Search algorithm, you would need to examine all 100 numbers in the worst-case scenario, which would take a total of 100 ms.

However, using the Binary Search algorithm, this process would take approximately log₂(100) ms:

log(100) / log(2) = 6.64385618… ms

In other words, it would take approximately 7 ms (in the worst case, ⌈log₂(100)⌉ = 7 checks).

(If you didn’t follow the calculation above, don’t worry; it isn’t our current focus. :D But if you’re curious, I recommend taking a look at how the binary search algorithm works.)
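For the curious, here is a minimal sketch of both approaches in Python. The function names and the example list are illustrative assumptions, not code from the book:

```python
def simple_search(items, target):
    """Check every element one by one: up to len(items) comparisons -> O(n)."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # not found


def binary_search(items, target):
    """Repeatedly halve a *sorted* list: about log2(len(items)) comparisons -> O(log n)."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # not found


numbers = list(range(1, 101))        # a sorted list of 100 numbers
print(simple_search(numbers, 37))    # 36 (the index of the number 37)
print(binary_search(numbers, 37))    # 36
```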

You might not have found this millisecond difference to be very significant.

Now, let’s calculate the execution times assuming our list is not 100 elements but rather 10,000 or even 1,000,000,000 elements. 🤔

Here is how the worst-case numbers work out:

For a list of 1,000,000,000 elements, our Simple Search algorithm would need a full 11 days to complete the task! Our Binary Search algorithm, on the other hand, would locate the desired number in roughly 30 milliseconds (log₂(1,000,000,000) ≈ 30). 😎
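You can reproduce these figures with a quick back-of-the-envelope calculation in Python (again assuming 1 ms per check, as above):

```python
import math

MS_PER_CHECK = 1  # assumption from the example above: each check costs 1 ms

for n in (100, 10_000, 1_000_000_000):
    simple_ms = n * MS_PER_CHECK                         # worst case: check every element
    binary_ms = math.ceil(math.log2(n)) * MS_PER_CHECK   # worst case: about log2(n) checks
    print(f"n = {n:,}: simple ≈ {simple_ms:,} ms, binary ≈ {binary_ms} ms")

# For n = 1,000,000,000: simple ≈ 1,000,000,000 ms (about 11.6 days), binary ≈ 30 ms
```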

Big O Notation is an asymptotic notation that allows us to express the performance or complexity of an algorithm as a function of its input size.

With Big O, a programmer can better understand:

1- The worst-case scenario of an algorithm,

2- How its running time grows as the input size grows,

3- How much memory it requires.

Visualizing different Big O run times

Let’s now reinforce the topic with an example from the book I love so much, ‘Grokking Algorithms: An illustrated guide for programmers and other curious people.’ By the way, I’ll leave the link to this book in the references section. I highly recommend checking it out. It’s an amazing book! ❤️

Let’s assume that we need to draw a grid of 16 boxes on a piece of paper. Below, I will describe two drawing methods. In your opinion, which algorithm is more reasonable?

Algorithm 1:

The first approach is to draw each box one by one. Let’s recall once again: Big O notation represents the number of operations performed.

In this algorithm, we draw one box in a single operation step. For a grid of 16 boxes, we perform 16 operations.

What do you think the running time of this algorithm is? (I will also touch upon the second example and provide the answer. 😊)

Algorithm 2:

Now, let’s try to solve this problem by folding the paper. Each time we fold the paper, the number of boxes doubles.

Let’s fold the paper repeatedly.

After 4 folds, let’s unfold the paper, and voila, we have a splendid grid of 16 boxes ready! 🤩

In this example, we performed a total of 4 folding operations to achieve a 16-box grid.

The running time of Algorithm 1 is O(n). (We drew each box with 1 operation, and for 16 boxes, we performed a total of 16 operations.)

The running time of Algorithm 2 is O(log n). (We created 16 boxes with just 4 operations: log₂(16) = 4.)
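Here is a tiny, purely illustrative sketch of the two algorithms in Python, counting operations rather than drawing anything (the function names are my own):

```python
def draw_one_by_one(boxes):
    """Algorithm 1: one drawing operation per box -> O(n)."""
    return boxes  # the number of operations equals the number of boxes

def fold_paper(boxes):
    """Algorithm 2: each fold doubles the number of boxes -> O(log n)."""
    folds = 0
    sections = 1
    while sections < boxes:
        sections *= 2
        folds += 1
    return folds

print(draw_one_by_one(16))  # 16 operations
print(fold_paper(16))       # 4 operations, because log2(16) = 4
```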

Finding and Comparing the Big-O Notations of Algorithms

When writing Big-O notation, the term that grows the fastest as the input grows is referred to as the dominant term. For example, consider the function g(n) = n² + 5n + 6. Here, n² grows faster than 5n and 6, so g(n) is O(n²).

In Big-O notation, we ignore non-dominant terms and constants, because as the input grows toward infinity, the dominant term is the one that governs the growth. Let’s work through a few examples; after them, a small numeric check shows why this simplification is safe.

Before we dive into the examples, it’s worth remembering the following famous hierarchy of growth rates, from slowest-growing to fastest-growing: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!).

Examples:

  • g(n) = 9n -> Simplified as O(n).
  • g(n) = 7n² + 6n + 1001 -> Simplified as O(n²).

  • g(n) = 7n + 56 -> Simplified as O(n).
  • g(n) = 47n -> Simplified as O(n).
  • g(n, m) = 5·n·m + 3·n + 56 -> Simplified as O(n·m).
  • g(n) = 3n² + 4n -> Simplified as O(n²).
  • g(n) = 5n⁰ + n³·log(n) + n·log(n) + 0.9n³ -> Simplified as O(n³·log n).
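To see numerically why dropping the constants and the non-dominant terms is safe, here is a quick, illustrative check for g(n) = n² + 5n + 6: as n grows, the dominant term n² accounts for essentially all of g(n).

```python
for n in (10, 1_000, 1_000_000):
    g = n**2 + 5 * n + 6   # the full function
    dominant = n**2        # just the dominant term
    print(f"n = {n}: the dominant term is {dominant / g:.4%} of g(n)")

# n = 10:        about 64% of g(n)
# n = 1,000:     about 99.5% of g(n)
# n = 1,000,000: about 99.9995% of g(n)
```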

In this article, I aimed to explain the basics of Big-O Notation. I tried to give as many examples as possible, because in my next article we will calculate the Big-O notation of actual algorithms. I hope this article has been useful to you.

If you have any questions regarding the topic, feel free to message me anytime. 🎈

REFERENCES:

Bhargava, Aditya Y. Grokking Algorithms: An Illustrated Guide for Programmers and Other Curious People. Manning Publications, 2016.