
How can you determine an algorithm’s complexity?

Algorithms and data structures · Junior/Middle/Senior

First, let’s clarify what algorithmic complexity is. It’s a way to estimate how many resources an algorithm uses as the input size grows. There are two ways to measure an algorithm’s complexity.

Time Complexity

The time complexity of an algorithm measures how its running time grows as the input size grows. It is usually expressed in Big O notation, which gives an upper bound on that growth.

| Notation     | Name           | Explanation                                              |
|--------------|----------------|----------------------------------------------------------|
| O(1)         | Constant       | Always takes the same amount of time.                    |
| O(log n)     | Logarithmic    | Grows very slowly as n increases.                        |
| O(n)         | Linear         | Time grows in direct proportion to the input size (n).   |
| O(n log n)   | Linearithmic   | Slightly slower than linear, but still efficient.        |
| O(n²)        | Quadratic      | Typical of two nested loops; slows down quickly as n grows. |
| O(2ⁿ)        | Exponential    | Very slow for large n.                                   |

Let’s break it down in more detail.

O(1) - A good example of constant time complexity is an assignment operation or accessing an object property by key.
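A minimal sketch of constant time (the `getRole` helper and `user` object are illustrative, not from the original):

```javascript
// O(1): accessing a property by key takes the same time
// whether the object has 3 keys or 3 million.
const user = { id: 42, name: "Alice", role: "admin" };

function getRole(obj) {
  return obj.role; // a single key lookup — O(1)
}
```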

O(log n) - With this complexity, execution time grows very slowly, even as the input size increases significantly. There are many examples of this, and the simplest one is binary search.
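A sketch of binary search, the example mentioned above. Each step halves the search range, so a sorted array of 1,000,000 elements needs at most about 20 comparisons:

```javascript
// O(log n): binary search on a sorted array.
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // discard the left half
    else hi = mid - 1;                      // discard the right half
  }
  return -1; // not found
}
```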

O(n) - With this complexity, the execution time grows in direct proportion to the number of elements. For example, imagine iterating over an array of 100 elements. Increasing the input to 100,000 elements would barely affect a logarithmic-time algorithm, whereas an O(n) algorithm would take roughly 1,000 times longer.

O(n log n) - In simple terms, this combines linear and logarithmic complexity. For example, a divide-and-conquer algorithm may split an array to a depth of O(log n) levels and do O(n) work processing the elements at each level.
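Merge sort (not named in the text, but a classic instance of this pattern) can sketch the idea: O(log n) levels of splitting, O(n) merging work per level:

```javascript
// O(n log n): merge sort splits the array in half recursively
// (log n levels) and merges n elements at each level.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  // Merge two sorted halves — O(n) work per level.
  const merged = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```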

O(n²) - This happens when, for each element, the algorithm checks all other elements - most commonly when there are two nested loops.
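A sketch of the nested-loop pattern (the `hasDuplicates` helper is an illustrative example, not from the original):

```javascript
// O(n²): for every element we scan all the remaining elements —
// two nested loops over the same array.
function hasDuplicates(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}
```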

O(2ⁿ) - Exponential complexity is one of the slowest. Put simply, it doubles the number of operations with each additional element. For example, with 10 elements, you’d need roughly 1,024 operations.
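The classic example (an assumption here, since the text names none) is naive recursive Fibonacci: each call spawns two more, so the work roughly doubles with every increase in n:

```javascript
// O(2ⁿ): each call branches into two recursive calls,
// so the call tree roughly doubles with every extra step.
function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}
```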

A simple example of determining an algorithm’s complexity

We have a function that returns the minimum value in an array. Essentially, we have two operations: assigning let min = arr[0] and iterating over the array arr. The assignment operation always takes constant time, O(1), because it doesn’t depend on the number of elements, while iterating over the array depends on how many elements it contains, so the complexity is O(n). The overall complexity of an algorithm is determined by the most expensive operation. In our case, that’s iterating over the array, so the time complexity of the findMin function is O(n).

```javascript
function findMin(arr) {
  let min = arr[0]; // O(1)
  for (let i = 1; i < arr.length; i++) { // O(n)
    if (arr[i] < min) min = arr[i];
  }
  return min;
}
```

Space Complexity

This is the amount of additional memory an algorithm needs, beyond the input data. For example, creating extra variables, arrays, objects, and so on. To determine space complexity, we use the same Big O notation, but instead of counting operations, we focus on how much extra memory (data structures) is being used.
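A short sketch contrasting the two cases (both helper functions are illustrative assumptions):

```javascript
// O(1) extra space: only a single accumulator,
// regardless of how large the input array is.
function sum(arr) {
  let total = 0;
  for (const x of arr) total += x;
  return total;
}

// O(n) extra space: builds a new array as large as the input.
function doubled(arr) {
  const result = [];
  for (const x of arr) result.push(x * 2);
  return result;
}
```

Note that both functions run in O(n) time; they differ only in how much extra memory they allocate.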

After answering the question, it’s a big plus to emphasize the importance of understanding complexity, since it directly relates to understanding data structures. And data structures are a foundation for understanding modern distributed systems, databases, caching, and many other important topics.
