Algorithm Time Complexity by Example

Some Simple Examples with the TypeScript Playground

George Marklow · Published in Geek Culture · 4 min read · Jun 20, 2022


Photo by Steve Johnson on Unsplash

Introduction

This article uses the TypeScript Playground to illustrate some basic algorithm time complexities.

By the end of this article, you should be familiar with O(1), O(n), O(n²), and O(logN) time complexities.

Background

Time Complexity

Time Complexity is an efficiency metric that describes, in general terms, how quickly an algorithm's running time grows as the input size increases.

This allows you to compare the performance (or cost) of several algorithmic solutions to a problem, independent of the programming language or data set, so you can choose the most efficient one.

The main point here is that time complexity describes how an algorithm's running time grows as the input size increases, not the precise amount of time a single execution will take.

The other efficiency metric, Space Complexity, describes memory (or RAM) used as the input size grows and is beyond the scope of this article.

Big O Notation

Big O notation is used to categorize algorithms based on how their run time needs to expand as the input size grows.

This is denoted using the letter O followed by the growth category in brackets: for example, linear time complexity is written O(n), meaning that the number of operations grows in direct proportion to the size of the input.

Getting Started

Visit the TypeScript Playground and switch the right-hand panel to the Logs view.

Constant: O(1)

Overview

Constant-time algorithms will always execute in the same amount of time — the input has no bearing on how long it takes to run.

We will use this trivial example to get accustomed to using the TypeScript Playground.

Code

Add the following code to the left-hand side and click the Run button.

function constant(n: number): number {
  return 1;
}

console.log(constant(100));

Result

The result is always 1 regardless of the input size:

[LOG]: 1

Examples

  • Accessing an array value by index
  • Array push/pop operations
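Both of these can be tried in the Playground. A minimal sketch (the array `items` is a made-up example, not part of the article's code):

```typescript
// Index access and push/pop each take the same amount of time
// regardless of how many elements the array holds: O(1).
const items: number[] = [10, 20, 30, 40];

const third = items[2];   // access by index: O(1)
items.push(50);           // append to the end: O(1)
const last = items.pop(); // remove from the end: O(1)

console.log(third, last); // → 30 50
```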

Linear: O(n)

Overview

If the time it takes to run an algorithm is exactly proportional to the input size, it is said to have linear-time complexity.

Code

I’ve set up a linear-time complexity function where the input and output increase one-for-one:

function linear(n: number): number {
  let count = 0;
  for (let a = 0; a < n; a++) {
    count++;
  }
  return count;
}

console.log(linear(100));

Result

[LOG]: 100

Examples

  • Average-case linear search through an array for a given value
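A linear search can be sketched as follows (the helper `linearSearch` is a hypothetical name, not from the article):

```typescript
// A linear search examines each element in turn, so in the worst
// case the running time grows in step with the array length: O(n).
function linearSearch(arr: number[], target: number): number {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // found: return its index
  }
  return -1; // not found
}

console.log(linearSearch([5, 3, 8, 1], 8)); // → 2
```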

Quadratic: O(n²)

Overview

If the time it takes to run an algorithm is proportional to the square of the input size, it has quadratic-time complexity.

Code

Quadratic-time complexities typically arise from nested iteration. To demonstrate, the following code runs an inner loop of size n inside an outer loop of the same size:

function quadratic(n: number): number {
  let count = 0;
  for (let a = 0; a < n; a++) {
    for (let b = 0; b < n; b++) {
      count++;
    }
  }
  return count;
}

console.log(quadratic(100));

Result

[LOG]: 10000

i.e. for an input of 100 we can expect 100² = 10,000 operations.

Examples

  • The worst-case scenario for many sorting algorithms (BubbleSort, InsertionSort, SelectionSort, etc.) on an array whose elements are in reverse order.
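The worst case for BubbleSort shows the nested-loop pattern clearly; a minimal sketch (this `bubbleSort` is an illustrative implementation, not from the article):

```typescript
// BubbleSort compares adjacent pairs in a nested loop, so a
// reverse-ordered input of size n costs on the order of n² steps.
function bubbleSort(arr: number[]): number[] {
  const a = [...arr]; // copy so the input is left untouched
  for (let i = 0; i < a.length - 1; i++) {
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap the pair
      }
    }
  }
  return a;
}

console.log(bubbleSort([5, 4, 3, 2, 1])); // → [1, 2, 3, 4, 5]
```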

Logarithmic: O(logN)

Overview

If the time it takes to run an algorithm is proportional to the logarithm of the input size n, the algorithm has logarithmic-time complexity.

The logarithm is the inverse function of exponentiation. For example, we can calculate powers of 2 by raising it to various exponents:

pow(2,1) = 2
pow(2,2) = 4
pow(2,3) = 8

On the other hand, the base-2 logarithm calculates which exponent 2 must be raised to in order to produce the value in brackets:

log2(2) = 1
log2(4) = 2
log2(8) = 3
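You can verify these values directly in the Playground with the built-in Math.log2:

```typescript
// Math.log2 computes the base-2 logarithm directly.
console.log(Math.log2(2)); // → 1
console.log(Math.log2(4)); // → 2
console.log(Math.log2(8)); // → 3
```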

For an O(logN) algorithm, this means that as the input value gets bigger, the number of operations grows at a slower rate.

Code

To illustrate the slowing growth rate, I’ve modified the calling code to print the input size alongside the number of operations from a loop where the input increases by two in each iteration:

function logn(n: number): number {
  let count = 0;
  let a = 1;

  while (a < n) {
    count++;
    a *= 2;
  }
  return count;
}

for (let i = 2; i < 18; i += 2) {
  console.log(`Input: ${i}. Operations: ${logn(i)}`);
}

Result

You can see that as the input gets bigger, the number of operations grows at a slower rate, i.e. the growth gradually "flattens out".

[LOG]: "Input: 2. Operations: 1" 
[LOG]: "Input: 4. Operations: 2"
[LOG]: "Input: 6. Operations: 3"
[LOG]: "Input: 8. Operations: 3"
[LOG]: "Input: 10. Operations: 4"
[LOG]: "Input: 12. Operations: 4"
[LOG]: "Input: 14. Operations: 4"
[LOG]: "Input: 16. Operations: 4"

Examples

O(logN) behavior appears in Divide-and-Conquer algorithms:

  • The average-case scenario when accessing, searching, inserting, or deleting from a Binary Search Tree.
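Binary search on a sorted array follows the same halving pattern as a balanced Binary Search Tree lookup; a minimal sketch (this `binarySearch` is an illustrative implementation, not from the article):

```typescript
// Binary search halves the remaining range at each step, so a
// sorted array of n elements needs at most about log2(n) comparisons.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid; // found
    if (sorted[mid] < target) lo = mid + 1; // discard lower half
    else hi = mid - 1;                      // discard upper half
  }
  return -1; // not found
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // → 3
```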

Thanks for reading! Let me know what you think in the comments section below, and don’t forget to subscribe. 👍



George is a software engineer, author, blogger, and abstract artist who believes in helping others to make us happier and healthier.