Big-O Notation and Calculation

Fariz Mamad
3 min read · Jul 7, 2022


This article is a part of Algorithm notebook by Fariz Mamad

Algorithm performance depends on the input size and the number of operations the algorithm executes. As software engineers, we have to analyze worst-case performance, taking the time-space tradeoff into account.

O-Notation helps us analyze the worst case, a.k.a. the upper bound, of an algorithm's performance in terms of time complexity and space complexity.

Source: Data Structures — Asymptotic Analysis from Tutorialspoint

Table of Contents

  1. Common O-Notation from worst to best
  2. O-Notation of time complexity from worst to best
  3. O-Notation of space complexity from worst to best
  4. Procedure to calculate complexity

1. Common O-Notation from worst to best

  1. Factorial — O(n!)
  2. Exponential — O(c^n)
  3. Polynomial — O(n^c)
  4. Superlinear — O(n log n)
  5. Linear — O(n)
  6. Logarithmic — O(log n)
  7. Constant — O(1)
Source: Analysis of Algorithm | Big-O analysis from GeeksForGeeks

1.1 Mathematical Examples

If n = 20 (using log base 2, the convention in algorithm analysis):
1. Factorial -> 20! ≈ 2.43 × 10^18
2. Exponential -> 2^20 = 1,048,576
3. Polynomial -> 20^2 = 400
4. Superlinear -> 20 log 20 ≈ 86.4
5. Linear -> 20 = 20
6. Logarithmic -> log 20 ≈ 4.32
7. Constant -> 1 = 1
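The figures above can be reproduced directly (a minimal sketch; the labels are just print formatting, not part of the article):

```python
import math

n = 20
print(f"factorial   : {math.factorial(n):.3e}")  # ~2.433e+18
print(f"exponential : {2 ** n}")                 # 1048576
print(f"polynomial  : {n ** 2}")                 # 400
print(f"superlinear : {n * math.log2(n):.1f}")   # ~86.4
print(f"linear      : {n}")                      # 20
print(f"logarithmic : {math.log2(n):.2f}")       # ~4.32
print(f"constant    : {1}")                      # 1
```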

2. O-Notation of time complexity from worst to best

  1. O(n!) — Factorial Algorithm : brute force algorithm for Traveling Salesman Problem
  2. O(c^n) — Exponential Algorithm : Tower of Hanoi
  3. O(n^c) — Polynomial Algorithm : bubble sort, selection sort, insertion sort, bucket sort
  4. O(n log n) — Superlinear Algorithm : heap sort, merge sort
  5. O(n) — Linear Algorithm : linear search
  6. O(log n) — Logarithmic Algorithm : binary search
  7. O(1) — Constant Algorithm : ideal
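The gap between O(n) and O(log n) is easy to see by counting comparisons. A minimal sketch (the comparison counters are added for illustration, they are not part of the standard algorithms):

```python
def linear_search(arr, target):
    """O(n) time: scan every element until the target is found."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

def binary_search(arr, target):
    """O(log n) time: halve the sorted search range each step."""
    comparisons = 0
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid, comparisons
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

data = list(range(1024))          # sorted input, n = 1024
print(linear_search(data, 1023))  # worst case: all 1024 comparisons
print(binary_search(data, 1023))  # at most ~log2(1024) + 1 = 11 comparisons
```

For n = 1024, the worst-case linear search inspects every element, while binary search needs only about log2(n) probes — the difference grows dramatically with n.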

3. O-Notation of space complexity from worst to best

  1. O(n+k) — Linear in n plus key range k : radix sort, counting sort
  2. O(n) — Linear Algorithm : merge sort (auxiliary array for merging)
  3. O(log n) — Logarithmic Algorithm : quick sort (average case, recursion stack)
  4. O(1) — Constant Algorithm : linear search, binary search, bubble sort, selection sort, insertion sort, heap sort, shell sort
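The space difference shows up in how the algorithms are written. A minimal sketch, assuming the usual textbook versions: merge sort builds new lists of total size O(n) during the merge, while insertion sort rearranges the input in place with only a few extra variables, i.e. O(1) auxiliary space:

```python
def merge_sort(arr):
    """O(n) auxiliary space: the merge step builds new lists."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def insertion_sort(arr):
    """O(1) auxiliary space: sorts in place with a few variables."""
    for i in range(1, len(arr)):
        key, j = arr[i], i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(merge_sort([5, 2, 9, 1]))      # [1, 2, 5, 9]
print(insertion_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```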

4. Procedure to calculate complexity

  1. Figure out the input.
  2. Figure out n — the input size.
  3. Express the performance of the algorithm as a function of n.
  4. Keep only the highest-order term of the expression.
  5. Drop the constant factor.
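The procedure above can be walked through on a small example. A sketch (the function and its operation counts are illustrative, not from the article): the input is a list, n is its length, the nested loops perform roughly n(n-1)/2 comparisons, so f(n) ≈ n²/2; keeping the highest-order term and dropping the constant gives O(n²):

```python
def count_equal_pairs(items):
    """Counts pairs of equal elements.
    f(n) = 1 + n(n-1)/2 comparisons  (step 3)
    highest-order term: n^2/2        (step 4)
    drop the constant 1/2 -> O(n^2)  (step 5)
    """
    count = 0                               # 1 operation
    for i in range(len(items)):             # n iterations
        for j in range(i + 1, len(items)):  # ~n^2/2 iterations in total
            if items[i] == items[j]:
                count += 1
    return count

print(count_equal_pairs([1, 2, 1, 2, 3]))  # 2 equal pairs; O(n^2) time
```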

4.1 Useful Rules

  1. Constant Multiplication — if f(n) = c·g(n), then O(f(n)) = O(g(n))
  2. Polynomial Function — if f(n) = a_0 + a_1·n + a_2·n² + … + a_m·n^m, then O(f(n)) = O(n^m)
  3. Logarithmic Function — if f(n) = log_a n and g(n) = log_b n, then O(f(n)) = O(g(n)), since logarithms of different bases differ only by a constant factor
  4. Summation Function — if f(n) = f_1(n) + f_2(n) + … + f_m(n) and f_i(n) <= f_(i+1)(n) for all i = 1, 2, …, m-1, then O(f(n)) = O(max(f_1(n), f_2(n), …, f_m(n))) = O(f_m(n))
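The summation rule can be checked numerically. A sketch (the function f below is a made-up example, not from the article): for f(n) = n² + n log n + n, the n² term dominates, so the ratio f(n)/n² approaches 1 as n grows, confirming O(f(n)) = O(n²):

```python
import math

def f(n):
    # polynomial + superlinear + linear terms
    return n**2 + n * math.log2(n) + n

# The n^2 term dominates, so f(n)/n^2 -> 1 as n grows.
for n in (10, 1_000, 100_000):
    print(n, f(n) / n**2)
```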

References:

  1. Data Structures — Asymptotic Analysis (tutorialspoint.com)
  2. Analysis of Algorithms | Big-O analysis — GeeksforGeeks
