Big O Notation

“Big O” notation is a mathematical notation used to describe the performance of a given algorithm relative to its input size. The “O” stands for “order,” as in order of growth. There are two types of complexity that scale with input size: time complexity, the amount of time an algorithm requires to run, and space complexity, the amount of memory it requires. This article focuses on time complexity. The most common time complexities can be divided into four main categories, listed here from most efficient to least:

O(1) | Constant

Execution time is proportional to: 1

Algorithms with constant time complexity are not impacted by input size. An example is retrieving an element with a known index from a collection. If the location of the element isn’t modified by the size of the collection, the element will be equally easy to find in a small collection or a large one.
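As a minimal sketch (the function name is illustrative), indexing into a Python list is constant time because lists are array-backed, so the element's location is computed directly rather than searched for:

```python
def get_first(items):
    """Retrieve the element at a known index; O(1) regardless of length."""
    return items[0]

small = [1, 2, 3]
large = list(range(1_000_000))

# Both lookups take essentially the same time despite the size difference.
print(get_first(small))  # 1
print(get_first(large))  # 0
```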


O(log n) | Logarithmic

Execution time is proportional to: logarithm of input size

With logarithmic time complexity, progressively larger inputs yield increasingly smaller performance penalties, making such algorithms ideal for searching large data sets. The classic example is a binary search algorithm. When applied to a sorted collection, a binary search only requires one additional iteration of code for every doubling of the input size.
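A sketch of the binary search described above: each pass discards half of the remaining range, so a collection of n elements needs at most about log₂(n) iterations.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent. O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2              # midpoint of the remaining range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1                  # discard the lower half
        else:
            hi = mid - 1                  # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```

Note that the input must already be sorted; on unsorted data the halving step is not valid.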


O(n) | Linear

Execution time is proportional to: input size

Linear algorithms have a one-to-one relationship between the size of the input and the time it takes for the algorithm to execute. For every added data point in the input, an additional iteration of code must be run.
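For example, summing a collection by hand (the function name is illustrative) visits each element exactly once, so the loop body runs n times for n inputs:

```python
def linear_sum(numbers):
    """Sum a collection by visiting every element once; O(n)."""
    total = 0
    for n in numbers:     # one iteration per element
        total += n
    return total

print(linear_sum([1, 2, 3, 4]))  # 10
```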


O(n²) | Quadratic

Execution time is proportional to: square of input size

Of the four main time complexities listed here, quadratic complexity faces the steepest performance penalty for an increase in input size. This is exemplified by the nested for-loop pattern. Every time the collection being iterated by the outer loop grows by one element, the outer loop runs an additional pass, and the inner loop (and all the code contained therein) runs an additional time within every pass.
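The nested-loop pattern can be sketched with a naive duplicate finder (the function name is illustrative): every element is compared against every element after it, roughly n²/2 comparisons in total.

```python
def find_duplicates(items):
    """Find duplicated values via the nested-loop pattern; O(n^2)."""
    dupes = []
    for i in range(len(items)):                 # outer loop: n passes
        for j in range(i + 1, len(items)):      # inner loop runs within each pass
            if items[i] == items[j] and items[i] not in dupes:
                dupes.append(items[i])
    return dupes

print(find_duplicates([1, 2, 2, 3, 1]))  # [1, 2]
```

Doubling the input size here roughly quadruples the number of comparisons, which is exactly the penalty the quadratic category describes.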


In summary…

“Big O” notation is a way to measure how the performance of an algorithm is impacted when the size of its input increases. The four most common types are constant, logarithmic, linear and quadratic.
