Big O notation is used in computer science to describe the complexity of an algorithm in terms of time and space (memory). It describes the worst-case scenario for how an algorithm behaves as the input size N grows, and it is really only useful when talking about large sets of data.
N | O(1) constant | O(log N) logarithmic | O(N) linear | O(N log N) linearithmic | O(N²) quadratic | O(N³) cubic |
---|---|---|---|---|---|---|
1 | 1 | 1 | 1 | 1 | 1 | 1 |
2 | 1 | 1 | 2 | 2 | 4 | 8 |
4 | 1 | 2 | 4 | 8 | 16 | 64 |
8 | 1 | 3 | 8 | 24 | 64 | 512 |
16 | 1 | 4 | 16 | 64 | 256 | 4,096 |
1,024 | 1 | 10 | 1,024 | 10,240 | 1,048,576 | 1,073,741,824 |
1,048,576 | 1 | 20 | 1,048,576 | 20,971,520 | ~1.1 × 10¹² | ~1.2 × 10¹⁸ |
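To see where these numbers come from, here is a minimal sketch (in Python, which is just my choice for illustration) that recomputes the rows of the table above by evaluating each complexity class as a rough "number of steps":

```python
import math

# Recompute the rows of the growth table above: for each input size N,
# evaluate each complexity class as a rough "number of steps".
sizes = [1, 2, 4, 8, 16, 1_024, 1_048_576]

header = ["N", "O(1)", "O(log N)", "O(N)", "O(N log N)", "O(N^2)", "O(N^3)"]
print(" | ".join(f"{h:>12}" for h in header))

for n in sizes:
    log_n = max(1, int(math.log2(n)))  # the table treats log(1) as 1
    row = [n, 1, log_n, n, n * log_n, n ** 2, n ** 3]
    print(" | ".join(f"{v:>12,}" for v in row))
```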
Explanation:
O(1) = the time taken stays the same irrespective of the size of the input
O(log N) = the time grows quickly at first and then flattens out as the size of the data set increases, because each step cuts the remaining work down (for example, by halving it)
O(N) = as you increase the size of the input, the time taken to complete operations scales linearly with that size
O(N log N) = the time grows slightly faster than linearly; doubling the input a little more than doubles the work
O(N²) = the time scales with the square of the input size, so doubling the input quadruples the work
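Each of these classes maps onto a familiar coding pattern. The functions below are a sketch in Python (the names and the use of plain lists are just for this example):

```python
def first_item(items):
    # O(1): one step regardless of how many items there are
    return items[0]

def contains(items, target):
    # O(N): in the worst case every item is inspected once
    for item in items:
        if item == target:
            return True
    return False

def binary_search(sorted_items, target):
    # O(log N): each comparison halves the remaining search space,
    # so doubling the input adds only one extra step
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def has_duplicates(items):
    # O(N²): every item is compared against every other item
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False
```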
Some other links:
http://science.slc.edu/~jmarshall/courses/2002/spring/cs50/BigO/index.html