
Wednesday 26 October 2016

Asymptotic notation

We describe the running time of an algorithm using a combination of two ideas.
  • First, we determine how long the algorithm takes as a function of the size of its input. 
  • Second, we focus on how fast that function grows with the input size. We call this the rate of growth of the running time. 

For example, compare the values of 6n^2 and 100n + 300 as n grows from 0 to 100: the linear function is larger for small n, but the quadratic one soon dominates.
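A quick numeric check of that comparison (a minimal sketch; the two functions simply stand in for hypothetical running times):

```python
# Two hypothetical running-time functions from the comparison above.
def quadratic(n):
    return 6 * n**2

def linear(n):
    return 100 * n + 300

for n in [1, 10, 20, 50, 100]:
    print(n, quadratic(n), linear(n))

# 6n^2 first exceeds 100n + 300 at n = 20, so for large inputs
# only the n^2 term determines which running time is bigger.
```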

We'll see three forms of asymptotic notation: big-Theta (Θ) notation, big-O notation and big-Omega (Ω) notation.

Big-Θ (Big-Theta) notation: When we say that a particular running time is Θ(n), we're saying that once n gets large enough, the running time is at least k1·n and at most k2·n for some constants k1 and k2.

When we use big-Θ notation, we're saying that we have an asymptotically tight bound on the running time. "Asymptotically" because it matters only for large values of n. "Tight bound" because we pin the running time down to within a constant factor above and below.
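As a rough numeric check of the definition, take f(n) = 6n + 100, which is Θ(n). The constants below are just one choice that happens to work:

```python
# Check that f(n) = 6n + 100 is sandwiched between k1*n and k2*n
# once n is large enough (here n >= 100), which is what Θ(n) means.
def f(n):
    return 6 * n + 100

k1, k2, n0 = 6, 7, 100   # one valid choice of constants
for n in [100, 1_000, 10_000, 100_000]:
    assert k1 * n <= f(n) <= k2 * n
print("6n <= 6n + 100 <= 7n holds for every tested n >=", n0)
```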

Here's a list of functions in asymptotic notation that we often encounter, listed from slowest-growing to fastest-growing, i.e., from fastest algorithms to slowest.
  • Θ(1) 
  • Θ(lg n) 
  • Θ(n) 
  • Θ(n lg n) 
  • Θ(n^2) 
  • Θ(n^2 lg n) 
  • Θ(n^3) 
  • Θ(2^n)
Note that an exponential function a^n, where a>1, grows faster than any polynomial function n^b, where b is any constant.
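For instance, 2^n is smaller than n^10 for moderate values of n, but it overtakes the polynomial permanently at n = 59, which this small sketch finds by brute force:

```python
# Find where 2^n permanently overtakes n^10 (any polynomial loses eventually).
n = 2                     # start past n = 1, where the polynomial is still tiny
while 2**n <= n**10:
    n += 1
print("2^n exceeds n^10 from n =", n, "onward")   # prints n = 59
```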

Big-O notation: We use big-Θ notation to asymptotically bound the growth of a running time to within constant factors above and below. Sometimes we want to bound from only above. For example, although the worst-case running time of binary search is Θ(lg n), it would be incorrect to say that binary search runs in Θ(lg n) time in all cases. What if we find the target value upon the first guess? Then it runs in Θ(1) time. The running time of binary search is never worse than Θ(lg n), but it's sometimes better. It would be convenient to have a form of asymptotic notation that means "the running time grows at most this much, but it could grow more slowly." We use "big-O" notation for just such occasions.

We say that the running time is "big-O of f(n)" or just "O of f(n)." We use big-O notation for asymptotic upper bounds, since it bounds the growth of the running time from above for large enough input sizes.



Now we have a way to characterize the running time of binary search in all cases. We can say that the running time of binary search is always O(lg n). We can make a stronger statement about the worst-case running time: it's Θ(lg n). But for a blanket statement that covers all cases, the strongest statement we can make is that binary search runs in O(lg n) time.
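To make that concrete, here is a standard iterative binary search (a sketch; it assumes the input list is already sorted in ascending order):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if it is absent."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid               # best case: found immediately, Θ(1)
        elif sorted_list[mid] < target:
            low = mid + 1            # discard the lower half
        else:
            high = mid - 1           # discard the upper half
    return -1                        # worst case: about lg n halvings, Θ(lg n)

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(binary_search(primes, 11))   # 4
print(binary_search(primes, 15))   # -1
```

Every case falls under O(lg n), but only the worst case is Θ(lg n).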

Big-Ω (Big-Omega) notation: Sometimes, we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. We use big-Ω notation; Ω is the Greek letter "omega."

If a running time is Ω(f(n)), then for large enough n, the running time is at least k·f(n) for some constant k.

We say that the running time is "big-Ω of f(n)." We use big-Ω notation for asymptotic lower bounds, since it bounds the growth of the running time from below for large enough input sizes.
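For example, any algorithm that must examine every element, such as finding the maximum of an unsorted list, runs in Ω(n) time on every input. A small sketch that counts comparisons to show the lower bound:

```python
# Finding the maximum of an unsorted list has to look at all n elements,
# so the number of comparisons grows at least linearly: Ω(n).
def find_max(values):
    comparisons = 0
    best = values[0]
    for v in values[1:]:
        comparisons += 1
        if v > best:
            best = v
    return best, comparisons

for n in [10, 100, 1000]:
    _, count = find_max(list(range(n)))
    print(n, "elements ->", count, "comparisons")   # always n - 1
```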

Common Data Structure Operations

Data Structure | Average: Access, Search, Insertion, Deletion | Worst: Access, Search, Insertion, Deletion | Space (Worst)
Array | Θ(1) Θ(n) Θ(n) Θ(n) | O(1) O(n) O(n) O(n) | O(n)
Stack | Θ(n) Θ(n) Θ(1) Θ(1) | O(n) O(n) O(1) O(1) | O(n)
Queue | Θ(n) Θ(n) Θ(1) Θ(1) | O(n) O(n) O(1) O(1) | O(n)
Singly-Linked List | Θ(n) Θ(n) Θ(1) Θ(1) | O(n) O(n) O(1) O(1) | O(n)
Doubly-Linked List | Θ(n) Θ(n) Θ(1) Θ(1) | O(n) O(n) O(1) O(1) | O(n)
Skip List | Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) | O(n) O(n) O(n) O(n) | O(n log(n))
Hash Table | N/A Θ(1) Θ(1) Θ(1) | N/A O(n) O(n) O(n) | O(n)
Binary Search Tree | Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) | O(n) O(n) O(n) O(n) | O(n)
Cartesian Tree | N/A Θ(log(n)) Θ(log(n)) Θ(log(n)) | N/A O(n) O(n) O(n) | O(n)
B-Tree | Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) | O(log(n)) O(log(n)) O(log(n)) O(log(n)) | O(n)
Red-Black Tree | Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) | O(log(n)) O(log(n)) O(log(n)) O(log(n)) | O(n)
Splay Tree | N/A Θ(log(n)) Θ(log(n)) Θ(log(n)) | N/A O(log(n)) O(log(n)) O(log(n)) | O(n)
AVL Tree | Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) | O(log(n)) O(log(n)) O(log(n)) O(log(n)) | O(n)
KD Tree | Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) | O(n) O(n) O(n) O(n) | O(n)
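As a quick illustration of the average-case rows above, looking up a key in a hash table (a Python dict) stays essentially flat as n grows, while searching an unsorted array (a Python list) scans every element. A rough timing sketch; absolute numbers depend on the machine:

```python
import timeit

for n in [1_000, 10_000, 100_000]:
    data_list = list(range(n))
    data_dict = {x: True for x in data_list}
    missing = -1   # absent key, forcing a full scan of the list

    list_time = timeit.timeit(lambda: missing in data_list, number=100)
    dict_time = timeit.timeit(lambda: missing in data_dict, number=100)
    print(f"n={n:>7}  list search: {list_time:.5f}s  dict lookup: {dict_time:.5f}s")
```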

Array Sorting Algorithms 
Algorithm | Best Time | Average Time | Worst Time | Space (Worst)
Quicksort | Ω(n log(n)) | Θ(n log(n)) | O(n^2) | O(log(n))
Mergesort | Ω(n log(n)) | Θ(n log(n)) | O(n log(n)) | O(n)
Timsort | Ω(n) | Θ(n log(n)) | O(n log(n)) | O(n)
Heapsort | Ω(n log(n)) | Θ(n log(n)) | O(n log(n)) | O(1)
Bubble Sort | Ω(n) | Θ(n^2) | O(n^2) | O(1)
Insertion Sort | Ω(n) | Θ(n^2) | O(n^2) | O(1)
Selection Sort | Ω(n^2) | Θ(n^2) | O(n^2) | O(1)
Tree Sort | Ω(n log(n)) | Θ(n log(n)) | O(n^2) | O(n)
Shell Sort | Ω(n log(n)) | Θ(n(log(n))^2) | O(n(log(n))^2) | O(1)
Bucket Sort | Ω(n+k) | Θ(n+k) | O(n^2) | O(n)
Radix Sort | Ω(nk) | Θ(nk) | O(nk) | O(n+k)
Counting Sort | Ω(n+k) | Θ(n+k) | O(n+k) | O(k)
Cubesort | Ω(n) | Θ(n log(n)) | O(n log(n)) | O(n)
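To see where the Insertion Sort row's Ω(n) best case and O(n^2) worst case come from, here is a comparison-counting sketch:

```python
# Insertion sort with a comparison counter: already-sorted input costs
# n - 1 comparisons (linear), reverse-sorted input costs about n^2 / 2.
def insertion_sort(values):
    a = list(values)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]   # shift the larger element one slot right
            j -= 1
        a[j + 1] = key
    return a, comparisons

n = 200
_, best = insertion_sort(list(range(n)))           # already sorted
_, worst = insertion_sort(list(range(n, 0, -1)))   # reverse sorted
print("best case: ", best)    # 199   ~ n
print("worst case:", worst)   # 19900 ~ n^2 / 2
```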

Ref: Khan Academy
