Wednesday, March 8, 2017

Best Time Complexity Calculation

What are some easy ways to understand and calculate the time complexity of algorithms?


You'd already be aware of the Big-O, Theta and Omega notations. Big O gives the upper bound - the worst possible execution time of an algorithm. Ω (Omega) is the converse of O, i.e., the lower bound - the best-case estimate. Θ (Theta) sits in between: it is a tight bound, used when the upper and lower bounds match.

Big O is the most commonly used notation, since most of the time we want to know the maximum time an algorithm could take. Let me show some examples.



= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Understanding Time Complexity

Let us say there's a small piece of code (maybe just a single line) that takes one second to run on a slow computer. This piece of code will be applied to a list of items; something like an array waiting to be searched or sorted.

If you have designed an algorithm that is O(1), it means:
If the array contains just a single item, it will take 1 second.
If the array has 10 items, it will still take just 1 second to finish with all of them.
If it has 100, again only 1 second.
You see, the algorithm you designed stays great even for large arrays.
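To make this concrete, here is a minimal C sketch (get_item is just an illustrative name, not something from the examples above): looking up an array element by index takes the same time however large the array is.

// Accessing an element by its index is a single step,
// no matter how many items the array holds - O(1).
int get_item(int arr[], int index)
{
    return arr[index];
}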

Let's move on to larger, more practical time complexities. Suppose you have created a similar algorithm, but this time it is O(n).
If the array has one item, it will take 1 second. Still seems okay.
If we have 10 items, it will take 10 seconds. Did you see the difference?
Now if we have 100 items, it will take 100 seconds. That's getting really bad.
What will happen with even longer lists?
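To picture an O(n) algorithm, think of a linear search - in the worst case it examines every one of the n items once. A minimal sketch (linear_search is just an illustrative name):

// Scan the array from left to right until the key is found.
// Worst case: all n items are compared - O(n).
int linear_search(int arr[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)
        if (arr[i] == key)
            return i;   // found at position i
    return -1;          // not found after n comparisons
}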

You would have seen, or even designed, many algorithms of order O(n^2).
Again, processing a single item takes 1 second.
With 10 items, it will take 100 seconds to process the whole array.
And what if you have 100? It'll take 10,000 seconds.
In practice we may have really big arrays containing millions of items. Such algorithms can make you wait an eternity.
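A common way to end up with O(n^2) is comparing every pair of items in the array. A minimal sketch (count_duplicate_pairs is an illustrative example, not from the text above):

// Compare every pair (i, j) with j > i: roughly n*n/2
// comparisons in total - O(n^2).
int count_duplicate_pairs(int arr[], int n)
{
    int i, j, count = 0;
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (arr[i] == arr[j])
                count++;
    return count;
}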

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

Calculating Time Complexity

If there is a single loop, and the loop variable is incremented linearly, then it's O(n), e.g.
for (i = 0; i < n; i++)        // O(n)

for (i = 0; i < n; i = i + 4)  // still O(n) - about n/4 iterations, and constant factors are dropped

If the loop variable grows geometrically (it is multiplied by a constant each pass), then it's O(log n), because i needs only about log2(n) doublings to reach n,
e.g.
for (i = 1; i < n; i = i * 2)  // O(log n)

Note that the implementations don't have to use loops; they may be written using recursion instead.
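For example, binary search is often written recursively: each call halves the range it inspects, so on a sorted array of n items it makes only about log2(n) calls - O(log n). A rough sketch (it assumes arr is already sorted):

// Each call halves the range [lo, hi], so at most about
// log2(n) calls are made - O(log n).
int binary_search(int arr[], int lo, int hi, int key)
{
    int mid;
    if (lo > hi)
        return -1;     // empty range: key is not present
    mid = lo + (hi - lo) / 2;
    if (arr[mid] == key)
        return mid;
    else if (arr[mid] < key)
        return binary_search(arr, mid + 1, hi, key);
    else
        return binary_search(arr, lo, mid - 1, key);
}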

If there are nested loops, the complexities multiply: when the outer loop is O(n) and the inner one is O(log n), the overall complexity is O(n log n),
e.g.
for (i = 0; i < n; i++)            // O(n)
{
    for (j = 1; j < n; j = j * 3)  // O(log n)
        ;
}
// Overall: O(n log n)
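To get a feel for why that matters: with n = 1,000,000 the outer loop runs a million times, but the inner loop runs only about 13 times per pass (log base 3 of a million is roughly 12.6), so the total work is around 13 million steps instead of the trillion steps a plain O(n^2) nesting would cost.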

This is only a rough rule of thumb. In general, you need a solid grasp of the underlying concepts to derive the complexity.



= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

In the definition we say f(x) ∈ O(g(x)) if there exist c > 0 (e.g., c = 1) and x0 (e.g., x0 = 5) such that f(x) ≤ c·g(x) whenever x > x0. You can have a look at the graph above (from Wikipedia). Up to the point x0 the behavior may not be straightforward; Big-O is concerned with what happens for large inputs, which is why we only require the bound to hold beyond that limit.
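To see the definition in action, take f(x) = 3x^2 + 5. With c = 4 and x0 = 3 we have 3x^2 + 5 ≤ 4x^2 whenever x > 3 (since x^2 ≥ 5 from that point on), so f(x) ∈ O(x^2). These particular values of c and x0 are just one choice that happens to work; any valid pair proves the claim.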

A small comparison of time-complexity orders is in the graph below. You can see how badly the higher orders blow up. Notice that the Y axis is on a log scale.


= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

A great explanation is given by various writers answering a related question on Stack Overflow. You'll find it interesting.
http://stackoverflow.com/questio...
You should also have a look at
How does one know which notation of time complexity analysis to use?
Graph Reference : http://n0b3l1a.blogspot.in/2010/01/bigo-time-complexity.html
Time Complexity Explanation Reference : http://stackoverflow.com/questio...


All credit goes to Aditya Joshi
