What are the best, worst, and average case running times of an algorithm?



Solution 1:[1]

Worst Case Analysis (Usually Done): In worst case analysis, we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. For linear search, the worst case happens when the element to be searched for (call it x) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst case time complexity of linear search is Θ(n).
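The answer refers to code that is not reproduced here; a minimal linear search sketch in Python, with the names search, arr, and x assumed from the prose, could look like this:

    def search(arr, x):
        # Return the index of x in arr, or -1 if x is not present.
        for i in range(len(arr)):
            if arr[i] == x:
                return i  # best case: x at index 0, one comparison
        return -1  # worst case: x absent, all n elements compared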

Average Case Analysis (Sometimes Done): In average case analysis, we take all possible inputs, calculate the computing time for each of them, sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum the cost of all n+1 cases and divide by (n+1): (1 + 2 + ... + n + n) / (n+1), which is Θ(n). The average case time complexity is therefore Θ(n), the same order as the worst case.
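As a sanity check of that derivation, here is a small sketch (assuming the search routine above and a uniform choice among the n+1 cases) that averages the comparison counts empirically:

    def comparisons(arr, x):
        # Count how many elements linear search examines before it stops.
        count = 0
        for value in arr:
            count += 1
            if value == x:
                return count
        return count  # x absent: all n elements examined

    n = 1000
    arr = list(range(n))
    cases = arr + [-1]  # x at each of the n positions, plus x not present
    average = sum(comparisons(arr, x) for x in cases) / len(cases)
    print(average)  # roughly n / 2, i.e. linear in n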

Best Case Analysis (Bogus): In best case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the best case time complexity is Θ(1).

Solution 2:[2]

Think of an algorithm as a program. This program takes some data, churns on it for a while, and then spits out an answer. Of course, we care about how long the program churns on the data before giving the answer.

But there's a catch: for many algorithms, running time depends on the data itself. Many sorting algorithms are faster for already-sorted data, for instance, and some are slowest for data that's sorted in reverse order.

So let's think about where that data comes from. Maybe your best friend gets to pick the data. Your friend picks data that causes your program to run quickly, and we call that runtime the best case, since the algorithm will never do better than that. Maybe your worst enemy (in textbooks, this is called the adversary) gets to pick the data. Your worst enemy picks data that causes your program to run slowly, and we call that runtime the worst case because the algorithm will never do worse than that. And maybe a giant roulette wheel gets to pick your data. Then, you can run your algorithm on a bunch of roulette wheel data, and average all the runtimes to get the average case time.

Solution 3:[3]

The best case would be, for example, a sorting algorithm given input that is already sorted, so that little or no work needs to be done. The worst case depends on your algorithm: think about which input would cause it to take the longest amount of time.

Solution 4:[4]

The running time of an algorithm depends on the size and "complexity" of the input.

For example, the best case running time of insertion sort on an input of size n is proportional to n, i.e. c * n time units for some constant c that depends on the cost (time) of comparisons, arithmetic, and so on in your computing model. The worst case running time of this algorithm (insertion sort) is proportional to n * n. To make a statement about the average time, we need some assumption about the distribution of the input data: e.g., if the input is random data (and therefore most likely not sorted), the average running time is again proportional to n * n.

If you know more about the input data, e.g. that it is sorted in decreasing order (and we are sorting into increasing order), the average running time is still proportional to n * n, but the constant factor is higher, because every new element must be compared against the entire sorted sub-list before it can be inserted.
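To make those insertion sort figures concrete, here is a minimal sketch with a comparison counter (the counter is an illustrative addition, not code from the answer):

    def insertion_sort(arr):
        # Sort arr in place in increasing order; return the comparison count.
        comparisons = 0
        for i in range(1, len(arr)):
            key = arr[i]
            j = i - 1
            while j >= 0:
                comparisons += 1
                if arr[j] <= key:
                    break
                arr[j + 1] = arr[j]  # shift the larger element one slot right
                j -= 1
            arr[j + 1] = key
        return comparisons

    n = 200
    print(insertion_sort(list(range(n))))         # already sorted: n - 1 comparisons
    print(insertion_sort(list(range(n, 0, -1))))  # reverse sorted: n * (n - 1) / 2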

Another, more complex example is quicksort: its average and best case running time for random data is proportional to n * log n. The worst case time is still n * n, often for an already sorted input, but this depends on how the algorithm chooses the pivot element in the divide step.
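Here is a minimal quicksort sketch showing that behavior; the first-element pivot used below is one naive choice that makes an already sorted input the worst case (every partition is then maximally unbalanced, so the work grows to n * n and the recursion depth to n):

    import random

    def quicksort(arr):
        # Return a sorted copy of arr, using the first element as pivot.
        if len(arr) <= 1:
            return arr
        pivot = arr[0]  # naive pivot choice: degenerates on sorted input
        smaller = [x for x in arr[1:] if x < pivot]
        larger = [x for x in arr[1:] if x >= pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    data = list(range(1000))
    random.shuffle(data)  # random data: expected n * log n work
    assert quicksort(data) == sorted(data)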

Solution 5:[5]

The worst case is generally denoted using asymptotic notation, i.e. big-O.

The best case is denoted using asymptotic notation, i.e. big-Omega (Ω).
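For reference, the standard definitions behind these notations (added for completeness; they are not part of the original answer):

    f(n) = O(g(n))  if there exist c > 0 and n0 with f(n) <= c * g(n) for all n >= n0   (upper bound)
    f(n) = Ω(g(n))  if there exist c > 0 and n0 with f(n) >= c * g(n) for all n >= n0   (lower bound)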

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: samaksh shrivastava
Solution 2: Adam Mihalcin
Solution 3:
Solution 4: Stefan
Solution 5: TANYA