Best Case and Worst Case of an Algorithm: When are they considered to be the "same"?
I am trying to determine the Best Case and Worst Case of the algorithm below, but I have been really confused, since our professor claims that the Best Case and Worst Case of this algorithm are the "same".
```python
import math

def sum(Arr, left, right):
    # Base case 1: empty range contributes nothing
    if left > right:
        return 0
    # Base case 2: a single element
    elif left == right:
        return Arr[left]
    # Recursive case: split the range in half and sum both halves
    else:
        mid = left + math.floor((right - left) / 2)
        leftSum = sum(Arr, left, mid)
        rightSum = sum(Arr, mid + 1, right)
        return leftSum + rightSum

def findSum(Arr):
    return sum(Arr, 0, len(Arr) - 1)

arr = [1, 2, 3, 4, 5, 7, 8, 9]
print(findSum(arr))
```
The algorithm finds the sum of an array by divide-and-conquer, and here are my initial thoughts:
Best Case: The input array has only 1 element, so it meets the second condition and returns Arr[left]. No recursion is involved, so it takes Constant Time (O(1)).
Worst Case: The input array could be arbitrarily large (N elements), and each element needs to be scanned at least once, so it would take Linear Time (O(N)).
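To make the worst-case reasoning concrete, here is the recurrence I think the recursive case follows (assuming some constant cost c per call for computing mid and adding the two partial sums, which is my own framing, not something from the lecture):

T(1) = c
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + c, for n > 1

The recursion tree then has n leaves (one per element) and n − 1 internal nodes, so T(n) should be Θ(n).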
I saw an explanation on another post with a similar question, and the explanation was that no matter how large or small the input array is, the algorithm has to go over it once, so it is Linear Time for both the Worst and Best Case. But I am still confused: the Best Case of the algorithm does not involve any recursion, so can we still claim that it is Linear Time (O(N)) instead of Constant Time (O(1))?
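While thinking about this, I put together a small instrumented version of the same function to count how many recursive calls are made for arrays of different sizes. This is my own sketch, not part of the original code: I renamed the function to sum_range to avoid shadowing the built-in sum, and the counter list and count_calls helper exist only for measurement.

```python
import math

def sum_range(arr, left, right, counter):
    counter[0] += 1  # count every call, including base cases
    if left > right:
        return 0
    elif left == right:
        return arr[left]
    else:
        mid = left + math.floor((right - left) / 2)
        return (sum_range(arr, left, mid, counter)
                + sum_range(arr, mid + 1, right, counter))

def count_calls(n):
    # Run the divide-and-conquer sum on an array of size n and
    # return the total number of recursive calls made.
    counter = [0]
    sum_range(list(range(n)), 0, n - 1, counter)
    return counter[0]

for n in [1, 2, 4, 8, 16, 1000]:
    print(n, count_calls(n))  # appears to be 2n - 1 calls for n >= 1
```

For every size I tried, the call count grows linearly with n (it looks like 2n − 1), and it depends only on the array's length, not on its contents, which seems consistent with the explanation I quoted above.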
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
