Asymptotic behavior of algorithms and Big O comparison [duplicate]

I am a little confused about a specific case involving Big O notation and the asymptotic behavior of algorithms. While reading the blog http://discrete.gr/complexity/, which describes these notations very well, I came across this statement to be judged true or false:

An O(n) algorithm is Θ(1)

The answer says that this may or may not be true depending on the algorithm. In the general case it's false. If an algorithm is Θ( 1 ), then it certainly is O( n ). But if it's O( n ) then it may not be Θ( 1 ). For example, a Θ( n ) algorithm is O( n ) but not Θ( 1 ).

I am trying hard to comprehend this answer. I understand that Big O implies that a program can asymptotically do no worse. So I interpret the statement above as saying that O(n) is worse than Θ(1), which would make it true.

Can someone explain with an example?



Solution 1:[1]

  • If you know that a task requires exactly one week (= Θ(1)), you can surely say that you can do it in at most a year (= O(n)).

  • If you know that a task requires at most a year (= O(n)), you cannot be sure that you'll do it in a week (= Θ(1)).

  • If you know that a task requires exactly one year (= Θ(n)), you could also say that it requires at most a year (= O(n)), and you're sure it cannot be done in a week (≠ Θ(1)).
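The week/year analogy can be sketched in code by counting abstract steps rather than timing anything. The step-count functions below are hypothetical stand-ins for real algorithms, not from the original answer: a Θ(1) algorithm's step count is bounded by c·n for every n ≥ 1 (so it is also O(n)), while a Θ(n) algorithm's step count is O(n) but exceeds any fixed constant.

```python
def steps_constant(n):
    # A Θ(1) algorithm (e.g. reading the first list element): one step, regardless of n.
    return 1

def steps_linear(n):
    # A Θ(n) algorithm (e.g. scanning a whole list): n steps.
    return n

# Θ(1) implies O(n): steps_constant(n) <= 1 * n for every n >= 1.
for n in (1, 10, 1_000_000):
    assert steps_constant(n) <= 1 * n

# O(n) does not imply Θ(1): steps_linear is O(n),
# but its step count is not bounded by any constant.
assert steps_linear(1_000_000) > 1000
```

The direction of the implication only goes one way, which is exactly the point of the true/false exercise.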

Solution 2:[2]

The example basically tries to cover two ideas:

  1. If an algorithm is Θ(f(n)), it means it is both Ω(f(n)) and O(f(n)). It is asymptotically neither better nor worse than f(n); it has the same asymptotic behavior.
  2. Functions that are O(1) can be seen as a subset of functions that are O(n). This can be generalized, but there's no need to get too formal here, I guess, to avoid mathematically incorrect disasters from my side. It means that if an algorithm can never do worse than a constant, then it can never do worse than a linear function.

So the idea is to break Θ up into the O and Ω notations, and then identify which is a subset of which.
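As a rough numeric sketch of that breakup (not a proof), one can check the defining inequalities of Θ directly. The function g(n) = 3n + 5 and the witness constants below are made up for illustration: g is Θ(n) because c1 = 3 and c2 = 4 satisfy c1·n ≤ g(n) ≤ c2·n for all n ≥ 5.

```python
def g(n):
    # A hypothetical running-time function: g(n) = 3n + 5.
    return 3 * n + 5

c1, c2, n0 = 3, 4, 5  # witnesses: c1*n <= g(n) <= c2*n for all n >= n0

for n in range(n0, 10_000):
    assert c1 * n <= g(n)   # the lower bound: g is Ω(n)
    assert g(n) <= c2 * n   # the upper bound: g is O(n)
# Both bounds hold, so g is Θ(n) -- i.e. Θ is O and Ω together.
```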

Also, it's nice to notice that ANY algorithm (with non-null complexity, at least) is Ω(1). An algorithm can never do better than a constant.

Example: "Mammals are humans."

The answer: no, not in general. All humans are mammals though. Humans are a subset of mammals.

I tried to think of another example, but it was either too technical or not clear enough. So I'll leave this not-so-well-drawn but rather clear graph here, which I found by googling "o omega theta" and looking at the images. There are a few other good images too.

[Graph of common complexity functions] (source: daveperrett.com)

Basically, for each function f in this graph: anything above it is Ω(f(n)), because it can never do better than f (it can never drop below it as n increases); anything below it is O(f(n)), because it can never do worse than f (it can never rise above it as n increases).
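The ordering such graphs show can be checked numerically at a single large n. The particular functions below just mirror the ones that usually appear in these charts; they are illustrative choices, not taken from the graph itself.

```python
import math

# Common complexity functions, listed slowest-growing first.
growth = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: math.log2(n)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n ** 2),
    ("2^n",     lambda n: 2 ** n),
]

n = 64  # large enough that the asymptotic ordering is already visible
values = [f(n) for _, f in growth]

# Each function is dominated by the next one up the list.
assert values == sorted(values)
```

For small n the curves can cross (constants matter there), which is why the check is done at a largish n.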

The graph doesn't show well how insignificant constants are asymptotically. There are other graphs that show it better; I put this one here because it shows many functions at once.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Eric Duminil
Solution 2: Glorfindel