Is there a difference between "in-place" and "space complexity O(1)", or do they mean the same thing?
Do "in place" and "space complexity O(1)" mean different things? If so, can someone explain the difference?
Solution 1:[1]
Space complexity of O(1) is a stronger requirement than in-place, because O(1) implies that the changes are done in place, but not the other way around.
You can make an in-place algorithm whose space complexity is above O(1). For example, the recursive re-heapifying step of Heapsort is in-place, but its recursive implementation without tail call optimization has O(log N) space complexity because of the call stack.
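A minimal sketch of this point, assuming a standard recursive max-heap sift-down (the function names here are illustrative, not from the answer): the array is rearranged in place with no auxiliary array, yet each recursive call adds a stack frame, so without tail-call optimization the extra space is O(log N), the height of the heap.

```python
def sift_down(a, i, n):
    """Recursively restore the max-heap property for the subtree rooted at i."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        # Swap in place -- no auxiliary array is used...
        a[i], a[largest] = a[largest], a[i]
        # ...but this recursive call consumes a stack frame,
        # and the recursion depth can reach log2(N).
        sift_down(a, largest, n)

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # build the heap bottom-up
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):      # repeatedly move the max to the end
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)
```

Rewriting `sift_down` with a loop instead of recursion would bring the extra space down to O(1) without changing the in-place character of the algorithm.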
Solution 2:[2]
I think Sergey's explanation is a bit confusing.
The term "in-place" is used for algorithms that don't require an additional data structure to transform the input. However, to be considered in-place, an algorithm may use at most O(log N) additional space for bookkeeping such as pointers, pivots, and stack frames. Any algorithm that requires more than O(log N) extra space is not considered in-place.
So O(1) space doesn't always mean an algorithm is in-place. For example, think of a method that returns a random element from an array: you generate a random index within the bounds of the input array, which requires O(1) space, and return the value at that index. But you are not transforming the input array, so this is not an in-place algorithm.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Sergey Kalinichenko |
| Solution 2 | codebusta |
