Let's say I get N objects as input, and I need to rearrange them into a different data structure. This means the space complexity of my algorithm will be O(N). But what if I replace the objects with pointers to the objects? I'll still have N pointers, but will it still be O(N) space complexity, even though clearly less memory is being used?
Let’s get philosophical for a minute.
Has the number of objects changed from N? No? Then it is still O(N), because you started with the premise that N was the number of input objects, and that is what you chose to measure against.
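To make that concrete, here is a minimal C++ sketch (`Widget` is a hypothetical type, and the byte counts assume typical 64-bit pointer sizes). Both containers hold exactly N slots, so both are O(N); only the constant factor per slot differs:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

struct Widget {            // hypothetical "heavy" object
    char payload[256];
};

int main() {
    const std::size_t N = 1000;

    std::vector<Widget>  by_value(N);   // N * sizeof(Widget)  bytes, ~256N
    std::vector<Widget*> by_pointer(N); // N * sizeof(Widget*) bytes, ~8N

    // Both containers hold exactly N slots; only the constant factor
    // (bytes per slot) changes, so both are O(N) in N.
    std::cout << "by_value:   " << by_value.size()   * sizeof(Widget)  << " bytes\n";
    std::cout << "by_pointer: " << by_pointer.size() * sizeof(Widget*) << " bytes\n";
}
```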
Now you could start with a different premise for a given data structure, say in a case where each individual data structure is itself a complex unit. Sometimes that might make sense.
Now for examples.
With regard to your original question, suppose you have a list of integers to sort. If you state that your space complexity is O(N), it doesn't matter whether you choose a 16-bit integer, a 64-bit integer, or a 32-bit pointer to an 8-bit integer from a "big-Oh" perspective: it's still O(N), because you're measuring the number of items to sort.
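A sketch of that idea (`sorted_copy` is a hypothetical helper, not a standard function): the element type changes the bytes per item, but the copy always holds exactly N elements, so the space is O(N) in every case:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Copying N items into a new container costs O(N) space regardless of
// element width; sizeof(T) only changes the constant factor.
template <typename T>
std::vector<T> sorted_copy(const std::vector<T>& in) {
    std::vector<T> out(in);             // exactly N slots, whatever sizeof(T) is
    std::sort(out.begin(), out.end());
    return out;
}

int main() {
    std::vector<std::int16_t> small = {3, 1, 2};  // 2 bytes per item
    std::vector<std::int64_t> big   = {3, 1, 2};  // 8 bytes per item

    auto s1 = sorted_copy(small);  // O(N) space either way:
    auto s2 = sorted_copy(big);    // both allocate exactly N elements
}
```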
But now suppose you wish to upgrade to support integers of any length rather than bounding the problem. Technically, your space complexity is still O(N). But once you start dealing with large numbers, variation in number size eventually forces variation in the size of the underlying data structure. Measuring with respect to N alone may no longer make sense, and you, as an intelligent programmer, decide to pick something different to measure and analyze. Maybe now you want to think of space complexity with respect to digits instead!
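As a sketch of what that re-framing looks like (using a decimal digit string as a stand-in for a real arbitrary-precision integer type), the footprint of each value grows with its digit count D, so "digits" becomes the meaningful quantity to measure:

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Each value's footprint grows with D, its number of digits,
    // not with N (the count of values) alone.
    std::vector<std::string> numbers = {
        "42",                              // D = 2 digits
        "123456789012345678901234567890"   // D = 30 digits
    };

    // Total space is O(sum of D_i), i.e. O(N * D_max) in the worst case,
    // so N by itself no longer tells the whole story.
    std::size_t total_digits = 0;
    for (const auto& n : numbers) total_digits += n.size();
    std::cout << "total digits stored: " << total_digits << "\n";
}
```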