Reading Scalar `new T` vs array `new T[1]`, the accepted answer suggests that `new T[n]` acts as follows (ignoring alignment and the case of n = 0):
- If n = 1 or T is trivially-destructible:
  - allocates `n * sizeof(T)` bytes
  - calls the ctor of `T` on each element of the allocated memory (like placement-new)
- If n > 1 (and T is not trivially-destructible):
  - allocates `sizeof(size_t) + n * sizeof(T)` bytes
  - sets the first `sizeof(size_t)` bytes to the value of n
  - calls the ctor of `T` on each of the n following sequences of `sizeof(T)` bytes (like placement new in a loop)
Can I rely on this? That is, if I emulate `new T[n]` in my own code as per the above, then pass the result to a `delete[]` – will that be well-defined and safe?
Related question: Can I dynamically generate a contiguous sequence of non-default-ctorable objects which can be deleted with delete[]?
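For concreteness, this is the sort of emulation I have in mind (a sketch only; `emulate_array_new` is my own hypothetical helper, and the cookie layout is just my reading of the list above):

```cpp
#include <cstddef>
#include <new>

struct T {              // stand-in type with a non-trivial destructor
    T() : x(0) {}
    ~T() {}
    int x;
};

// Hypothetical helper mirroring the n > 1 steps described above.
T* emulate_array_new(std::size_t n) {
    void* raw = ::operator new[](sizeof(std::size_t) + n * sizeof(T));
    *static_cast<std::size_t*>(raw) = n;               // the assumed "cookie"
    T* first = reinterpret_cast<T*>(
        static_cast<char*>(raw) + sizeof(std::size_t));
    for (std::size_t i = 0; i < n; ++i)                // placement-new in a loop
        ::new (static_cast<void*>(first + i)) T;
    return first;
}

int main() {
    T* p = emulate_array_new(5);
    delete[] p;   // <-- this pairing is exactly what I'm asking about
}
```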
> Can I rely on this? That is, if I emulate `new T[n]` in my own code as per the above, then pass the result to a `delete[]` – will that be well-defined and safe?
No, this is all unspecified. The only thing that you can know for sure is that `new T[n]` will call `operator new[]` with a size argument at least as large as `n * sizeof(T)`, and will return a pointer that has sufficient space left for the `T[n]` object and is suitably aligned for `T`. (In the case that `T` is `unsigned char`, `char`, or `std::byte`, there are additional alignment guarantees.) Everything else is an unspecified implementation detail.
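You can observe that size guarantee, and only that guarantee, by replacing the class's allocation function. A minimal probe (the type and wording are mine; the exact number printed is itself an unspecified detail):

```cpp
#include <cstdio>
#include <cstddef>
#include <new>

struct S {
    int x;
    ~S() {}   // non-trivial destructor: delete[] must recover the count somehow
    static void* operator new[](std::size_t bytes) {
        // The standard only promises bytes >= n * sizeof(S).
        std::printf("operator new[] asked for %zu bytes\n", bytes);
        return ::operator new(bytes);
    }
    static void operator delete[](void* p) { ::operator delete(p); }
};

int main() {
    std::printf("5 * sizeof(S) = %zu bytes\n", 5 * sizeof(S));
    S* p = new S[5];   // the printed request may exceed 5 * sizeof(S)
    delete[] p;
}
```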
If you are willing to rely on a specific ABI guarantee, then you may have more success, although it is technically still UB per the standard. For example, see the Itanium ABI's description of when and how array cookies are used. The procedure is not as you suggest: in particular, there is no cookie if `T` doesn't have a non-trivial destructor and the usual array deallocation function doesn't take two arguments. Also, a procedure to correct for alignment is applied, and it has no special case for a length of 1.
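For concreteness, here is my paraphrase of that ABI's cookie-size rule in code; the helper name is mine, and none of this is promised by the C++ standard:

```cpp
#include <algorithm>
#include <cstddef>

// Paraphrase of the Itanium C++ ABI rules (not standard C++): no cookie
// when T is trivially destructible and the usual one-argument
// operator delete[] is used; otherwise the count n is stored in the
// sizeof(size_t) bytes immediately before the first element, and the
// cookie as a whole is padded so that element stays suitably aligned.
constexpr std::size_t itanium_cookie_size(std::size_t align_of_T) {
    return std::max(sizeof(std::size_t), align_of_T);
}

static_assert(itanium_cookie_size(alignof(int)) == sizeof(std::size_t), "");
static_assert(itanium_cookie_size(16) == 16, "");  // over-aligned element type
```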
As a general principle, avoid making assumptions about how implementations work.
If you absolutely have to, and you don't have detailed specifications of the implementation, you're taking risks.
Even if you do have specifications, you're making your code non-portable (which may matter later and is always undesirable) and still risking implementation changes later.
In the case in hand, never pass to `delete[]` something you didn't receive from `new[]`, and do pass it to `delete[]` exactly once.
What will almost certainly not work is allocating a big array (say) and passing sections of it to `delete[]`. What may work (but is not guaranteed and not recommended) is allocating with `malloc()` or low-level O/S calls (outside the standard library) and passing that to `delete[]`.
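To illustrate those pairing rules (the commented-out lines show the forbidden patterns):

```cpp
#include <cstdlib>

int main() {
    int* a = new int[8];
    delete[] a;             // OK: one delete[] for one new[], exactly once

    int* b = static_cast<int*>(std::malloc(8 * sizeof(int)));
    // delete[] b;          // undefined behaviour: b did not come from new[]
    std::free(b);           // release it the way it was obtained

    int* big = new int[16];
    // delete[] (big + 8);  // undefined behaviour: a section, not the array
    delete[] big;
}
```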
You may want to implement a custom allocator, or need to allocate raw `char` memory (using `new[]` or `malloc()` or whatever), use placement `new` to construct objects in it, and call the destructor directly when you want to dispose of (or recycle) them, finally releasing that 'raw' memory according to how you obtained it. You can even play games allocating memory on the stack and allowing `return` to recover it.
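A minimal sketch of that lifecycle, using `std::string` as a stand-in element type:

```cpp
#include <cstddef>
#include <memory>
#include <new>
#include <string>

int main() {
    constexpr std::size_t n = 4;

    // Obtain raw storage; operator new returns memory aligned for any
    // ordinary type. malloc() would work too, paired with free() below.
    void* raw = ::operator new(n * sizeof(std::string));
    std::string* first = static_cast<std::string*>(raw);

    // Construct each object in place.
    for (std::size_t i = 0; i < n; ++i)
        ::new (static_cast<void*>(first + i)) std::string("element");

    // Destroy explicitly (in reverse order), then release the raw memory
    // the same way it was obtained; never via delete[].
    for (std::size_t i = n; i-- > 0; )
        std::destroy_at(first + i);
    ::operator delete(raw);
}
```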
Premature optimisation is the root of all evil and tuning memory management like this isn’t the top of the list. But it can be useful if your algorithm is allocating and deallocating large numbers of objects.
As a specific note, one cause of memory churn (lots of `new` and `delete`) can be avoided by good use of swap and move semantics. They and copy elision were pretty much invented to remove redundant `new`/copy/`delete` sequences and can result in huge improvements that are far better than optimising allocation.
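For example, here is a sketch of recycling one buffer by move rather than reallocating each round (the function is purely illustrative):

```cpp
#include <string>
#include <utility>
#include <vector>

// Recycle one buffer across calls instead of new/copy/delete each round.
std::vector<std::string> produce(std::vector<std::string>&& recycled) {
    recycled.clear();                  // keeps the allocated capacity
    recycled.push_back("work item");   // this round's data
    return std::move(recycled);        // handed back by move, not copy
}

int main() {
    std::vector<std::string> buf;
    for (int i = 0; i < 1000; ++i)
        buf = produce(std::move(buf)); // allocations stay flat after warm-up
}
```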
What you’re trying to do may be reasonable. But the way you’re trying to do it is not recommended and there are safer and more portable ways to do it.
Footnote: I've ignored zero-length arrays and alignment in this answer, as requested by the OP; however, they are both relevant to custom allocation logic if you go there.