I recently wrote a fairly complex C++ meta function that boils down to:
    template <size_t N, typename val>
    struct Rec {
        using type = typename std::conditional<N == 0,
                                               val,
                                               typename Rec<N - 1, val>::type>::type;
    };
Both Clang and G++ barf on this kind of recursion, stating that "template instantiation depth exceeds maximum of X". I quickly rewrote the program and fixed the problem, but it got me thinking about the evaluation strategy of C++ template parameters.
Is there anything in the C++ standard that would prevent templates from using call-by-need evaluation of template parameters, or is this limitation purely an implementation defect?
To instantiate std::conditional<N == 0, val, typename Rec<N - 1, val>::type>, the compiler needs to prove that both val and Rec<N - 1, val>::type evaluate to a type and that N == 0 evaluates to a constant expression that can be used in a boolean context. If those conditions are not met, the program is ill-formed and requires a diagnostic.
It is also only after evaluating the template arguments that the compiler can start looking for specializations that match, because for all the compiler knows, there might be a specialization of std::conditional for the parameters true, val, Rec<-1, val>::type that is more specialized than the default one for std::conditional<true, T, F>.
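That eager instantiation can be sidestepped in user code by choosing between two *uninstantiated* class types first, and only asking the winner for its nested ::type afterwards, so the branch that is not taken is never instantiated. A minimal sketch of that idiom (the identity helper is introduced here for illustration, it is not part of the question's code):

```cpp
#include <cstddef>
#include <type_traits>

// Helper that wraps a type so it can stand in as a "branch".
template <typename T>
struct identity { using type = T; };

template <std::size_t N, typename val>
struct Rec {
    // Select between the two class types. Naming Rec<N - 1, val> as a
    // template argument does not instantiate it.
    using chosen = typename std::conditional<N == 0,
                                             identity<val>,
                                             Rec<N - 1, val>>::type;
    // Only now is the chosen branch instantiated.
    using type = typename chosen::type;
};
```

With this formulation, Rec<0, val> terminates without any terminating specialization, because the Rec<N - 1, val> branch is named but never instantiated when N == 0.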
This is also why you need a specialization of Rec to stop the recursion: the condition used in std::conditional is only checked after the recursion has run its course and hit a terminating condition.
The compiler would be perfectly happy if you added the specialization

    template <class val>
    struct Rec<-42, val> {
        typedef void type;
    };

and would give you the expected results (even if val != void), because size_t arithmetic wraps around, so counting down past zero eventually reaches the value that -42 converts to.
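In practice the recursion is usually stopped at zero instead. A self-contained sketch along the lines of the question's code, dropping std::conditional entirely since the terminating specialization itself does the selection:

```cpp
#include <cstddef>
#include <type_traits>

// Primary template: keep recursing.
template <std::size_t N, typename val>
struct Rec {
    using type = typename Rec<N - 1, val>::type;
};

// Terminating specialization: matched when the counter reaches zero,
// so Rec<-1, val> is never instantiated.
template <typename val>
struct Rec<0, val> {
    using type = val;
};
```

Here partial ordering picks Rec<0, val> before the primary template ever asks for Rec<N - 1, val>::type, which is exactly the deferral the question was after.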