I am about to make two assumptions. Please correct me if they’re wrong:
- There isn’t a recursive algorithm without an iterative equivalent.
- Iteration is always cheaper performance-wise than recursion (at
least in general purpose languages such as Java, C++, Python etc.).
If it’s true that recursion is always more costly than iteration, and that it can always be replaced with an iterative algorithm (in languages that allow it), then I think the two remaining reasons to use recursion are elegance and readability.
Some algorithms are expressed more elegantly with recursion, e.g. traversing a binary tree.
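For instance, an in-order traversal reads almost like the definition of the tree itself. A minimal Python sketch (the `Node` class here is made up for illustration):

```python
# A hypothetical binary tree node, for illustration only.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def in_order(node):
    """Recursive in-order traversal: left subtree, node, right subtree."""
    if node is None:
        return
    yield from in_order(node.left)
    yield node.value
    yield from in_order(node.right)

tree = Node(2, Node(1), Node(3))
print(list(in_order(tree)))  # [1, 2, 3]
```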
However, apart from that, are there any reasons to use recursion over iteration? Does recursion have advantages over iteration other than (sometimes) elegance and readability?
5
Well, it’s usually less code.
And to the extent that it’s less code, it’s less error-prone.
In particular, recursion is very beneficial when the iterative solution requires that you simulate recursion with a stack. Recursion acknowledges that the compiler already manages a stack to accomplish precisely what you need. When you start managing your own, not only are you likely re-introducing the function call overhead you intended to avoid, but you’re re-inventing a wheel (with plenty of room for bugs) that already exists in a pretty bug-free form.
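To see what “simulating recursion with a stack” looks like, here is an iterative in-order traversal of a binary tree sketched in Python (the `Node` class is made up for illustration). The hand-managed list plays exactly the role the call stack plays in the recursive version:

```python
# A hypothetical binary tree node, for illustration only.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def in_order_iterative(root):
    """In-order traversal with an explicit, hand-managed stack."""
    result, stack, node = [], [], root
    while stack or node is not None:
        while node is not None:      # descend left, saving the path
            stack.append(node)
            node = node.left
        node = stack.pop()           # visit the deepest unvisited node
        result.append(node.value)
        node = node.right            # then descend into the right subtree
    return result

tree = Node(2, Node(1), Node(3))
print(in_order_iterative(tree))  # [1, 2, 3]
```

All the bookkeeping (what to come back to, and in what order) is now the programmer’s responsibility instead of the compiler’s.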
IMO, eschew recursion only when the job can be done naturally and easily without a stack (or other non-scalar state and/or scope-management structure).
4
Recursion uses the call stack to store return addresses; function state is preserved between calls.
Iteration must also use a stack or some similar mechanism to store intermediate states, except that you create the stack yourself. Unless, of course, you can find a substitute algorithm that doesn’t require such state storage.
Recursion is only more costly if you overflow the stack. In a sense, iteration is going to be more costly (in those algorithms that lend themselves to recursion), because you’re re-creating the state storage mechanism that recursion already provides.
13
First, while it is true that there always exists an iterative equivalent to any recursive algorithm, it is not necessarily the case that the iterative equivalent is better, for any reasonable definition of “better”.
For some algorithms, the iterative equivalent winds up just simulating the recursive algorithm, complete with simulated parameter and local variable stacking and unstacking. Consider Ackermann’s Function. Consider Huffman coding. In these cases, there is very little gained by (re)writing the explicit stack operations.
Second, it is not necessarily the case that recursion is always more expensive than iteration. Consider tail recursion, and read the “Lambda: The Ultimate…” papers.
(Yes, I know this needs expansion. I have to go to a doctor’s appointment right now.)
OK, there are some things that can be said.
Tony Hoare originally described Quicksort iteratively, and he reported that it was very difficult to explain. Explained recursively, it is SIMPLE. See Quicksort in Haskell for the details. If the array has length zero or one, you are finished; otherwise, you actually have to sort it. First, you pick a pivot element. Traditionally, this is the first element, but any element can be used.
Next, you take a pass over the array, to form three subarrays. The first is all elements less than the pivot, the second is all elements equal to the pivot, and the third is all elements greater than the pivot. (If the array element values are required to be unique, then the length of the second subarray is equal to one.) Now quicksort the first subarray, and then quicksort the third subarray.
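The description above translates almost word for word into code. A Python sketch (list-based, for exposition; it copies rather than sorting in place):

```python
def quicksort(xs):
    """Three-way quicksort, as described: partition around a pivot,
    then recursively sort the 'less' and 'greater' subarrays."""
    if len(xs) <= 1:              # length zero or one: already sorted
        return xs
    pivot = xs[0]                 # traditionally the first element
    less    = [x for x in xs if x < pivot]
    equal   = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```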
Binary search is easiest to understand when presented recursively, and it is in fact tail-recursive. You probe the middle element of the array. If it is equal to the value you are looking for, you are done. If the middle element is greater than the value you seek, then you know that the desired value must lie “to the left”, and you search the subarray before the middle element, otherwise you search the subarray after the middle element. In both cases, you know that the middle element is not the target, so you leave it out. You either find your target, or you run out of array to search. At that point, you bail all the way out, and you’re done. In the year 2014, never mind 2019, any self-respecting compiler knows how to do tail recursion optimization, even Java compilers. (Note: The classic Java virtual machine does not support general tail call optimization, for lack of a general GOTO operation, but that does not affect tail recursion optimization.)
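That description, sketched in Python (note that CPython itself does not perform tail-call optimization, so this version is for exposition; a compiler that does TCO turns the tail call into a jump):

```python
def binary_search(xs, target, lo=0, hi=None):
    """Tail-recursive binary search over a sorted list.
    Returns the index of target, or -1 if it is absent."""
    if hi is None:
        hi = len(xs)
    if lo >= hi:
        return -1                  # ran out of array: not found
    mid = (lo + hi) // 2
    if xs[mid] == target:
        return mid
    if xs[mid] > target:
        return binary_search(xs, target, lo, mid)        # search left half
    return binary_search(xs, target, mid + 1, hi)        # search right half

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Both recursive calls are in tail position: each one leaves out the middle element and is the last thing the function does, which is exactly what makes the optimization possible.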
The real problem, in a lot of places, is that the people running the show never learned about tail recursion optimization, and so they ban all recursion. I ran into this at Nortel Networks, and had to write a full page of comments explaining it and some related concepts AND show them the assembly language listings that proved the compiler was NOT actually generating recursive calls. Nortel is gone now, but those managers still exist in a lot of places.
Hope this helps.
1