The questions around recursion here made me wonder if Daml performs any tail call optimizations?
Short answer: yes.
Since around November 2020, with this PR:
We have some test cases here:
For implementation details see:
@Nick_Chapman This is awesome. I'll try to grok the Speedy Scala. But in the meantime, what about allocations that occur in tail position?
```
my_map : (a -> b) -> [a] -> [b]
my_map f [] = []
my_map f (h::t) = f h :: my_map f t
```
Unfortunately, `my_map` is not tail recursive, so no tail-call optimization will apply. The Speedy execution stack will grow to the length of the input list. Fortunately, the Speedy execution stack lives on the JVM heap, so there will be no JVM stack overflow. (Never overflowing the JVM stack during Daml execution is a property the Speedy engine aims to ensure!)
As far as I am aware, the only way to code a tail-recursive definition of list mapping is to make two passes over the list. Something like this (and assuming `reverse` is tail recursive):
```
myMapAcc : [b] -> (a -> b) -> [a] -> [b]
myMapAcc acc _ [] = reverse acc
myMapAcc acc f (h::t) = myMapAcc (f h :: acc) f t

myMap : (a -> b) -> [a] -> [b]
myMap f xs = myMapAcc [] f xs
```
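For reference, here is a quick Haskell transcription of the accumulator version above (the names mirror the Daml; the Haskell is my own sketch, not from the thread). The recursive call is in tail position, so a strict language can reuse the stack frame; the price is the second pass hidden in `reverse`:

```haskell
-- Haskell transcription of the accumulator-based map.
-- The recursive call is the last thing the function does, so
-- nothing is pending on the stack when it is made.
myMapAcc :: [b] -> (a -> b) -> [a] -> [b]
myMapAcc acc _ []    = reverse acc
myMapAcc acc f (h:t) = myMapAcc (f h : acc) f t

myMap :: (a -> b) -> [a] -> [b]
myMap f = myMapAcc []

main :: IO ()
main = print (myMap (+ 1) [1, 2, 3 :: Int])  -- [2,3,4]
```

The first pass builds the result in reverse order; `reverse` (itself tail recursive) restores the order in the second pass.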
Thank you!
Just for fun, you can also implement a tail-recursive `map` using `foldl` as follows:
```
map : (a -> b) -> [a] -> [b]
map f xs = reverse $ foldl (\t h -> f h :: t) [] xs
```
(This unpacks to the same as @Nick_Chapman's code, but is perhaps a little more efficient because `foldl` is a built-in.)
Or more point-free:
```
map : (a -> b) -> [a] -> [b]
map f = reverse . foldl (\t h -> f h :: t) []
```
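One porting caveat worth flagging (my aside, not from the thread): Daml's built-in `foldl` is strict, but Haskell's plain `foldl` is lazy and accumulates a chain of thunks, so the idiomatic Haskell equivalent is `Data.List.foldl'`:

```haskell
import Data.List (foldl')

-- Strict left fold: the closest Haskell analogue of Daml's built-in
-- foldl. Plain foldl would build one unevaluated thunk per element
-- instead of a plain list.
mapViaFoldl :: (a -> b) -> [a] -> [b]
mapViaFoldl f = reverse . foldl' (\t h -> f h : t) []

main :: IO ()
main = print (mapViaFoldl (* 2) [1, 2, 3 :: Int])  -- [2,4,6]
```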
For even more fun, and contrary to my earlier reply, it is possible to define a tail-recursive version of map that makes just a single pass over the list. And we can still use `foldl` if we want:
```
onePassMap : (a -> b) -> [a] -> [b]
onePassMap f xs = foldl (\t h -> t . (f h ::)) identity xs []
```
This is pretty silly, and if you squint you can still see a 2nd pass.
I see 2nd passes everywhere!
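The hidden second pass can be made explicit: the accumulator here is a difference list, i.e. a function from `[b]` to `[b]`, and applying the final composed function to `[]` unwinds one closure per element. A Haskell sketch of the same trick (Daml's `identity` becomes `id`; my transcription):

```haskell
-- Difference-list map: the fold builds a function [b] -> [b] by
-- composing one closure per element; applying it to [] at the end
-- walks that chain of closures, which is the "2nd pass" in disguise.
onePassMap :: (a -> b) -> [a] -> [b]
onePassMap f xs = foldl (\t h -> t . (f h :)) id xs []

main :: IO ()
main = print (onePassMap show [1, 2, 3 :: Int])  -- ["1","2","3"]
```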
It looks like your `my_map` function is tail recursive modulo cons, which, if I am not mistaken, would be tail-call optimized in Haskell.
I know, but who cares about Haskell, Daml forever!
Haskell, being lazy, has a different evaluation model from Daml, which, being strict, is more akin to OCaml or Scheme. I think the definition/implementation of tail call optimization is pretty clear for strict languages. But for lazy languages, I am not sure I appreciate all the details.
The Haskell Wiki (Tail recursion - HaskellWiki) says this:
"A recursive function is tail recursive if the final result of the recursive call is the final result of the function itself. If the result of the recursive call must be further processed (say, by adding 1 to it, or consing another element onto the beginning of it), it is not tail recursive"
which might suggest that tail recursion modulo cons isn't supported in Haskell.
However, the page concludes:
"In Haskell, the function call model is a little different, function calls might not use a new stack frame, so making a function tail-recursive typically isn't as big a deal -- being productive, via guarded recursion, is more usually a concern."
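Guarded recursion is easy to see with the naive map itself: the recursive call sits under the `(:)` constructor, so the cons cell is produced before the tail is ever forced, and a consumer can demand elements one at a time, even from an infinite list (my illustration, not from the wiki):

```haskell
-- Guarded recursion: the recursive call is *under* the (:) constructor,
-- so it is only evaluated when the consumer demands the tail. This is
-- why the naive definition is fine in Haskell despite not being
-- tail recursive in the strict-language sense.
guardedMap :: (a -> b) -> [a] -> [b]
guardedMap _ []    = []
guardedMap f (h:t) = f h : guardedMap f t

main :: IO ()
main = print (take 5 (guardedMap (* 2) [1 ..] :: [Int]))  -- [2,4,6,8,10]
```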
so… Any links appreciated!
At one point, I had read some documentation that convinced me that Haskell was tail recursive modulo cons. I actually embarked on a little project (an aborted blog post) to show the differences between tail recursion in Scala and Haskell. I never finished it because I was not able to get Haskell to "stack overflow" under any circumstances! Maybe I will try this exercise again if/when I am a little more familiar with the Haskell runtime…
Haskell has an evaluation stack rather than a call stack, and IIRC it defaults the stack size limit to half of your available RAM, so you have to try fairly hard.
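For what it's worth, in my experience the easiest way to get GHC into trouble is usually not call depth but thunk build-up, e.g. a lazy left fold over a large list; the strict `foldl'` sidesteps it. (A sketch of mine; the exact RTS stack-limit default depends on the GHC version.)

```haskell
import Data.List (foldl')

-- A lazy `foldl (+) 0` over this list builds a million-deep thunk
-- chain that is only collapsed at the end; with a small RTS stack
-- limit (e.g. +RTS -K1m) forcing it can overflow. foldl' forces the
-- accumulator at every step and runs in constant space.
main :: IO ()
main = print (foldl' (+) 0 [1 .. 1000000 :: Int])  -- 500000500000
```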