hinkley 5 days ago

Practically the day after I learned about tail recursion in CS class, I learned that almost all recursive calls can be translated to iteration, that in many cases the iterative version is easier to scan, is as fast if not faster, and that they can usually handle much much larger inputs than recursion due to avoiding stack overflow.

Tail recursion is meant to fix the latter. But what we mean to happen and what actually happens ain't ever exactly similar.

Tail recursion IME is a bigger foot gun than relying on someone to add a new conditional branch at the end of a block in an iterative algorithm without fucking it up in the process. And iteration generally responds better to Extract Function. And while I can think of counter cases easily enough, in the large, iteration is less work and less vigilance. And you cannot scale a project up without the vigilance requirement amortizing to basically 0 per line of code.

pdpi 5 days ago | parent | next [-]

> Tail recursion IME is a bigger foot gun

This is true for some languages, but not all.

E.g. Scala has the @tailrec annotation, which makes it a compile error for the annotated function to not be tail recursive. Clojure doesn't have tail call elimination, but has the `recur` special form for explicit recursion that is guaranteed to not consume any stack space. Rust has reserved the `become` keyword that will eventually guarantee tail call elimination (So pretty similar to Clojure's recur, but I think Rust's version will allow mutual recursion).

Zig goes the whole hog, and has `@call(modifier, fn, args)`, where `modifier` can be things like compile_time, always_tail, always_inline, never_tail, never_inline, and a bunch of other desirable guarantees you might want.
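
To make the Scala point concrete, here is a minimal sketch of the @tailrec contract (function names are made up for illustration):

    import scala.annotation.tailrec

    // Compiles: the recursive call is the very last thing the function does,
    // so the compiler turns it into a jump.
    @tailrec
    def sumTo(n: Int, acc: Long = 0L): Long =
      if (n == 0) acc else sumTo(n - 1, acc + n)

    // Rejected at compile time if annotated: the pending `n * ...` multiplication
    // keeps the recursive call out of tail position.
    // @tailrec
    // def fact(n: Int): Long =
    //   if (n == 0) 1L else n * fact(n - 1)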

aw1621107 5 days ago | parent | next [-]

> Rust has reserved the `become` keyword that will eventually guarantee tail call elimination (So pretty similar to Clojure's recur, but I think Rust's version will allow mutual recursion)

Support got added to nightly quite recently, in fact - on 2025-07-31 [0].

[0]: https://github.com/rust-lang/rust/pull/144232

LandR 4 days ago | parent | prev | next [-]

Clojure has mutual recursion via (trampoline ...)

https://clojuredocs.org/clojure.core/trampoline

phh 4 days ago | parent | prev [-]

> > Tail recursion IME is a bigger foot gun

> This is true for some languages, but not all.

A useless anecdote showing that tail recursion can be a foot gun even in Scala.

I did a (screening) job interview at Datadog. They asked for the "give the spare change back on that amount of money" exercise (simple variant), "in whichever language you want". I did my implementation in tail-recursive Scala (with the annotation). I ended up trying to explain that tail-recursivity doesn't explode in memory for the rest of the call (and failed).

pdpi 4 days ago | parent [-]

> I ended up trying to explain that tail-recursivity doesn't explode in memory for the rest of the call (and failed)

I count that as a successful interview – they sent an interviewer who doesn't understand how tail recursion enables tail call elimination, and is either unwilling or unable to listen when it is explained to them. Sounds like the company would be a bad fit for somebody whose go-to approach is to implement the solution that way.

hinkley 4 days ago | parent [-]

I've always struggled with not seeing this as 'sour grapes' on my part when I think like this. No matter how many friends and peers tell me I dodged a bullet. Even with my experiences with ignoring red flags. Rejection still sucks.

barmic12 3 days ago | parent [-]

This is not a rejection, it's a mismatch. Write code the way you want to read it, and if it's not OK for a company, it's just not the right company (like if you are a Scala programmer and they want only ASM coders). Be proud of your code.

tombert 5 days ago | parent | prev | next [-]

In my mind, the biggest advantage to using tail recursion over vanilla loops is the ability to keep using persistent data structures without any (apparent) mutation.

At least in theory, a tail recursive call will be converted into a dumb jump, so there shouldn't be a performance penalty, but since from your code's perspective you're passing in the stuff for the next iteration, you can keep using the pretty and easy-to-reason-about structures without creating any kind of mutable reference.

I'm not 100% sold on tail recursion in a broader sense, but at least with Clojure's loop/recur stuff it is kind of cool to be able to keep using persistent data structures across iterations of loops.

weitendorf 5 days ago | parent | next [-]

What makes tail recursion "special" is that there exists a semantically equivalent mutable/iterative implementation to something expressed logically as immutable/recursive. [0]

Of course, this means that the same implementation could also be directly expressed logically in a way that is mutable/iterative.

func pow(uint base, uint n): n == 0 ? return 1 : return n * pow(base, n-1)

is just

func pow(uint base, uint n): uint res = 1; for(i=0; i<n; i++){ res *= n} return res

There is no real "advantage" to, or reason to "sell" anybody on tail call recursion if you are able to easily and clearly represent both implementations, IMO. It is just a compiler/runtime optimization, which might make your code more "elegant" at the cost of obfuscating how it actually runs, plus new footguns from the possibility that code you think should use TCO actually doesn't (because not all immutable + recursive functions can use TCO, only certain kinds, and your runtime may not even implement TCO in all cases where it theoretically should).

As an aside, in C++ there is something very similar to TCO called copy-elision/return-value-optimization (RVO): [1]. As with TCO it is IMO not something to "buy into" or sell yourself on, it is just an optimization you can leverage when structuring your code in a way similar to what the article calls "continuation passing style". And just like TCO, RVO is neat but IMO slightly dangerous because it relies on implicit compiler/runtime optimizations that can be accidentally disabled or made non-applicable as code changes: if someone wanders in and makes small semantic changes to my code relying on RVO/TCO for performance they could silently break something important.

[0] EXCEPT in practice all implementation differences/optimizations introduce observable side effects that can otherwise impact program correctness or semantics. For example, a program could (perhaps implicitly) rely on the fact that it errors out due to stack overflow when recursing > X times, and so enabling TCO could cause the program to enter new/undesirable states; or a program could rely on a function F making X floating point operations taking at least Y cycles in at least Z microseconds, and not function properly when F takes less than Z microseconds after enabling vectorization. This is Hyrum's Law [2].

[1] https://en.wikipedia.org/wiki/Copy_elision#Return_value_opti...

[2] https://www.hyrumslaw.com/

nothrabannosir 5 days ago | parent | next [-]

> func pow(uint base, uint n): n == 0 ? return 1 : return n * pow(base, n-1)

Nitpick: that’s not tail recursive. Something like def pow(base, n, acc=1) = match n case 0 => acc; default => pow(base, n-1, acc*base)

pyrale 4 days ago | parent [-]

Given that `base` is never used, I suspect that's not the only issue :)
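
For reference, a hedged Scala sketch of what the accumulator version looks like once `base` is actually used (names are illustrative):

    import scala.annotation.tailrec

    @tailrec
    def pow(base: Long, n: Int, acc: Long = 1L): Long =
      if (n == 0) acc else pow(base, n - 1, acc * base)

    pow(2, 10)  // 1024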

eru 5 days ago | parent | prev | next [-]

> Of course, this means that the same implementation could also be directly expressed logically in a way that is mutable/iterative.

Yes, compilers exist.

> There is no real "advantage" to, or reason to "sell" anybody on tail call recursion if you are able to easily and clearly represent both implementations, IMO.

Avoiding mutation avoids headaches.

> [0] EXCEPT in practice all implementation differences/optimizations introduce observable side effects that can otherwise impact program correctness or semantics. For example, a program could (perhaps implicitly) rely on the fact that it errors out due to stack overflow when recursing > X times, and so enabling TCO could cause the program to enter new/undesirable states; or a program could rely on a function F making X floating point operations taking at least Y cycles in at least Z microseconds, and not function properly when F takes less than Z microseconds after enabling vectorization. This is Hyrum's Law [2].

These programs are likely not standards compliant. (And that's not just true for the C++ standard but for basically any language with a standard.)

> And just like TCO, RVO is neat but IMO slightly dangerous because it relies on implicit compiler/runtime optimizations that can be accidentally disabled or made non-applicable as code changes:

Who says TCO has to be always implicit? In eg Scheme it's explicit in the standard, and in other languages you can have annotations.

(Whether a call is in tail position is more or less a property you can ascertain syntactically, so your annotation doesn't even have to be understood by the compiler: the linter is good enough. That would catch your 'accidental changes' part.

Things get more complicated, when you have implicit clean-ups happen after returning from the function. Like calling destructors in C++. Then the function call is arguably not in a tail position anymore, and so TCO doesn't apply. Whether this is detectable reliably at compile time depends on the details of your language.)

ahartmetz 4 days ago | parent [-]

Avoiding mutation avoids headaches, but the real headaches are actions (mutations) at a distance, and tail recursion vs loops make no difference there.

eru 4 days ago | parent [-]

No mutation (and no side-effects in general) means no action at a distance.

Loops need mutation to work. The mutation might be benign or it might be headache-inducing. Without further analysis you don't know. If there's no mutation, no need for analysis. Lowering the cognitive load.

Well, replacing loops with tail calls is one tool to get rid of some mutations.

It's basically the same reasoning people give you for not using goto in your program: yes, there are ways to use gotos responsibly, and there are ways to end up with spaghetti code.

If you use goto, the reader has to analyse and figure out whether you made spaghetti (and the reader can never be quite sure she understood everything and didn't miss an important caveat). If you express yourself without goto, the need for that analysis largely goes away.

ahartmetz 3 days ago | parent [-]

I have a similar attitude about const in C++, I use it almost whenever possible. Less (possible) mutation to worry about. But I also let go of it fairly easily when it gets in the way. And... well... tail recursion doesn't feel very idiomatic in C++.

If you iterate by const reference over a const container, and you make every assign-once variable in the loop body const (or in Rust: just not mut), is there any advantage to tail recursion except someone on the internet said it's the proper functional style?

I think functional programming contains some great ideas to keep state under control, but there is no reason to ape certain superficial aspects. E.g. the praise of currying in Haskell tutorials really grinds my gears, I think it's a "clever" but not insightful idea and it really weirds function signatures.

eru 3 days ago | parent [-]

> If you iterate by const reference over a const container, and you make every assign-once variable in the loop body const (or in Rust: just not mut), is there any advantage to tail recursion except someone on the internet said it's the proper functional style?

Function calls can express all kinds of useful and interesting control flow. They are so useful that even people who love imperative programming use functions in their language. (Early and primitive imperative programming languages like very early Fortran and underpowered dialects of BASIC didn't have user defined functions.)

So we established that you want functions in your language anyway. Well, and once you properly optimise function calls, what's known as tail call optimisation, you notice that you don't need no special purpose loops (nor goto) built into your language. You can define these constructs as syntactic sugar over function calls. Just like you can define other combinators like map or filter or tree traversals.
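
As a rough illustration of "loops as sugar over function calls", here is a hedged Scala sketch of a user-defined while-loop built from nothing but a tail call (the name whileLoop is made up):

    import scala.annotation.tailrec

    // The "loop" is a function whose recursive call is in tail position,
    // which the compiler compiles down to a jump.
    @tailrec
    def whileLoop(cond: => Boolean)(body: => Unit): Unit =
      if (cond) { body; whileLoop(cond)(body) }

    var i = 0
    whileLoop(i < 5) { i += 1 }  // behaves like a built-in while-loop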

See how in the bad old days, Go had a handful of generic functions and data structures built-in (like arrays), but didn't allow users to define their own. But once you add the ability for users to define their own, you can remove the special case handling.

And that's also one thing C++ does well: as much as possible, it tries to put the user of the language on the same footing as the designers.

When 'map' or 'filter' are the best construct to express what you want to say, you should use them. When a 'for'-loop is the best construct, you should use it. (And that for-loop could be defined under the hood as syntactic sugar on top of function calls.) The scenario you concocted is exactly one where a foreach-loop shines.

Though to be a bit contrarian: depending on what your loop does, it might be useful to pick an ever more constrained tool. Eg if all you do is run one action for each item, with no early return and you are not constructing a value, you can use something like Rust's 'foreach' (https://docs.rs/foreach/latest/foreach/). If you transform a container into another container (and no early return etc), you can use 'map'. Etc.

The idea is to show the reader as much as possible what to expect without forcing them to dive deep into the logic. The transformation in a 'map' might be very complicated, but you know the shape of the result immediately from just spying that it's a 'map'.

When you see the for-loop version of the above, you have to wade through the (complicated) body of the loop just to convince yourself that there's no early return and that we are producing a new container with exactly the same shape as the input container.

> I think functional programming contains some great ideas to keep state under control, but there is no reason to ape certain superficial aspects. E.g. the praise of currying in Haskell tutorials really grinds my gears, I think it's a "clever" but not insightful idea and it really weirds function signatures.

Yes, that's mixing up two separate things. Haskell doesn't really need currying. All you need for Haskell to work as a language is a convenient way to do partial application. So if Haskell (like OCaml) used tuples as the standard way to pass multiple arguments, and you had a syntactically convenient way to transform the function (a, b, c) -> d into (b, c) -> d by fixing the first argument, that would get you virtually all of the benefits Haskell gets from pervasive currying without the weird function signatures.

In practice, people tend to get used to the weird function signatures pretty quickly, so there's not much pressure to change the system.

pklausler 2 days ago | parent [-]

Even the first release of FORTRAN had statement functions.

gleenn 5 days ago | parent | prev | next [-]

I would argue having the parameters that change during the loop be explicit is a very nice advantage. Agree that the things can be equivalent in terms of execution but the readability and explicitness, and the fact that all the parameters are listed in the same place is great.

weitendorf 5 days ago | parent [-]

Agreed. Some people really like FP a lot, and I think it's underrated that the kinds of functions where TCO is applicable tend to be so simple that they are not really that inelegant when expressed imperatively. My true opinion is that relying on TCO is usually choosing ideological adherence to FP (or "code that looks cooler") over reliability/performance/communication.

That said, just as I'd expect experienced C++ programmers to be able to recognize others' code using RVO and be careful not to restructure things to break it, I'd concede that experienced FP programmers might be unlikely to accidentally break others' TCO. It's just that ad absurdum you cannot expect every developer to be able to read every other developer's mind and recognize/work around all implicit behavior they encounter.

eru 5 days ago | parent [-]

> [...] and I think it's underrated that the kinds of functions where TCO is applicable tend to be so simple that they are not really that inelegant when expressed imperatively.

I suspect you are only thinking of patterns that are basically equivalent to a loop. I might agree with that.

TCO really shines when you want to implement state machines. See eg https://news.ycombinator.com/item?id=43076088 for an example where using tail calls in Python's interpreter loop gives nice performance benefits. Similarly for Lua.
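
A hedged Scala sketch of that state-machine shape (mutual tail calls). Scala's @tailrec only covers self-recursion, so the mutual calls here go through the standard-library TailCalls trampoline; in a language with guaranteed tail call elimination the two functions could simply call each other and still run in constant stack.

    import scala.util.control.TailCalls._

    // Two mutually recursive "states".
    def even(n: Int): TailRec[Boolean] =
      if (n == 0) done(true) else tailcall(odd(n - 1))

    def odd(n: Int): TailRec[Boolean] =
      if (n == 0) done(false) else tailcall(even(n - 1))

    even(1000000).result  // true, no stack overflow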

> [...] I'd concede that experienced FP programmers might be unlikely to accidentally break others' TCO.

Compiler (or linter) checkable annotations would help here. You are right that you want to make it possible for programmers to statically assert somehow that their function call is a tail call.

Btw this reminds me: recursion isn't just something you do in computation, but also in data structures (amongst other things). In eg Rust the members of your data structure are typically just laid out one after another, but when you have a recursive structure (and in certain other cases) you need to box it, otherwise you'd get an infinitely large data structure. Boxing is more or less equivalent to using indirection via a pointer.

However, unboxing isn't really like TCO. It's more like in-lining.

vlovich123 4 days ago | parent | prev | next [-]

RVO and URVO are slightly different from TCO in that the language guarantees that they are required to happen. You are correct though that small changes can accidentally turn it off unintentionally.

connicpu 4 days ago | parent | prev | next [-]

With Named RVO (I.e. you explicitly `return named_variable;`) copy-elision is actually guaranteed by the standard. I believe returning the return value of a function call is also guaranteed to not do a copy constructor. Anything else is compiler and optimization level dependent.

gpderetta 4 days ago | parent | prev [-]

To nitpick a bit, NRVO is an optimization as there is no guarantee that it will be performed, but RVO is now guaranteed (you can now return temporary non-copyable /non-movable objects for example).

ramchip 5 days ago | parent | prev | next [-]

I think you're confusing mutation and variable reassignment?

tombert 5 days ago | parent | next [-]

I'm saying that if I do a regular loop, something needs to explicitly mutate, either a reference or a value itself.

`for (var i = 0; i < n; i++)`, for example, requires that `i` mutate in order to keep track of the value so that the loop can eventually terminate. You could also do something like:

    var i = new Obj(); 
    while (!i.isDone()) {
        //do some stuff
        i = new Obj(); 
    }
There, the reference to `i` is being mutated.

For tail recursion, it's a little different. If I did something like Factorial:

    function factorial(n, acc) {
        if (n <= 1) return acc;
        return factorial(n - 1, acc * n);
    }
    
Doing this, there's no explicit mutation. It might be happening behind the scenes, but everything there from the code perspective is creating a new value.

In something like Clojure, you might do something like

    (defn vrange [n]
      (loop [i 0 v []]
        (if (< i n)
          (recur (inc i) (conj v i))
          v)))
(stolen from [1])

In this case, we're creating "new" structures on every iteration of the loop, so our code is "more pure", in that there's no mutation. It might be being mutated in the implementation but not from the author's perspective.

I don't think I'm confusing anything.

[1] https://clojure.org/reference/transients

eru 5 days ago | parent | next [-]

You could imagine a language like eg Python with a for-each loop that creates a new variable for each run through the loop body.

Basically, just pretend the loop body is a lambda that you call for each run through the loop.

It might make sense to think of the common loops as a special kind of combinator (like 'map' and 'filter', 'reduce' etc.) And just like you shouldn't write everything in terms of 'reduce', even though you perhaps could, you shouldn't write everything in terms of the common loops either.

Make up new combinators, when you need them.

For comparison, in Haskell we seldom use 'naked' recursion directly: we typically define a combinator and then use it.

That often makes sense, even if you only use the combinator once.

ramchip 4 days ago | parent | prev | next [-]

> There, the reference to `i` is being mutated.

That's rebinding. Mutation is when you change the state of an object. Variables are not objects. You can't have a reference (aka pointer) pointing to a variable.

mrkeen 4 days ago | parent | next [-]

I don't know if you're referring to a particular language's peculiarities, but it doesn't really matter. It's mutation.

tombert's point (I think) is that in a functional setting, factorial(n) is always factorial(n). In an imperative setting, first it's 1, then 2, then 6, then 24, etc.

Here's factorial calculated imperatively in C#.

  public async Task Factorial() {
      long primitiveResult = 1L;
      Wrapped<long> objectResult = new Wrapped<long>(1L);
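      // (Wrapped<T> is assumed here to be a trivial holder class exposing a Value field.)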

      var observer = new Task(ObserveVariable);
      observer.Start();

      for (var i = 2; i <= 15; i++) {
        primitiveResult *= i;
        objectResult = new Wrapped<long>(objectResult.Value * i);
        await Task.Delay(100);
      }

      observer.Wait();
      return;

      void ObserveVariable() {
        while (primitiveResult != 1307674368000 || objectResult.Value != 1307674368000) {
          Thread.Sleep(100);
        }
      }
  }
There are two results (one primitive and one object) to show that it doesn't matter. Maybe there's no such thing as a Reference-to-a-Variable in whatever language you have in mind, but in the above code ObserveVariable refers to a variable (while it mutates).

tombert 4 days ago | parent | prev | next [-]

I mean this is getting into splitting-hairs territory, but I would argue that rebinding is a form of mutation; the variable i is changing as far as the programmer is concerned.

If I were to write

    var i = new Obj(2)

    // do stuff 1
    
    i = new Obj(3)

    // do stuff 2
Then yes, that’s technically rebinding and not directly mutating the value in memory, but from “do stuff 2’s” perspective the value has changed. It still acts more or less the same as if it had directly changed.

1718627440 4 days ago | parent | prev [-]

> You can't have a reference (aka pointer) pointing to a variable.

What?

    int a;
    int * p = &a;

?

gpderetta 4 days ago | parent | prev [-]

  for(int i : some-container) {
     do-something-with(i);
  }
Where is the mutation?

mrkeen 4 days ago | parent [-]

It's implied by void-returning do-something-with(...).

Otherwise you're just burning cycles and a good compiler should dead-code eliminate the whole loop.

(see https://news.ycombinator.com/item?id=44873488)

gpderetta 4 days ago | parent [-]

We are discussing mutation in the loop itself, but sure:

  for(int x: container) {
      yield frob(x);
  }

mrkeen 5 days ago | parent | prev [-]

Nope. Loops are unnecessary unless you have mutation. If you don't mutate, there's no need to run the same code again: if the state of the world did not change during iteration 1, and you run the same code on it again, the state of the world won't change during iteration 2.

charcircuit 5 days ago | parent | prev [-]

>without creating any kind of mutable reference.

The parameter essentially becomes a mutable reference.

eru 5 days ago | parent | next [-]

No. There's no mutation happening.

Of course, a compiler might do whatever shenanigans it has to do. But that's an implementation detail and doesn't change how the language works.

(And let's not pretend that CPUs execute something that resembles an imperative language like C under the hood. That might have been true during the PDP11 or a VAX days. These days with out-of-order execution, pipelining etc what's actually happening doesn't really resemble one-after-another imperative code much.)

dgb23 5 days ago | parent [-]

I didn’t eat the sandwich, I just expressed its transformation into energy.

mrkeen 5 days ago | parent | prev | next [-]

If it does, then you would be able to express a race condition with it.

EDIT: I re-read parent comment.

>> the biggest advantage to using tail recursion over vanilla loops is the ability to keep using persistent data structures without any (apparent) mutation.

Yes

>> at least with Clojure's loop/recur stuff it is kind of cool to be able to keep using persistent data structures across iterations of loops

There's the implied mutation.

tombert 5 days ago | parent | prev [-]

No disagreement on that but that's an implementation detail and from the coder's perspective there's no mutable references.

charcircuit 5 days ago | parent | next [-]

From the coder's perspective it is. There's just different syntax for assigning to them.

tombert 5 days ago | parent [-]

No, from the coder's perspective it looks like I'm passing an argument into a function. I suppose you could say that arguments in general are "mutable references", and I guess that's not necessarily "untrue", but that's not generally how I visualize it.

If I pass a persistent structure into a tail call, it looks like I'm passing a copy into there, in the same way that I might if I did something like `Math.abs(myValue * 3)`; the compiler converts it to a mutable reference but I see it as the same as passing a value.

weitendorf 5 days ago | parent | prev [-]

From the coder's perspective, there are no mutable references only if the coder does not really rely on or care about the possibility that their code uses TCO. If they actively want TCO then they definitely care about the performance benefits they get from underlying mutability/memory reuse/frame elision.

tombert 5 days ago | parent [-]

Yeah but that's why I prefer Clojure's loop/recur thing. You're not allowed to have non-tail recursion there, so it's not implicit, so you really can pretend that there's no mutable references.

eru 5 days ago | parent | prev | next [-]

> Practically the day after I learned about tail recursion in CS class, I learned that almost all recursive calls can be translated to iteration, that in many cases the iterative version is easier to scan, is as fast if not faster, and that they can usually handle much much larger inputs than recursion due to avoiding stack overflow.

What do you mean by easier to scan? I find (most) loops [0] hard to read, because they typically involve mutable state.

When properly implemented, tail calls are as fast as gotos and loops, and don't blow up any stack. (Not all languages are implemented with a stack in any case.)

However you have one point:

Most of the time, we don't use recursion directly in our programs even in a language like Haskell or Scheme. Instead we define a 'combinator' that encapsulates the specific, limited recursion scheme that we want, and then use that one. This is very similar to how people generally don't use goto directly in their programs.

You might be familiar with the combinators 'map', 'filter' and perhaps 'reduce'/'foldr'. You could re-interpret the various common loop types as such recursion combinators that weaker languages have to bake into their language, because they are not strong enough to express them as a user-defined library. And indeed, Haskell has Control.Monad.Loops (https://hackage.haskell.org/package/monad-loops-0.4.3/docs/C...) which gives you exactly the common loop types as a library.

Some examples of less common combinators are eg 'unfold' or various combinators to traverse trees or graphs.
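
For instance, Scala 2.13+ ships `List.unfold`, which builds a list from a seed with no naked recursion; a rough sketch:

    // Generates 10, 9, ..., 1: the "loop" lives entirely inside the combinator.
    List.unfold(10)(n => if (n == 0) None else Some((n, n - 1)))
    // => List(10, 9, 8, 7, 6, 5, 4, 3, 2, 1)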

[0] The foreach-loop over some sequence is less headache inducing than eg a while-loop.

bjoli 5 days ago | parent | prev | next [-]

I am in the other camp. I prefer tail recursion and recursion over loops. However: For the simple cases it can and should probably be abstracted away like the racket for loops or my own goof-loop [1].

I just feel that a recursive call makes state much more clear, but then again I am no fan of mutation in general. In my old Python days I think a good 60% of my bugs were due to me being bad at managing state.

[1] https://rikspucko.koketteriet.se/bjoli/goof-loop

iamevn 5 days ago | parent [-]

I'm in the same boat, recursion tends to be easier for me to reason about because I'm expressing the problem in terms of some base case that incoming parameters are being reduced to rather than some condition that an iterative approach is working towards.

eru 5 days ago | parent [-]

I prefer recursion over loops. But even more I prefer abstracting away the recursion into combinators.

One of my favourite combinators is matrix multiplication. You define what 'addition' and 'multiplication' mean in your case, and all of a sudden you get an algorithm that computes shortest paths in a graph or does regex matching. See https://news.ycombinator.com/item?id=9751987

But for more bread and butter cases, there's 'map' over various data structures, and 'reduce' and traversals of trees etc.

aeonik 4 days ago | parent [-]

I'm forbidden from those links :(

eru 4 days ago | parent [-]

Oh, that's a shame. However, if you do a websearch for 'fun with semirings' you can find the relevant publication. Or you can use the wayback machine or so?

akdor1154 5 days ago | parent | prev | next [-]

There are cases where iteration is clearer, and there are cases where recursion is clearer.

It's well worth being familiar with both - if you learn how to shoehorn both approaches where they aren't ideal, your judgement on avoiding such practices will improve. :)

5 days ago | parent | prev | next [-]
[deleted]
zelphirkalt 5 days ago | parent | prev | next [-]

I have my doubts about any CS class/lecture that teaches that the "iterative version is easier to scan". Might just be the bias or inexperience of the lecturer. By now I find recursive code often easier to read than some for loop with its loop header and counter that I need to think about and update in my mind. And then the loop in many languages does not even have a return value, because it is not an expression but a statement. Meeehhh, very meehhh in many languages. Not all, but many.

I think maybe in languages like Ruby or Smalltalk a loop can be more readable, because of how they structure it as messages to objects, rather than special keywords in the language.

hinkley 5 days ago | parent [-]

> I have my doubts about any CS class/lecture that teaches that the "iterative version is easier to scan".

Well then you’re in luck because that was not an academic statement but a professional opinion - mine.

You can’t optimize the patterns you can’t see because they’re obscured by accidental complexity. And iteration is a frequent trick I use to surface deeper patterns.

bigDinosaur 5 days ago | parent [-]

Things like DFS add a lot of noise in the way of seeing the pattern IMO, but then again if you want explicit stack management and that's the pattern you want to see I suppose the iterative versions are clearer.

hinkley 4 days ago | parent [-]

I think this is in the same space as 'imperative shell, functional core'. Because what ends up happening is that in a tree or DAG you keep swapping back and forth between decision and action, and once the graph hits about 100 nodes only people who are very clever and very invested still understand it.

A big win I mentioned elsewhere involved splitting the search from the action, which resulted in a more Dynamic Programming solution that avoided the need for cache and dealing with complex reentrancy issues. I'm sure there's another name for this but I've always just called this Plan and Execute. Plans are an easy 'teach a man to fish' situation. Sit at one breakpoint and determine if the bug is in the execution, the scan, or the input data. You don't need to involve me unless you think you found a bug or a missing feature. And because you have the full task list at the beginning you can also make decisions about parallelism that are mostly independent from the data structures involved.

It's not so much enabling things that were impossible as enabling things that were improbable. Nobody was ever going to fix that code in its previous state. Now I had, and there were options for people to take it further if they chose.

CalChris 5 days ago | parent | prev | next [-]

I must have missed this class. How does one convert a recursive descent parser into an iterative one?

Jtsummers 5 days ago | parent | next [-]

Every recursive algorithm can be made into an iterative algorithm if you use an explicit stack instead of the implicit call stack. It's not always cleaner.

In tail recursive algorithms, there is no stack, it's just a straight loop.

  def foo(state, acc): # if acc is needed
    if base_condition(state): return acc
    return foo(next(state), f(state, acc))
Is simply:

  def foo(state):
    acc = initial
    while not base_condition(state):
      acc = f(state, acc)
      state = next(state)
    return acc
If it's not tail recursive you introduce a stack, for instance a DFS on a binary tree:

  def search(node, val):
    if node is None: return False # empty tree, only need this check once
    stack = [node]
    while stack:
      n = stack.pop()
      if n.val == val: return True
      if n.right: stack.append(n.right)
      if n.left:  stack.append(n.left)
    return False
Note the insertion order is reversed from the recursive calls in a typical DFS. We want left to be searched first and then its children and then we "recur" back to right and deal with its children, so we need to push right into the stack first and then left.

When you have multiple mutually recursive functions (as is likely the case in a recursive descent parser) then things become more complicated, but it's still feasible.

LegionMammal978 5 days ago | parent | next [-]

Sometimes the messy translation into an explicit stack and dispatch loop is necessary, if you want to pause the calculation, serialize the current state, and reconstitute it later. (E.g., if you want to add disk-saved checkpoints, a frequent hassle in some of my projects.) Coroutines can get you the equivalent of pausing and resuming for a recursive routine, but I'm not aware of any language that lets you serialize the call stack.

Jtsummers 5 days ago | parent | next [-]

> I'm not aware of any language that lets you serialize a whole call stack.

That's basically what continuations provide. Scheme, SML, and others provide them.

LegionMammal978 5 days ago | parent [-]

Continuations allow an inactive call stack to sit around in memory. But do any of those languages let you save a continuation to a file and resume it in a different execution of the program, without massive contortions to the code? That's what I mean by serialization.

kmicinski 5 days ago | parent [-]

Seems potentially interesting to explore what would be required to store durable continuations. Feels very related to incrementalization and provenance, as you can see that materializing a continuation to disk (whatever the storage backend) would require dependence tracking to do anything other than simply snapshotting the whole program state. I am just spitballing though, not sure if anyone has actually tried this.

tylerhou 5 days ago | parent | prev | next [-]

It is much easier and more maintainable to convert to continuation passing style. If you also use defunctionalization to allocate closures on a stack instead of a fresh heap allocation for every closure, you will achieve performance on par with an explicit stack. (In fact, defunctionalization is a mechanical transformation that produces exactly the data you would store in an explicit stack!)

Before I knew about CPS and defunctionalization, I wrote a Python decorator that did exactly the transformation you describe. https://github.com/tylerhou/fiber. Now that I know about CPS and defunctionalization, I realize that my work was not the best implementation (but it was a fun learning experience!).
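
For readers unfamiliar with the terms, here is a hedged Scala sketch of the idea: the defunctionalized continuations (SumRight/AddLeft, made-up names) are plain data, and they are exactly what an explicit stack would hold.

    import scala.annotation.tailrec

    sealed trait Tree
    case class Leaf(n: Int) extends Tree
    case class Node(l: Tree, r: Tree) extends Tree

    // "What to do next", represented as data instead of closures or call frames.
    sealed trait Kont
    case class SumRight(r: Tree) extends Kont  // after the left subtree: go sum the right subtree r
    case class AddLeft(sum: Int) extends Kont  // left sum is finished: add it to the right subtree's sum

    @tailrec
    def run(cur: Either[Tree, Int], stack: List[Kont]): Int = cur match {
      case Left(Leaf(n))    => run(Right(n), stack)
      case Left(Node(l, r)) => run(Left(l), SumRight(r) :: stack)
      case Right(v) => stack match {
        case Nil                 => v
        case SumRight(r) :: rest => run(Left(r), AddLeft(v) :: rest)
        case AddLeft(s) :: rest  => run(Right(s + v), rest)
      }
    }

    def sum(t: Tree): Int = run(Left(t), Nil)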

LegionMammal978 4 days ago | parent [-]

It's not just pausing and resuming that I'm looking for, but serialization into a form that can be saved to and loaded from files. (Another commenter calls this "materialization" of continuations.) In that regard, the problem with CPS is that continuations are totally opaque (or nearly so) in every language I'm aware of: you aren't able to do anything with them except resume them within the same execution of the program. Do any of your "defunctionalization" implementations offer such a feature?

tylerhou 4 days ago | parent [-]

Defunctionalization is a general technique that allows one to serialize functions as data. You’ve almost certainly used this technique before without realizing it was an instance of defunctionalization.

themk 5 days ago | parent | prev [-]

There is a library for Haskell that will do it. Though it doesn't support all GHC versions. It's very nifty if you need it.

https://github.com/jberthold/packman

spauldo 5 days ago | parent | prev | next [-]

If you have to use an explicit stack, you're still doing recursion. Useful if you're implementing the Ackermann function in FORTRAN 77, I suppose. Unless you're worried about running out of stack space, it's generally easier to reason about if you just use regular recursion.

That only applies for non-primitive recursive functions, though. Most people rarely encounter those. For primitive recursive functions, it's all down to the nature of the function and what's idiomatic for your language. If you're using Scheme, for example, recursion is the name of the game.

fellowniusmonk 5 days ago | parent | prev [-]

When TCO recursion was first developed it was very explicitly called out as a syntactically and structurally improved GOTO, but still fundamentally a GOTO that could take params.

Recursion isn't physically real, any book that teaches the abstraction before explaining either the call stack (for non-TCO recursion) or in the GOTO context is a book actively trying to gatekeep CS and weed out readers. (Not that any of those old Turbo Pascal/Borland books from the 90s actually shipped code that compiled.)

I had several people on HN of all places try to "teach me" recursion after this simple line inside a larger comment:

> It's like acting like recursion is physically real (it's not) when it smuggles in the call stack.

Recursion IS real abstractly. It's just not physically real, it was developed before we knew how DNA/RNA encoding handles the same issues in a more performant way.

SJC_Hacker 5 days ago | parent | next [-]

I don't see how it would be gatekeeping.

Recursive functions are a mathematical concept, like the "imaginary" number, or "transcendental" numbers. Or negative numbers for that matter.

Simple example: the Fibonacci sequence. FIB(1) = 1, FIB(2) = 1, FIB(N) = FIB(N-1) + FIB(N-2).

There's no programming language or "physical" implementation needed in order to calculate FIB(N) for arbitrary N. Pencil and paper will do for small numbers.

fellowniusmonk 4 days ago | parent [-]

Yes, you cannot hide the callstack when taught with pencil and paper.

But in computer programming it is often hidden.

And then you are misleading people about recursion and not helping them to build mental models that map to the physical/hardware process.

This isn't a big deal or anything, it's hardly worth talking about, but it is literally a true thing many don't realize, and it does empirically negatively affect education of the concept.

SJC_Hacker 4 days ago | parent [-]

> Yes, you cannot hide the callstack when taught with pencil and paper.

Recursive functions were mathematically defined well before the callstack or the von Neumann architecture was a thing. Gödel, Church and Kleene did a lot of work on them in the 1930s (and I believe even prior to that infinite series were defined recursively, although functional theory had not been worked out). This was before ENIAC (1945), which would be just barely recognizable as a general purpose computer and didn't have anything like a CALL instruction.

So I don't understand where this notion of "hiding the callstack" comes from. If one cannot understand recursion without invoking the callstack, well that's just them. But I don't see how it's absolutely necessary or somehow universal.

fellowniusmonk 4 days ago | parent [-]

I have no complaints about how recursion is taught in mathematics courses.

I'm being quite clear that I take issue with how the concept has been taught in many programming books and courses.

You can see the ignorance and confusion in this very thread.

Jtsummers 5 days ago | parent | prev | next [-]

> Recursion isn't physically real, any book that teaches the abstraction before explaining either the call stack (for non-TCO recursion) or in the GOTO context

Do you also believe that loops and functions should only be taught after the call stack and goto are taught? Neither of them are real either by your measure.

What a silly sentiment.

fellowniusmonk 5 days ago | parent [-]

Loops and functions can be physically represented as standalone things; they can be physically carved onto a mechanical surface and observed.

They don't smuggle anything in conceptually; their abstraction doesn't leave out anything critical to their structure. They are real and can be physicalized as standalone objects.

I see you've never tried to teach a software class to children or other learners. Historically, recursion is _very_ poorly taught by those who already understand the concept. But I'm not saying you have to care about that; a lot of people think there are too many programmers already.

achierius 5 days ago | parent | prev | next [-]

> It's just not physically real, it was developed before we knew how DNA/RNA encoding handles the same issues in a more performant way.

That was a sharp left turn -- how do you figure DNA/RNA are relevant here? I feel like iteration pre-dates our modern understanding of RNA in particular (though I could be mistaken) so I struggle to imagine how DNA/RNA were particularly informative in this regard.

ndriscoll 4 days ago | parent | prev [-]

For basic structural recursion on finite data structures, all you're doing is case analysis and substitution (i.e. "plugging in" values). How is that gatekeeping?

Math majors cover hundreds of algorithms per semester often using recursion without thinking much of it. For that matter, same with "higher order functions" (e.g. derivative operators, or even just translation/scaling of a graph). Even in introductory calculus, students cover things like fixed points (e.g. I remember discussing d^4/(dx)^4 sin = sin almost immediately after introducing derivatives).

Thinking in terms of physical machines is useful for reasoning about performance, but seems like a distraction for learning to think logically/correctly, which seems like a more important first step?

kylec 5 days ago | parent | prev | next [-]

You can do it by managing your own stack, but I would argue that doing so makes the algorithm LESS easy to scan and has its own sets of problems.

electroly 5 days ago | parent | prev | next [-]

Same way as any other algorithm: with an explicit heap-allocated stack. I have a naive parser where I built operator precedence into the grammar as nested productions instead of using an algorithm like shunting yard. An input of ((((((1)))))) blew the stack. Converted it to an explicit stack and it was fine; what was deep for the call stack was not deep for my own heap-allocated stack. Nasty code, though--I think this serves as one of the counterexamples to the idea that recursive code gets simpler when turned into iteration. The original OP was talking more specifically about tail recursion, which a recursive descent parser won't be.

bjoli 5 days ago | parent [-]

There are many languages out there that can grow the stack these days. If you have a language that can do tail recursion, it is really a very neat complement. In lisps it means being able to write a list-building function in a direct consing way, and still being faster than the TCO way.

pmg101 5 days ago | parent | prev | next [-]

https://news.ycombinator.com/item?id=44837949

jandrese 5 days ago | parent | prev [-]

By making your own stack and using loops.

rr808 5 days ago | parent | prev | next [-]

> IME is a bigger foot gun than

Pretty much true for any functional feature. Great in the classroom, less practical in the real world.

SJC_Hacker 5 days ago | parent [-]

Trying to be as functional as possible I think has great benefits in the "real world". Functions are very easy to unit test, for one.

Pure functional is difficult, but breaking out the parts of the program which can be pure functional is generally a smart thing to do

If I had the drive and ability to make a programming language from scratch, it would be hybrid imperative/functional, and "pure" functions (no side effects, EXCEPT possibly debug logging) would be clearly marked as such.

alternatex 4 days ago | parent [-]

Pure functional isn't too difficult once you understand how to put everything that causes side effects on the side. You can write a domain layer in a purely functional manner and feed it data from non-pure sources. It's definitely a shift in thinking for people used to something like OOP, but once you make it, writing software functionally is not difficult. There's really not many ways to approach it, unlike OOP for which there are hundreds of books and approaches.

William_BB 4 days ago | parent [-]

I studied FP in college. I currently work on a large low latency C++ codebase. I honestly have no idea how I'd use pure functional concepts everywhere in this context. I'm also probably doing it wrong.

alternatex 4 days ago | parent [-]

I wouldn't be able to say without knowing the project, but just following the simple rule of not calling impure functions from pure functions will force your codebase into a state where all impure functions bubble up to the top layer of calling code. For web apps that's simple: it's just the entry point of an HTTP request. For desktop apps it might mean the event handlers for user actions.

Then you realize you are sort of forced to handle all exceptions from impure code at the top level, and after that all that remains is validated data running through pure functions that are easily testable.

At the top layer, your code will dip in and out of pure and impure functions. All possible exceptions coming from bad data and I/O issues are visible at that layer since those cannot reasonably occur in the pure code.

I admit it will be an odd sight at first, but the feeling of working with a domain safe from bad data makes it worth it.
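
A minimal sketch of that shape in Scala (all names are made up; the point is only where the I/O and the error handling live):

    // Pure core: no I/O, trivially unit-testable.
    def parseOrder(raw: String): Either[String, Int] =
      raw.trim.toIntOption.toRight(s"not a number: $raw")

    def total(items: List[Int]): Int = items.sum

    // Impure shell: reads input, reports bad data, prints the result.
    def main(args: Array[String]): Unit = {
      val lines = scala.io.Source.stdin.getLines().toList
      val (errors, values) = lines.map(parseOrder).partitionMap(identity)
      errors.foreach(Console.err.println)   // bad data surfaces here, at the edge
      println(s"total = ${total(values)}")  // the pure core did the actual work
    }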

javcasas 5 days ago | parent | prev | next [-]

Recursion deals with recursive data structures, and iteration deals with "plain old" data structures.

When you use one to process the other, you get into trouble. For example, you need to manage a stack to do iteration on your binary tree. Or you need to construct slices to recurse on your arrays.

It's all about ergonomics.

pdpi 5 days ago | parent | next [-]

> For example, you need to manage a stack to do iteration on your binary tree

Recursion around trees can get you into trouble with the stack. Consider:

    def traverse(node):
      if node is None: return  # guard for missing children
      do_work(node.payload)
      traverse(node.left)
      traverse(node.right)
The second recursive call to traverse is in tail position and is a candidate for elimination, but the first one isn't and will _always_ eat stack space. This is fine if you know what you're doing, but can bite you in the arse if you're expecting TCO to save you.
javcasas 4 days ago | parent [-]

TCO stands for "TAIL call optimization". It will obviously not save your arse if the call is not in the tail of the function, exactly the same as "python generators will not help you if you are doing C++".

eru 5 days ago | parent | prev | next [-]

What's a plain old data structure?

Linked lists are recursive, but you can use loops just fine to deal with them.

Similarly, it depends on what you do with your trees. If you want to go down a single path (eg to find an element in a search tree), loops aren't too bad.

And arrays don't necessarily ask for loops. Eg implementing quicksort or heap sort with loops is possible (and depending on your language, compiler and target machine might have performance benefits), but I'd hardly call it ergonomic.

Despite all my criticism you are hinting at a good point though: it's often useful to define your data structures together with combinators that work on them. So eg you want to define various tree traversals together with your tree. (Most people are familiar with the combinators 'map' and 'filter' for lists but that's a very small pool.)

Whether those combinators are implemented with recursion or loops is an implementation detail that the user of the combinator doesn't need to worry about too much.

dreamcompiler 5 days ago | parent | prev [-]

This is overly simplistic.

Is a binary tree recursive from the perspective of a type declaration? Yes.

Is it recursive from the point of view of search? Depends. If you're doing depth-first search, then you need a stack and a recursive algorithm seems natural. If you're doing breadth-first search, then you need a queue and the algorithm is less obviously recursive.

In either case the program could be expressed recursively or as an explicit loop. When a stack is needed, the underlying hardware always provides automatic stack management so recursion feels natural in that case.

When a queue is needed it's more of a tossup since hardware and programming languages don't generally provide automatic queue management, so you're going to need to manage that data structure manually anyway. In which case whether you choose to use recursion or not is more of a personal preference.
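
A hedged Scala sketch of the queue case, just to show it can still be written as a tail call with the queue as explicit state (the names are illustrative):

    import scala.annotation.tailrec
    import scala.collection.immutable.Queue

    case class Node(value: Int, children: List[Node])

    // Breadth-first search: the "loop" state (the frontier queue) is an argument.
    @tailrec
    def bfs(frontier: Queue[Node], target: Int): Boolean =
      frontier.dequeueOption match {
        case None            => false
        case Some((n, rest)) =>
          if (n.value == target) true
          else bfs(rest.enqueueAll(n.children), target)
      }

    def search(root: Node, target: Int): Boolean = bfs(Queue(root), target)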

eru 5 days ago | parent | next [-]

> When a stack is needed, the underlying hardware always provides automatic stack management so recursion feels natural in that case.

Not true at all. Eg RISC-V provides no stack management at all. But compilers and interpreters exist, so it doesn't make a difference to how your high level code 'feels'.

javcasas 5 days ago | parent | prev [-]

So a tree is easier to do recursing vs iterating in some cases, whereas in other cases recursion and iteration have roughly the same level of complexity.

Kinda shows that recursive data structures are usually better dealt with recursion. Worst case you end up with the same level of difficulty as iteration.

busterarm 5 days ago | parent [-]

My career is practically built on the fact that other people 99% of the time miss the simplest representation of the data/problem.

Just look at myriad ways that people implement something like checking win conditions in Rock, Paper, Scissors. Switch statements, conditional hell, etc.

...you can just do something as simple as stick Rock, Scissors, Paper into a circular list and compare to the next element. It's practically readable as plain English. No programming skill required.

If you need to code golf you can use modulo arithmetic, for a different kind of "simple", but then you lose readability.
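
A hedged sketch of the idea in Scala, with a next-move map standing in for the circular list (each move beats the move that follows it in the cycle):

    val next = Map("Rock" -> "Scissors", "Scissors" -> "Paper", "Paper" -> "Rock")

    def beats(a: String, b: String): Boolean = next(a) == b

    beats("Paper", "Rock")   // true
    beats("Rock", "Paper")   // false
    beats("Rock", "Rock")    // false (a draw)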

eru 5 days ago | parent [-]

You could have a big pattern-match statement for Rock, Scissors, Paper and it would still be readable (in a language that supports these things like eg Rust).

The circular list seems a bit too cute for me. Sure, it's possible, but seems like overkill.

busterarm 5 days ago | parent [-]

If you have to do a bunch of list manipulation yourself sure but most scripting languages make these things trivial.

Heck, you could easily write it using Terraform built-in functions.

eru 4 days ago | parent [-]

What do you mean by 'list' in this case? Python, for example, has a list data structure, but it's different from what C folks would naturally think of when you say 'list'.

Spivak 5 days ago | parent | prev [-]

I'm in your camp, recursive code is hard for the next reader, which might be you, to understand. It's a bit too magic looking for my taste. If you're looping it should look like a loop, if you're pushing onto a stack you should get to see the stack. It's also safe against modifications which might silently break the tail-call nature of it until you blow out the stack later. It also gives you much saner and more debuggable stack traces because all the state is in the current frame.

busterarm 5 days ago | parent [-]

I never quite understood this recursion is hard/magic sentiment. Maybe it's because I started my CS education when it was still taught out of math departments or because it started with actually using programming languages to do algebra. Then again, the FP Kool-Aid is practically the blood in my veins at this point I guess.

hinkley 5 days ago | parent | next [-]

I’m good at spatial thinking, so on paper I should have zero issues with recursive code. But I also mentor and troubleshoot a lot and deep recursive code blows everyone’s mind. Even, it turns out, often the people who wrote it. Though they will swear otherwise.

Self-recursive code takes on a fractal nature - at any point in the call stack you cannot determine the call depth. You are in a twisty little maze.

When you layer calls horizontally, different tasks at different depths, it’s dead obvious where you are in the calculation. If all you can manage though is iteration, you still have the local variables to tell you where you are.

eru 5 days ago | parent | next [-]

I spend half my career with Haskell, OCaml, Erlang and we never had these problems with recursive code. (Or rather, we never had these problems and blamed it on recursion. We had to deal with plenty of bad code, of course.)

In contrast, I remember plenty of problems stemming from mutation (and lack of static typing etc).

hinkley 4 days ago | parent [-]

Survivor’s bias. You’re in a weird sect of self selected programmers.

busterarm 5 days ago | parent | prev [-]

I certainly understand your perspective and I've seen what you're talking about, but I've just never run into problems with it personally...

and yeah, as others said, mutation is often the source of more problems.

Generally though I don't like hard "don't use X" rules. There's a time and place for everything -- it all comes down to what you're prioritizing at a given time. I try not to be dogmatic about anything.

hinkley 4 days ago | parent [-]

One of the big feats I accomplished on my last job was converting a Stygian nightmare of co-recursive code my “clever” coworker wrote into a tree scan and a single loop.

I fixed two bugs I knew about, four bugs nobody knew about, cut 20 seconds off of startup time per instance and 10 minutes off of deployment. And enough memory to run another process per box.

This is not even the first time on that job and it won’t be the last. In fact it was a theme. I landed half of the real improvements to response time and cluster size in about a quarter of the time investment, mostly by unraveling recursive bullshit inserted by two architectural astronauts. About ten percent was done by people following up on stuff I started. The rest was long slogs trying to do major architecture shifts while leaving the shape of the existing code alone.

To be clear: the iteration is not a super optimization weapon. It just clears the water enough that people can see what’s going on. This is to me the heart of “make the change easy”. You start refactoring for one problem and you find two more while you’re in there that were lost in the noise.

I’d say roughly half of the rest of my time was spent reworking code we ostensibly wrote for other teams that they balked at owning, in order to add features and convince them to take ownership of it. And almost half of that work was ironing out similar Rube Goldberg machines. How do you expect people to own code they can’t figure out how to maintain?

You won’t know if you haven’t tried, and “I haven’t seen this” is not a strong signal that you have. Keep an eye on the next coworker who talks like me and see what they’re up to.

javcasas 4 days ago | parent | prev [-]

I have seen this quite often. I blame it on the obsession with UML along with the limitations of UML. I.e. in UML everyone draws boxes that have arrows to other boxes, but no one draws boxes that have boxes inside them. Instead, you draw a box with an arrow going out and then back to itself, hence _it has to be a loop because the arrow does a loop_.

That's why React components are _weird and wrong_, SQL CTEs are _overly complicated_, and functional programming is _impossible to understand_. It's all boxes with other boxes inside them, instead of next to them, and many people can't understand that you can nest boxes many times.