fxj 4 hours ago

One thing to consider:

The (well-known) Sapir–Whorf hypothesis (if you don't know it, look it up) is often invoked for natural languages, but there's a pretty direct analogue for programming languages: the language you "think in" while solving a problem biases which abstractions and idioms you reach for first.

If you force an LLM to first solve a problem in a highly abstract language (Lisp, APL, Prolog) and only later translate that solution to C++ or Rust, you're effectively changing the intermediate representation the model works in. That IR has very different "affordances", e.g.

- Lisp pushes you toward recursive tree/list processing, higher‑order functions and macro‑like decomposition. (some well-known web frameworks were initially written in Lisp, Scheme, etc.)

- APL pushes you toward whole‑array transforms, point‑free pipelines and exploiting data parallelism. (banks are still using it because of its performance)

- Prolog pushes you toward facts/rules, constraint satisfaction, and backtracking search. (a very high level of abstraction, but one that might suit LLMs well)
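To make the "affordances" point concrete, here's my own toy sketch (not from any real LLM transcript): the same small task, summing the squares of the even numbers in a list, written in Python three ways, each mimicking one of the paradigms above:

```python
data = [1, 2, 3, 4, 5, 6]

# Lisp-flavored: recursion plus head/tail decomposition
def sum_sq_evens(xs):
    if not xs:
        return 0
    head, *tail = xs
    return (head * head if head % 2 == 0 else 0) + sum_sq_evens(tail)

# APL-flavored: build a boolean mask over the whole array,
# then combine it with an elementwise transform
mask = [x % 2 == 0 for x in data]
squares = [x * x for x in data]
apl_style = sum(s for s, m in zip(squares, mask) if m)

# Prolog-flavored: state facts first, then apply a rule over them
even_facts = {("even", n) for n in data if n % 2 == 0}
prolog_style = sum(n * n for tag, n in even_facts if tag == "even")

assert sum_sq_evens(data) == apl_style == prolog_style == 56
```

All three compute the same value, but the shape of the code, and what would be easy to extend next, is quite different in each.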

When you then translate that program into C++/Rust/Python, a lot of this bias leaks through. You often end up with:

- Rule engines, constraint solvers, or table‑driven dispatch code when the starting point was Prolog.

- Iterator/functor pipelines and EDSL‑like combinators when the starting point was Lisp.

- Data‑parallel kernels and "vectorized" loops when the starting point was APL.
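As a hedged sketch of that leakage (a made-up example, not output from any model), compare the imperative branching an LLM tends to write by default with the table-driven dispatch you often get after "thinking in Prolog" first; both functions below are hypothetical and behave identically:

```python
# Imperative default: logic lives in the control flow
def shipping_cost_imperative(region, weight_kg):
    if region == "EU":
        return 5.0 + 0.5 * weight_kg
    elif region == "US":
        return 7.0 + 0.4 * weight_kg
    else:
        return 12.0 + 0.8 * weight_kg

# Prolog-flavored leakage: rules as data, applied by a tiny generic lookup
RULES = {
    "EU": (5.0, 0.5),   # (base cost, cost per kg)
    "US": (7.0, 0.4),
}
DEFAULT_RULE = (12.0, 0.8)

def shipping_cost_table(region, weight_kg):
    base, per_kg = RULES.get(region, DEFAULT_RULE)
    return base + per_kg * weight_kg

for region in ("EU", "US", "JP"):
    assert shipping_cost_imperative(region, 3) == shipping_cost_table(region, 3)
```

The table-driven version separates the "facts" from the "engine", which is exactly the structure Prolog forces on you from the start.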

In principle, an LLM could generate those idioms directly in C++/Rust. In practice, however, models are heavily shaped by their training distribution and default prompts. If you just say "write in Rust", they tend to regress towards the most common patterns in the corpus (framework‑heavy, imperative, not very aggressively functional or data‑parallel), even when the language would support richer abstractions.

By inserting a "thinking" step in a different paradigm, you bias the search over solution space before you ever get to Rust/C++. That doesn’t magically make the code better, but it does change which regions of the design space the model explores.

The same would also be true for Python, which is already a multi-idiomatic language. So it might be a good idea to learn a portfolio of different languages and then tackle a problem in a deliberately chosen language, instead of automatically reaching for Python/Go/Rust for performance reasons.

Something to consider...

p.s. How would a problem be solved if the LLM had to write it first in Erlang? Would the result then automatically be distributed?

p.p.s. The GoF "design patterns" also come to mind; naming one explicitly might be a good hint to give the LLM.