_mu 5 days ago

I haven't worked in OCaml but I have worked a bit in F# and found it to be a pleasant experience.

One thing I am wondering about in the age of LLMs is if we should all take a harder look at functional languages again. My thought is that if FP languages like OCaml / Haskell / etc. let us compress a lot of information into a small amount of text, then that's better for the context window.

Possibly we might be able to put much denser programs into the model and one-shot larger changes than is achievable in languages like Java / C# / Ruby / etc?

jappgar 5 days ago | parent | next [-]

That was my optimistic take before I started working on a large Haskell code base.

Aside from the obvious problem that there's not enough FP in the training corpus, it seems like terser languages don't work all that well with LLMs.

My guess is that verbosity actually helps the generation self-correct... if it predicts some "bad" tokens it can pivot more easily and still produce working code.

sshine 5 days ago | parent | next [-]

> terser languages don't work all that well with LLMs

I’d believe that, but I haven’t tried enough yet. It seems to do quite well with jq; I wonder how it fares with APL.

When Claude generates Haskell code, I constantly want to reduce it. Doing that is a very mechanical process; I wonder if giving an agent a linter would give better results than offloading it all to the LLM.
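
For instance (made-up names, just to show the shape I keep seeing; this is exactly the sort of thing a linter like HLint flags):

    import qualified Data.Map as Map

    ages :: Map.Map String Int
    ages = Map.fromList [("ada", 36), ("alan", 41)]

    -- the shape Claude tends to produce
    lookupNextAge :: String -> Maybe Int
    lookupNextAge name =
      case Map.lookup name ages of
        Just age -> Just (age + 1)
        Nothing  -> Nothing

    -- the reduction I end up doing by hand
    lookupNextAge' :: String -> Maybe Int
    lookupNextAge' name = fmap (+ 1) (Map.lookup name ages)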

gylterud 4 days ago | parent | next [-]

I usually treat the LLM generated Haskell code as a first draft.

The power of Haskell in this case is the fearless refactoring the strong type system enables. So even if the code generated is not beautiful, it can sit there and do a job until the surrounding parts have taken shape, and then be refactored into something nice when I have a moment to spare.

willhslade 4 days ago | parent | prev | next [-]

APL is executed right to left, and LLMs... aren't.

Vosporos 4 days ago | parent | prev [-]

Can't you just run HLint on it?

yawaramin 5 days ago | parent | prev | next [-]

There's actually a significant difference between Haskell and OCaml here, so we can't lump them together. OCaml is a significantly simpler, and moderately more verbose, language than Haskell. That helps LLMs when they do codegen.

b_e_n_t_o_n 5 days ago | parent | prev [-]

This has been my experience as well. AI writes Go better than any other language, besides maybe HTML and JavaScript/Python.

byw 4 days ago | parent [-]

I wonder if it has more to do with the amount of training data than with the languages themselves.

gf000 5 days ago | parent | prev | next [-]

My completely non-objective experiment of writing a simple CLI game in C++ and Haskell shows that the lines of code were indeed fewer in the case of Haskell... but the number of words was roughly the same, meaning the Haskell code was just "wider" instead of "higher".

And I didn't even run this "experiment" against Java or another managed, more imperative language, which could have shed some weight by not dealing with manual memory management.

So I'm not sure how much truth there is to it - I think it differs based on the given program: some lend themselves better to an imperative style, others to a more functional one.

QuadmasterXLII 4 days ago | parent [-]

My experience is that width is faster than height to type - mostly from the lack of time spent indenting. This is _completely_ fixed by using a decent auto-formatter, but at least for me the bias towards width lingers on, because it took me years to notice that I needed one.

gf000 4 days ago | parent [-]

It may be faster to type - but does it matter? I have never been even close to being bottlenecked by typing speed. The only difference is whether I "buffer" between lines or between different segments within a single line (or possibly both).

Buttons840 4 days ago | parent | prev | next [-]

If LLMs get a little better at writing code, we might want to use really powerful type systems and effect systems to limit what they can do and ensure it is correct.

For instance, dependent types allow us to say something like "this function will return a sorted list", or even "this function will return a valid Sudoku solution", and these things will be checked at compile time--again, at compile time.

Combine this with an effect system and we can suddenly say things like "this function will return a valid Sudoku solution, and it will not access the network or filesystem", and then you let the LLM run wild. You don't even have to review the LLM output: if it produces code that compiles, you know it works, and you know it doesn't access the network or filesystem.
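
A rough sketch of the flavor in plain Haskell (made-up names; smart constructors and purity only approximate what dependent types and effect systems would give you):

    module Sudoku (Solution, checkSolution, solve) where

    -- Constructor not exported: the only way to obtain a Solution is through
    -- checkSolution, so "returns a Solution" means "returns a checked grid".
    newtype Solution = Solution [[Int]]

    -- stand-in for the real Sudoku rules
    isValid :: [[Int]] -> Bool
    isValid grid = length grid == 9 && all ((== 9) . length) grid

    checkSolution :: [[Int]] -> Maybe Solution
    checkSolution grid
      | isValid grid = Just (Solution grid)
      | otherwise    = Nothing

    -- A pure signature like this can't touch the network or filesystem at all,
    -- which is the effect-system half of the guarantee.
    solve :: [[Int]] -> Maybe Solution
    solve = checkSolution  -- placeholder "solver" for the sketch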

Of course, if LLMs get a lot better, they can probably just do all this in Python just as well, but if they only get a little better, then we might want to build better deterministic systems around the unreliable LLMs to make them reliable.

gylterud 4 days ago | parent [-]

The day when LLMs generate useful code with dependent types! That would be awesome!

gylterud 4 days ago | parent | prev | next [-]

I have found that Haskell has two good things going for it when it comes to LLM code generation. Both have to do with correctness.

The expressive type system catches a lot of mistakes, and the fact that they are compile errors which can be fed right into the LLM again means that incorrect code is caught early.

The second is property-based testing. With it I have had the LLM generate amazingly efficient, correct code by iteratively making it more and more efficient – running QuickCheck on each pass. The LLM is not super good at writing the tests, but if you add some yourself, you quickly root out any mistakes in the generated code.
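
A simplified, made-up example of the kind of property I add by hand:

    import Test.QuickCheck
    import Data.List (sort)

    -- pretend this is the LLM-generated function being iterated on
    mySort :: [Int] -> [Int]
    mySort = sort

    prop_sameLength :: [Int] -> Bool
    prop_sameLength xs = length (mySort xs) == length xs

    prop_matchesReference :: [Int] -> Bool
    prop_matchesReference xs = mySort xs == sort xs

    main :: IO ()
    main = do
      quickCheck prop_sameLength
      quickCheck prop_matchesReference

Each time the LLM "optimises" mySort, the same properties run again, so regressions show up immediately.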

akoboldfrying 4 days ago | parent [-]

Property-based testing is available in other languages. E.g., JS has fast-check, inspired by quickcheck.

gylterud 4 days ago | parent [-]

The way code is written in Haskell, with small, laser-focused functions and clearly defined, mockable side effects, lends itself very well to property-based testing.

This isn’t impossible to achieve in other languages, but I haven’t seen it used as prevalently elsewhere.

dkarl 5 days ago | parent | prev | next [-]

In Scala, I've had excellent luck using LLMs to speed up development when I'm using cats-effect, an effects library.

My experience in the past with something like cats-effect has been that there are straightforward things that aren't obvious, and if you haven't been using it recently, and maybe even if you've been using it but haven't solved a similar problem recently, you can get stuck trawling through the docs squinting at type signatures looking for what turns out to be, in hindsight, an elegant and simple solution. LLMs have vastly reduced this kind of friction. I just ask, "In cats-effect, how do I...?" and 80% of the time the answer gets me immediately unstuck. The other 20% of the time I provide clarifying context or ask a different LLM.

I haven't done enough maintenance coding yet to know if this will radically shift my view of the cost/benefit of functional programming with effects, but I'm very excited. Writing cats-effect code has always been satisfying and frustrating in equal measure, and so far, I'm getting the confidence and correctness with a fraction of the frustration.

I haven't unleashed Claude Code on any cats-effect code yet. I'm curious to see how well it will do.

omcnoe 4 days ago | parent | prev | next [-]

I think that functional languages do actually have some advantages when it comes to LLMs, but not due to terseness.

Rather, immutability/purity is a huge advantage because it plays better with the small context window of LLMs. An LLM then doesn't have to worry about side effects or mutable references to data outside the scope currently being considered.

sshine 5 days ago | parent | prev | next [-]

> My thought is that if FP languages like OCaml / Haskell / etc. let us compress a lot of information into a small amount of text, then that's better for the context window.

Claude Code’s Haskell style is very verbose: if-then-elsey, lots of nested case-ofs, do-blocks at multiple levels of indentation, very little naming of things at the top level.

Given a sample of a simple API client, and a request to do the same but for another API, it did very well.

I concluded that I just have more opinions about Haskell than about Java or Rust. If it doesn’t look nice, why even bother with Haskell?

I reckon you could seed it with style examples that take up very little context space. Also, remind it not to enable language pragmas per file when they’re already enabled in the .cabal file, and similar.
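
Something like this as a seed costs almost nothing in context (a made-up fragment, just to show the kind of style I’d nudge it towards):

    -- prefer top-level named helpers and guards over nested if/case
    classify :: Int -> String
    classify n
      | n < 0     = "negative"
      | n == 0    = "zero"
      | otherwise = "positive"

    -- prefer maybe/either combinators over matching on constructors
    describe :: Maybe Int -> String
    describe = maybe "missing" show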

esafak 5 days ago | parent | prev | next [-]

I think LLMs benefit from training examples, static typing, and an LSP implementation more than terseness.

nextos 5 days ago | parent [-]

Exactly. My experience building a system that generates Dafny and Liquid Haskell is that you can get much further than with a language that is limited to dynamic or simple static types.
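
For anyone who hasn't seen Liquid Haskell: the refinements live in ordinary annotations and are checked at compile time, roughly like this toy example (not from the system I built):

    -- the checker rejects any call site that can't prove the divisor is non-zero
    {-@ safeDiv :: Int -> {d:Int | d /= 0} -> Int @-}
    safeDiv :: Int -> Int -> Int
    safeDiv n d = n `div` d

    -- the checker verifies the result is non-negative
    {-@ abs' :: Int -> {v:Int | 0 <= v} @-}
    abs' :: Int -> Int
    abs' n
      | n < 0     = negate n
      | otherwise = n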

nukifw 5 days ago | parent | prev | next [-]

To be completely honest, I currently only use LLMs to assist me in writing documentation (and translating articles), but I know that other people are looking into it: https://anil.recoil.org/wiki?t=%23projects

d4mi3n 5 days ago | parent | prev | next [-]

I think this is putting the cart before the horse. Programs are generally harder to read than they are to write, so optimizing for concise output to benefit the tool at the potential expense of the human isn't a trade I'd personally make.

Granted, this may just be an argument for being more comfortable reading/writing code in a particular style, but even without the advantages of LLMs, adoption of functional paradigms and tools has been a struggle.

seprov 5 days ago | parent | prev [-]

Procedures can be much more concise in functional/ML syntax, but many things are not -- dependency injection in languages like C#, for example, can be much less verbose because of really excellent DI libraries and (arguably saner) instance constructor syntax.