jappgar 5 days ago

That was my optimistic take before I started working on a large Haskell code base.

Aside from the obvious problem that there's not enough FP in the training corpus, it seems like terser languages don't work all that well with LLMs.

My guess is that verbosity actually helps the generation self-correct... if it predicts some "bad" tokens it can pivot more easily and still produce working code.

sshine 5 days ago | parent | next [-]

> terser languages don't work all that well with LLMs

I’d believe that, but I haven’t tried enough yet. It seems to do quite well with jq. I wonder how it fares with APL.

When Claude generates Haskell code, I constantly want to reduce it. Doing that is a very mechanical process; I wonder if giving an agent a linter would give better results than offloading it all to the LLM.
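
For example (a made-up snippet, not from a real session, but representative of the kind of output I mean):

    data User = User { userName :: String }

    -- typical first-pass output: explicit lambda, redundant argument
    getNames :: [User] -> [String]
    getNames users = map (\u -> userName u) users

    -- after the mechanical reduction (eta reduce, drop the lambda)
    getNames' :: [User] -> [String]
    getNames' = map userName

The reduction changes nothing about behaviour, which is why a linter could propose it without involving the model at all.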

gylterud 4 days ago | parent | next [-]

I usually treat LLM-generated Haskell code as a first draft.

The power of Haskell in this case is the fearless refactoring the strong type system enables. So even if the code generated is not beautiful, it can sit there and do a job until the surrounding parts have taken shape, and then be refactored into something nice when I have a moment to spare.
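
A hypothetical example of what I mean (the names are invented): if I tighten a stringly-typed signature from the draft, the compiler walks me through the rest of the refactor.

    import Data.List (find)

    newtype UserId = UserId String deriving Eq

    data User = User { userId :: UserId, userName :: String }

    -- The draft took a bare String here. Changing it to UserId turns
    -- every stale call site into a compile error, so the refactor
    -- can't silently miss a spot.
    lookupUser :: UserId -> [User] -> Maybe User
    lookupUser uid = find ((== uid) . userId)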

willhslade 4 days ago | parent | prev | next [-]

APL is executed right to left, and LLMs... aren't.

Vosporos 4 days ago | parent | prev [-]

Can't you just run HLint on it?
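
Something like this (output paraphrased from memory; exact wording and severity vary by HLint version):

    $ hlint src/
    src/Users.hs:12:1: Warning: Eta reduce
    Found:
      getNames users = map userName users
    Perhaps:
      getNames = map userName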

yawaramin 5 days ago | parent | prev | next [-]

There's actually a significant difference between Haskell and OCaml here, so we can't lump them together. OCaml is a much simpler, and moderately more verbose, language than Haskell. That helps LLMs when they do codegen.

b_e_n_t_o_n 5 days ago | parent | prev [-]

This has been my experience as well. AI writes Go better than any other language, besides maybe HTML and JavaScript/Python.

byw 4 days ago | parent [-]

I wonder if it has more to do with the sheer amount of training data than with the languages themselves.