jappgar | 5 days ago
That was my optimistic take before I started working on a large Haskell codebase. Aside from the obvious problem that there's not enough FP in the training corpus, it seems like terser languages don't work all that well with LLMs. My guess is that verbosity actually helps the generation self-correct: if the model predicts some "bad" tokens, it has more room to pivot and still produce working code.
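To make "terse" concrete, here's a toy example of my own (not from the codebase). Both definitions compute the same thing, but in the point-free form nearly every token is load-bearing, while the spelled-out form leaves the model more redundancy to recover into.

    -- Terse, point-free style: each token carries a lot of weight.
    sumOfSquares :: [Int] -> Int
    sumOfSquares = sum . map (^ 2)

    -- Spelled-out equivalent: more tokens, more redundancy,
    -- more ways for a generation to still land on working code.
    sumOfSquares' :: [Int] -> Int
    sumOfSquares' xs = sum (map square xs)
      where
        square x = x * x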
sshine | 5 days ago
> terser languages don't work all that well with LLMs

I’d believe that, but I haven’t tried enough yet. It seems to do quite well with jq; I wonder how it fares with APL. When Claude generates Haskell code, I constantly want to reduce it. Doing that is a very mechanical process; I wonder if giving an agent a linter would give better results than offloading it all to the LLM.
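A sketch of how mechanical that reduction is (hypothetical function, but the rewrites are exactly the kind HLint suggests):

    -- The flavour of Haskell I tend to get back from Claude:
    doubleAll :: [Int] -> [Int]
    doubleAll xs = map (\x -> x * 2) xs

    -- After two standard HLint hints ("avoid lambda", "eta reduce"):
    doubleAll' :: [Int] -> [Int]
    doubleAll' = map (* 2)

An agent could apply hints like these in a loop without ever going back to the model.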
| ||||||||||||||||||||
yawaramin | 5 days ago
There's actually a significant difference between Haskell and OCaml here, so we can't lump them together: OCaml is a considerably simpler, and moderately more verbose, language than Haskell. Both properties help LLMs when they do codegen.
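One concrete axis of that difference (a toy example of mine): Haskell's typeclasses buy a lot of terseness that OCaml deliberately does without.

    -- Haskell: ad-hoc overloading via the Show typeclass, very compact.
    showAll :: Show a => [a] -> [String]
    showAll = map show

    -- OCaml has no typeclasses, so the same function takes the
    -- printer explicitly:
    --   let show_all show xs = List.map show xs
    -- Simpler semantics for a model to track, but more tokens on the page.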
b_e_n_t_o_n | 5 days ago
This has been my experience as well. AI writes Go better than any other language, with the possible exceptions of HTML and JavaScript/Python.
|