▲ eru 4 hours ago
I'm inclined to agree with you in principle, but there are far, far fewer Haskell examples in their training corpus than for JavaScript or Python.
▲ tikhonj 14 minutes ago
From what I've heard (and in my own very limited experiments), LLMs are much better at less popular languages than I would have expected. I've had good results with OCaml, and I've talked to people who've had good results with Haskell and even Unison. I've also seen multiple startups get some pretty impressive performance with Lean and Rocq.

My current theory is that as long as the LLM has sufficiently good baseline performance in a language, the scaffolding and tooling you can build around the pure code generation will have an outsize effect, and languages with expressive type systems have a pretty direct advantage there: types can constrain the model's output and give it immediate feedback, letting you iterate the LLM generation faster and at a higher level than you could otherwise.

I recently saw a paper[1] about using types to directly constrain LLM output. The paper used TypeScript, but it seems like the same approach would work well with other typed languages too. Approaches like that make generating typed code with LLMs even more promising.

Abstract:

> Language models (LMs) can generate code but cannot guarantee its correctness, often producing outputs that violate type safety, program invariants, or other semantic properties. Constrained decoding offers a solution by restricting generation to only produce programs that satisfy user-defined properties. However, existing methods are either limited to syntactic constraints or rely on brittle, ad hoc encodings of semantic properties over token sequences rather than program structure.

> We present ChopChop, the first programmable framework for constraining the output of LMs with respect to semantic properties. ChopChop introduces a principled way to construct constrained decoders based on analyzing the space of programs a prefix represents. It formulates this analysis as a realizability problem which is solved via coinduction, connecting token-level generation with structural reasoning over programs. We demonstrate ChopChop's generality by using it to enforce (1) equivalence to a reference program and (2) type safety. Across a range of models and tasks, ChopChop improves success rates while maintaining practical decoding latency.
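To make the scaffolding idea concrete, here's a minimal Haskell sketch of the generate/typecheck/retry loop I mean. It's not from the paper: llmComplete is a hypothetical stand-in for whatever LLM client you'd actually use, and GHC's -fno-code flag does the typechecking without producing a binary.

    import System.Exit (ExitCode (..))
    import System.Process (readProcessWithExitCode)

    -- Hypothetical LLM call: prompt in, candidate Haskell module out.
    -- Stubbed here; swap in a real client.
    llmComplete :: String -> IO String
    llmComplete prompt = pure ("-- candidate code for: " ++ prompt)

    -- Typecheck a candidate with GHC without generating code.
    -- Returns Nothing on success, or the compiler's error text.
    typecheck :: FilePath -> IO (Maybe String)
    typecheck path = do
      (code, _out, err) <- readProcessWithExitCode "ghc" ["-fno-code", path] ""
      pure $ case code of
        ExitSuccess   -> Nothing
        ExitFailure _ -> Just err

    -- Iterate: regenerate with the type errors appended to the prompt,
    -- up to a fixed budget of attempts.
    refine :: Int -> String -> IO (Either String String)
    refine 0 _ = pure (Left "out of attempts")
    refine n prompt = do
      candidate <- llmComplete prompt
      writeFile "Candidate.hs" candidate
      result <- typecheck "Candidate.hs"
      case result of
        Nothing   -> pure (Right candidate)
        Just errs -> refine (n - 1) (prompt ++ "\n\nFix these type errors:\n" ++ errs)

The point is that the type errors are structured, immediate feedback the loop can act on without a human in it; dynamically typed languages only give you that signal once tests run.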
▲ solomonb 4 hours ago
You are right that there is significantly more JavaScript in the training data, but I can say from experience that I'm a little shocked at how well Opus 4.5 has been for me writing Haskell. I'm fairly particular and I end up rewriting a lot of code for style reasons, but it can often one-shot an acceptable solution that is mostly in line with the rest of the code base.
▲ joelthelion an hour ago
For the little Haskell I've done with LLMs, I can tell you they're not bad at it. Honestly, Haskell on my own was a bit too hard for me to use for real projects. Now, with AI assistants, I think it could be a great pick.
▲ energy123 3 hours ago
True for now, but probably not a durable fact. Synthetic data pipelines should be mostly invariant to the programming language, as long as the output is correct. If anything, the additional static analysis makes Haskell more amenable to synthetic data generation.
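A minimal sketch of what that filtering step could look like, assuming a hypothetical Sample record for generated training examples and again using GHC's -fno-code to typecheck without compiling:

    import Control.Monad (filterM)
    import System.Exit (ExitCode (..))
    import System.Process (readProcessWithExitCode)

    -- Hypothetical candidate training example: a prompt plus the
    -- path to a model-generated solution module.
    data Sample = Sample { prompt :: String, solutionPath :: FilePath }

    -- Accept a sample only if GHC typechecks the solution.
    typechecks :: Sample -> IO Bool
    typechecks s = do
      (code, _, _) <- readProcessWithExitCode "ghc" ["-fno-code", solutionPath s] ""
      pure (code == ExitSuccess)

    -- Filter a batch of generated samples down to the well-typed ones.
    keepWellTyped :: [Sample] -> IO [Sample]
    keepWellTyped = filterM typechecks

The compiler acts as a free, language-specific correctness filter, which is exactly the property that makes statically typed languages attractive for synthetic data.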
▲ kstrauser 4 hours ago
And yet the models I've used have been great with Rust, which pales next to JavaScript (or Python or PHP or Perl or C or C++) in sheer volume of code.