tl2do · 2 hours ago
Inspired by this article, I tried reading some Forth tutorials. My question is whether concatenative languages are AI-coding friendly. Apart from training-data availability, there's also the question of whether LLMs can correctly follow long flows of concatenated operations. Any ideas?
crq-yml · an hour ago
They can produce idioms that resemble the flow of Forth code, but when asked to produce a working algorithm they get lost very quickly, because reading Forth requires a combination of reading "backwards" (push order) and forwards (execution order) to maintain context. At any point a real Forth program may inject a word into the stack flow that completely alters the meaning of the following words, so reading and debugging Forth are nearly the same thing: you have to walk through the execution step by step, unless you've intentionally built patterns that decouple context. And once you do that, you've effectively started designing your own syntax, which the LLM won't have training data for. I suggest Rosetta Code as a learning resource for Forth idioms.
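To make the backwards/forwards point concrete, here's a minimal standard-Forth sketch (the word names are mine, not from the parent). To see what `swap` is doing in the second definition, you have to simulate the stack left to right while also knowing each word's stack effect; the inline stack comments are exactly the step-by-step walk being described:

    : square ( n -- n*n ) dup * ;
    : sum-of-squares ( a b -- a*a+b*b )
      square   \ a b*b
      swap     \ b*b a
      square   \ b*b a*a
      + ;      \ a*a+b*b
    3 4 sum-of-squares .  \ prints 25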
alexwennerberg · an hour ago
I've experimented with it and, after some initial skepticism, I've been impressed: because a simple Forth is implemented relatively concisely, the model can get a "full" picture of the language by reading the implementation. The issue I've encountered is that it tends to treat Forth as a single specified programming language, using words that don't exist in the specific Forth at hand. With coaxing it can correct this, though. I may write something up, but I'm interested in whether anyone else has experimented with this -- culturally, Forth programmers tend to be very anti-AI (and I feel a bit 'icky' about my experiments).
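For what it's worth, the usual fix when a model (or a human) reaches for a word the local Forth lacks is to just define it. For example, `nip` is in the ANS core extension word set but missing from some minimal Forths, and it's a one-liner (example mine, not from the parent):

    : nip ( a b -- b ) swap drop ;
    1 2 nip .  \ prints 2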
adastra22 · an hour ago
Any concatenative program can be reduced to a rho type, and AIs are pretty good at combining properly typed abstractions.
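For readers unfamiliar with the term: "rho" here presumably refers to row-polymorphic stack types, where the row variable ρ stands for "the rest of the stack." A rough sketch of what such signatures look like (notation mine, not a standard from the parent):

    dup  : forall ρ a.   (ρ × a)         -> (ρ × a × a)
    swap : forall ρ a b. (ρ × a × b)     -> (ρ × b × a)
    +    : forall ρ.     (ρ × int × int) -> (ρ × int)

Under this reading, composing two words reduces to unifying the output row of one with the input row of the next: the kind of local, mechanical check that typed abstractions make tractable.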