▲ jaggederest | 2 hours ago
> We're not building a language for LLMs just yet.

Working on it, actually! I think it's a really interesting problem space: being efficient on tokens, readable by humans for review, strongly typed and static for reasoning purposes, and having extremely regular syntax. One of the biggest issues with symbols is that matching parentheses is relatively easy for a human, but the models struggle with it.

I expect a language like the one I'm playing with will mature enough over the next couple of years that models with a knowledge cutoff around 1/2027 will probably know how to program in it well enough for it to start being more viable. One of the things I plan to do is build evals so that I can validate the performance of various models on my as-yet only partially baked language. I'm also using only LLMs to build out the entire infrastructure, mostly to see if it's possible.
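(The kind of eval mentioned above could start very small. This is a hypothetical sketch, not the author's actual harness: one criterion scoring whether a model's generated code even balances its delimiters, which is exactly the failure mode described.)

```python
# Hypothetical single eval criterion: does a model completion balance
# its (), [], {} delimiters? A real eval suite would add many more checks.

def parens_balanced(source: str) -> bool:
    """Return True if all (), [], {} pairs nest and close correctly."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in source:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            # A closer must match the most recent unmatched opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # leftover openers also mean failure

# Score a batch of (made-up) model completions on this one criterion.
completions = ["(define (f x) (* x x))", "(define (g x) (* x x)"]
scores = [parens_balanced(c) for c in completions]
print(scores)  # [True, False]
```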
▲ quinnjh | 2 hours ago | parent
Do you expect the model to train on synthetic data, or do you expect to grow a userbase that will generate organic training data?

> One of the biggest issues with symbols is that, to a human, matching parentheses is relatively easy, but the models struggle with it.

Great point. I find it near trivial to close parens, but LLMs seem to struggle with the Lisps I've played with because of this counting issue, to the point where I've not been working with them as much. TypeScript and functional JS, as other commenters note, are usually smooth sailing.