▲ Show HN: An LLM-optimized programming language (github.com)
22 points by ImJasonH 4 hours ago | 8 comments
▲ kburman 6 minutes ago | parent | prev | next [-]

An LLM is optimized for its training data, not for newly built formats or abstractions. I don't understand why we keep building so-called "LLM-optimized" X or Y. It's the same story we've seen before with TOON.
▲ discrisknbisque 3 hours ago | parent | prev | next [-]

The Validation Locality piece is very interesting and really got my brain going. It would be cool to denote test conditions inline with definitions. It would get gross for a human, but could work for an LLM with consistent delimiters. Something like (pseudocode):

```
fn foo(name::"Bob"|genName(2)):
  if len(name) < 3
    Err("Name too short!")
```

Right off the bat I don't like that it relies on accurately remembering list indexes to keep track of tests (something you brought up), but it was fun to think about this and I'll continue to do so. To avoid the counting issue you could provide tools like "runTest(number)", "getTotalTests", etc.

One issue: the Loom spec link is broken.
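To make the idea concrete, here is a minimal Python sketch of that "tests declared next to the definition, driven by index-based tools" pattern. Everything here is invented for illustration (the `case` decorator, `run_test`, and `total_tests` are hypothetical names, not part of any Loom tooling):

```python
# Hypothetical sketch: test cases attached inline to a function
# definition, plus index-based tools an LLM could call instead of
# counting tests itself.

_REGISTRY = []  # (function, args, expected) triples, in registration order

def case(args, expect):
    """Attach one test case to the function it decorates."""
    def wrap(fn):
        _REGISTRY.append((fn, args, expect))
        return fn
    return wrap

@case(args=("Bob",), expect="ok")
@case(args=("Al",), expect="error")  # applied first (decorators run bottom-up)
def foo(name):
    if len(name) < 3:
        return "error"
    return "ok"

def run_test(number):
    """The 'runTest(number)' tool: run a single test by index."""
    fn, args, expect = _REGISTRY[number]
    return fn(*args) == expect

def total_tests():
    """The 'getTotalTests' tool."""
    return len(_REGISTRY)
```

The point of the two tool functions is exactly the counting issue above: the model never has to remember how many tests exist or which index means what, it just asks.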
▲ Mathnerd314 3 hours ago | parent | prev | next [-]

I get that this is essentially vibe coding a language, but it still seems lazy to me. He just asked the model, zero-shot, to design a language from nothing. You could at least feed it the Rosetta Code examples and ask it to identify design patterns for a new language.
▲ petesergeant 3 hours ago | parent | prev | next [-]

A language is LLM-optimized if there's a huge amount of high-quality prior art, and if the language tooling itself can help the LLM iterate and catch errors.
▲ rvz 2 hours ago | parent | prev [-]

> Humans don't have to read or write or understand it. The goal is to let an LLM express its intent as token-efficiently as possible.

Maybe in the future, humans won't have to verify the spelling, logic, or ground truth in programs either, because we'll all have given up and assumed that the LLM knows everything. /s

Sometimes, when I read these blogs from vibe-coders who have become completely complacent with LLM slop, I have to keep reminding others why regulations exist. Imagine if LLMs became fully autonomous pilots on commercial planes, or planes optimized for AI control, and the humans just boarded and flew for the vibes. Maybe call it "Vibe Airlines". Why didn't anyone think of that great idea? Also completely remove the human from the loop as well? Good idea, isn't it?