Mathnerd314 | 4 hours ago
I get that this is essentially vibe coding a language, but it still seems lazy to me. He just asked the language model, zero-shot, to design a language unprompted. You could at least feed it the Rosetta Code examples and ask it to identify design patterns for a new language.
forgotpwd16 | 30 minutes ago | parent
There's also the issue, which the author notes as well, that LLM-optimization quite often becomes mere token-minimization, when it shouldn't be just that.
Snacklive | 3 hours ago | parent
I was thinking the same. Maybe if he had tried to think instead of just asking the model. The premise is interesting: "We optimize languages for humans; maybe we can do something similar for LLMs." But then he just asks the model to do the thing instead of thinking about the problem himself. Instead of prompting "Hey, make this", a more granular, guided approach could've been better. For me this is a loss of potential on the topic, and an interesting read made boring pretty fast.