Jooror 3 hours ago
I wonder if my intuition here is correct; I would posit that "PL implementation" is a far more popular and well-explored field than it seems. How many toy/small/labor-of-love langs make it to Show HN? How many more simply don't? I've never personally caught the language implementation bug. I appreciate your perspective here.
3371 3 hours ago | parent
I totally agree, and I was fully aware of how commonly people build languages for fun when I replied. But I feel the rationale still stands: given the nature of LLMs, common boilerplate tasks are easy because the model can essentially just "decompress" them from its training data. But for a new language design, unless the language is almost identical to one the model has already captured, that "decompression" simply fails.