forgotpwd16 | 2 hours ago
>It does lend itself better to "tokenization" of a sort - if you want to construct operations from lots of smaller operations [...]

That's an educated assumption to make. But therein lies the issue with every LLM-"optimized" language, including the recent ones posted here aimed at minimizing tokens: they rest on assumptions, unvalidatable and unfalsifiable, about the kind of output LLMs synthesize/emit when that output is code (or any output, really).