rvz | 4 hours ago:
> Humans don't have to read, write, or understand it. The goal is to let an LLM express its intent as token-efficiently as possible.

Maybe in the future humans won't have to verify the spelling, logic, or ground truth of programs either, because we'll all have given up and assumed that the LLM knows everything. /s

Sometimes, reading these blogs from vibe-coders who have become completely complacent with LLM slop, I have to keep reminding people why regulations exist. Imagine if LLMs became fully autonomous pilots of commercial planes, or of planes optimized for AI control, and the humans just boarded and flew for the vibes. Maybe call it "Vibe Airlines". Why didn't anyone think of that great idea? Why not remove the human from the loop entirely while we're at it? Good idea, isn't it?
eadwu | 4 hours ago | parent:
There are multiple layers and implicit perspectives here that I think most people are deliberately omitting, whether as a play for engagement or something else.

The reason LLMs are still restricted to higher-level programming languages is that there are no guarantees of correctness - any guarantee has to be provided by a human - and it is already difficult for humans to review other humans' code. If there comes a time when LLMs can generate code - whether you call it slop or not - that carries a guarantee of correctness, then it is probably the right move to use a more token-efficient language, or at least a different abstraction than the ones built for human programmers.

Personally, I think in the coming years there will be a subset of programming that LLMs can perform while providing a guarantee of correctness - likely using other tools, such as Lean. One way to state this capability: an LLM should be able to obfuscate any program while proving the result equivalent to the original - which is a pretty decent guarantee.
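To make the "obfuscate with a guarantee" idea concrete, here is a minimal sketch of what such a machine-checked equivalence could look like in Lean 4. All names (`specSum`, `fastSum`) are illustrative, not from any real project: a readable "spec" definition, a more compact rewrite, and a proof that the two agree on every input.

```lean
-- Illustrative sketch: a human-readable spec of list summation
-- and a more compact accumulator-based rewrite, with a proof
-- that they compute the same value for every list.

def specSum : List Nat → Nat
  | []      => 0
  | x :: xs => x + specSum xs

def fastSum (l : List Nat) : Nat :=
  l.foldl (· + ·) 0

-- Helper lemma, generalized over the accumulator so the
-- induction hypothesis is strong enough for the cons case.
theorem foldl_add_eq (a : Nat) (l : List Nat) :
    l.foldl (· + ·) a = a + specSum l := by
  induction l generalizing a with
  | nil => simp [specSum]
  | cons x xs ih => simp [List.foldl, specSum, ih, Nat.add_assoc]

-- The equivalence guarantee: the rewrite matches the spec.
theorem fastSum_eq_specSum (l : List Nat) : fastSum l = specSum l := by
  simp [fastSum, foldl_add_eq]
```

If a toolchain could emit a certificate like `fastSum_eq_specSum` alongside every transformed program, a human reviewer would only need to trust the spec and the proof checker, not read the generated code itself.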