skydhash (a day ago):
I don’t, because I trust the process that produces the artifacts. Why? Because it’s easy to replicate and verify, just like a proof in math. You can’t verify an LLM’s output. And thus, any form of trust is faith, not rational logic.
josephg (13 hours ago):
> You can’t verify an LLM’s output. And thus, any form of trust is faith, not rational logic.

Well, you can verify an LLM's output in all sorts of ways. But even if you couldn't, it's still very rational to be judicious with how you use your time and attention.

If I spent a few hours going through the ffmpeg documentation, I could probably learn it better than chatgpt. But it's a judgement call whether it's better to spend 5 minutes getting chatgpt to generate an ffmpeg command (with some error rate) or 2 hours doing it myself (with maybe a lower error rate). Which is the better use of my time depends on lots of factors: how much I care, how important it is, how often that knowledge will be useful in the future, and so on.

If I worked in a Hollywood production studio, I'd probably spend the 2 hours (and many more). But if I only reach for ffmpeg once a year, the small error rate in chatgpt's invocations might be fine.

Your time and attention are incredibly limited resources. It's very rational to spend them sparingly.
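(To make that concrete, the sort of one-off command I mean is something like the below. A sketch only: the filenames are made up, and the right flags depend on what you're converting.)

    # Transcode a QuickTime clip to H.264 video + AAC audio in an MP4.
    # input.mov and output.mp4 are placeholder names.
    ffmpeg -i input.mov -c:v libx264 -crf 23 -c:a aac output.mp4

(And it's cheap to sanity-check: read the flags, play the result.)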
ben_w (a day ago):
I don't install 3rd party dependencies if I can avoid them. Why? Because although someone could have verified them, there's no guarantee that anybody actually did, and this difference has been exploited by attackers often enough to get its own name: a "supply-chain attack".

An LLM's output, by contrast, is short enough that I can* put in the effort to make sure it's not obviously malicious. Then I save the output as an artefact.

* and I do put in this effort, unless I'm deliberately experimenting with vibe coding to see what the SOTA is.
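(And "someone could have verified them" isn't hypothetical: checksums are routinely published, just rarely checked. A sketch, with a placeholder filename and hash; note sha256sum's check format requires two spaces between them:)

    # Compare a downloaded tarball against the project's published checksum.
    # libfoo-1.2.3.tar.gz and <published-sha256> are placeholders.
    echo "<published-sha256>  libfoo-1.2.3.tar.gz" | sha256sum --check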