vidarh | 6 days ago
We double-check human work too, in all kinds of contexts. A lot of my schooling involved teachers telling us over and over to check our work, because we can't even trust ourselves. (Heck, I had to double-check and fix typos in this comment.)
ModernMech | 6 days ago | parent
It's very weird to me when someone says "this tool lacks a common property that good tools have" and someone replies "humans also lack those properties!" As if that were responsive to a complaint about a tool. We use tools precisely because they work in ways that humans do not.

Through centuries of building and using tools, we as a society have learned what separates a good tool from a bad one. Good tools are reliable. They have a clear purpose and an ergonomic interface. They are straightforward to use and transparent in how they operate. LLMs are none of these things, and it doesn't matter that humans are none of these things either, if we are trying to use LLMs as tools.

The closest human invention resembling an LLM is a bureaucracy: a mindless automaton that stands in the way and leads you astray. At best, LLMs are poor tools and also poor human replacements, which is why it's so frustrating to me that we are so intent on replacing good tools and good humans with them.