ModernMech | 6 days ago
It’s very weird to me when someone says “this tool does not have this common property good tools have” and someone replies “humans also don’t have those properties!” — as if that were responsive to a complaint about a tool lacking a common property of tools. We use tools precisely because they work in ways that humans do not.

Through centuries of building and using tools, we as a society have learned what makes a tool good versus bad. Good tools are reliable. They have a clear purpose and an ergonomic user interface. They are straightforward to use and transparent in how they operate. LLMs are none of these things, and it doesn’t matter that humans are none of these things either, if we are trying to use LLMs as tools.

The closest human invention resembling an LLM is a bureaucracy: LLMs are not good tools, they are not good humans, they are mindless automatons that stand in the way and lead you astray. At best, LLMs are poor tools and also poor human replacements, which is why it’s so frustrating to me that we are so intent on replacing good tools and good humans with LLMs.
vidarh | 6 days ago | parent
The reason for making that observation is that we don’t have any other comparable tools, and it is more reasonable to benchmark LLMs against what humans are capable of: whether or not they are good approximations, we are trying to model human abilities. One day maybe they will exceed human abilities, but it’s unreasonable to expect early attempts (and these are still early attempts) to do things we don’t know how to achieve other than by putting all kinds of complex process on top of very flawed human thinking.