paulsutter 10 days ago
Generative models are bimodal: at certain tasks they are crazy terrible, and at certain tasks they are better than humans. The key is to recognize which is which. And much more important:

- LLMs can suddenly become more competent when you give them the right tools, just like humans. Ever try to drive a nail without a hammer?
- Models with spatial and physical awareness are coming and will dramatically broaden what's possible.

It's easy to get stuck on what LLMs are bad at. The art is to apply an LLM's strengths to your specific problem, often by augmenting the LLM with the right custom tools written in regular code.
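(A minimal sketch of what "custom tools written in regular code" can mean in practice. The tool names, the dispatch shape, and the `{"name": ..., "args": ...}` call format are all hypothetical, not any particular vendor's API; real tool-use APIs differ in details but follow the same idea: the model only chooses a tool and arguments, and deterministic code does the exact work.)

```python
# Hypothetical tool-augmentation sketch: the LLM emits a tool call like
# {"name": "add", "args": {"a": ..., "b": ...}}, and ordinary code runs it.
# This hands off tasks LLMs are unreliable at (exact arithmetic, exact
# counting) to deterministic functions.

def add(a: float, b: float) -> float:
    """Exact addition -- long-number arithmetic is a classic LLM weak spot."""
    return a + b

def count_chars(text: str, ch: str) -> int:
    """Exact character counting -- another task models often get wrong."""
    return text.count(ch)

# Registry of tools the model is allowed to call.
TOOLS = {"add": add, "count_chars": count_chars}

def dispatch(tool_call: dict):
    """Execute a model-emitted tool call against the registry."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["args"])

# A model that can't reliably add 12-digit numbers can still emit this call:
print(dispatch({"name": "add", "args": {"a": 123456789012, "b": 987654321098}}))
# → 1111111110110
```

The design point is the division of labor: the model does the fuzzy part (deciding which tool fits), and regular code does the precise part.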
necovek 9 days ago
> Ever try to drive a nail without a hammer?

I've driven a nail with a rock, a pair of pliers, a wrench, even with a concrete wall, and who knows what else! Nobody had to tell me whether these could drive a nail: I looked at what was available, checked for a flat surface and a good grip, considered hardness, and simply used them.

So if we only give LLMs the "right" tools, they'll remain limited by what we think to provide, while still appearing to know how to do jobs they actually can't. That's exactly the problem: they "pretend" to know how to drive a nail, but not really.