jhanschoo a day ago
Let's make this more concrete than talk of "understanding knowledge". Often I want to know something that cannot feasibly be arrived at by reasoning, only empirically. Staying within the language domain, LLMs become far more useful when they can search the web for news, or search your codebase to learn how it is organized. Similarly, to answer empirical questions about the physical world, you need a robot that can interact with it and reason from newly collected empirical data, if that work has not already been done.
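The codebase-search case can be sketched minimally: instead of asking a model to guess how a repo is organized, retrieve matching lines empirically and fold them into the prompt. This is a toy illustration, not any particular tool's API; the in-memory `files` dict, `search_codebase`, and `build_prompt` are all hypothetical names.

```python
def search_codebase(files: dict[str, str], query: str, limit: int = 5):
    """Toy stand-in for a real search tool: return (path, line_no, line)
    hits for `query`, case-insensitively, over an in-memory repo."""
    hits = []
    for path in sorted(files):
        for i, line in enumerate(files[path].splitlines(), 1):
            if query.lower() in line.lower():
                hits.append((path, i, line.strip()))
                if len(hits) >= limit:
                    return hits
    return hits

def build_prompt(question: str, hits):
    """Fold the empirical search results into the context an LLM would see."""
    context = "\n".join(f"{p}:{n}: {l}" for p, n, l in hits)
    return f"Context from the repo:\n{context}\n\nQuestion: {question}"

# Hypothetical repo contents for illustration.
repo = {
    "app/models.py": "class User:\n    pass\n",
    "app/views.py": "def get_user():\n    return User()\n",
}
print(build_prompt("Where is User defined?", search_codebase(repo, "user")))
```

The point of the sketch is the division of labor: the search step answers the empirical question (what is actually in the repo), and the model only has to reason over what was retrieved.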
skydhash a day ago
> LLMs get so much more useful when they can search the web for news, or your codebase to know how it is organized

But their usefulness is only surface-deep. The news that matters to you is deeply contextual; it isn't only what's labelled as breaking news or happening near you. The same goes for code organization: it reflects human nature (how we think and learn) more than machine optimization (the compiler usually doesn't care).
awesome_dude a day ago
I know the attributes of an apple, and I know the attributes of a pear. So does a computer. But only I can bite into one and know, without any doubt, what it is and how it feels emotionally.