▲ | mfalcon 3 days ago |
I think the natural language understanding capability of current LLMs is undervalued. Before LLMs, to understand what a user meant we had to train several NLP+ML models just to get something going, and in my experience we never got close to what LLMs do now. I remember the first time I tried ChatGPT and being surprised by how well it understood every input.
▲ | Zigurd 3 days ago | parent |
It's parsing. It's tokenizing. But it's a stretch to call it understanding. It builds a pattern it can use to compose a response, and ensuring the response is factual is not fundamental to LLM algorithms. In other words, it's not thinking. The fact that it can simulate a conversation between thinking humans without thinking is remarkable, and it should tell us something about the human facility for language. But it's not understanding or thinking.