▲ dijit 2 hours ago
Recognising a stock cultural script isn't the same as capturing intent. Ask it something where no script exists. For example: "A man thrusts past me violently and grabs the jacket I was holding; he jumps into a pool and ruins it. Am I morally right in suing him?"

There's no way for the LLM to know that the reason the jacket was taken was to use it as an inflatable raft to support a larger person who was drowning. It wouldn't even think to ask why a person might do that, whether the jacket was returned, or whether recompense was offered. A human would.
▲ ffsm8 an hour ago
> It wouldn't even think to ask why a person might do that, whether the jacket was returned, or whether recompense was offered. A human would.

I wouldn't be too sure about that. I've definitely had dialogues with LLMs where they raised questions along those lines.

I also disagree that this is a question about capability. Intent is more philosophical than actually tangible, because most people don't have a clearly defined intent when they take action.

The waters of intelligence have definitely gotten murkier over time as techniques have improved. I still consider it an illusion, but the illusion is getting harder to pierce for a lot of people.

FWIW, current LLMs exhibit their intelligence through language and rhetorical processes. Most biological creatures have intelligence that may be improved through language but isn't fundamentally based on it.
▲ atleastoptimal an hour ago
If your example of an exception to LLMs' ability to infer intent is a deliberately misleading trick question that leaves out crucial contextual details, then I'm not sure what you're trying to prove. The same ambiguity would trip up many humans, simply because you are trying as hard as possible to imply a certain conclusion.

As expected, if I ask your question verbatim, ChatGPT (the free version) responds as I'm sure a human would in the generally helpful customer-service role it is trained to play: "yeah, you could sue them, blah blah, depends on details."

However, if I add a simple prompt, "The following may be a trick question, so be sure to ascertain if there are any contextual details missing", then it picks up that this may have been an emergency, which is very likely also how a human would respond.
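A minimal sketch of that comparison for anyone who wants to reproduce it, using the OpenAI Python SDK; the model name and function are my own assumptions for illustration, since the comment above used the free ChatGPT web UI:

    # Sketch: compare the model's answer with and without the
    # "watch for missing context" instruction described above.
    # Assumes the openai package is installed and OPENAI_API_KEY is set;
    # the model name is illustrative, not what ChatGPT's free tier runs.
    from openai import OpenAI

    client = OpenAI()

    QUESTION = (
        "A man thrusts past me violently and grabs the jacket I was "
        "holding; he jumps into a pool and ruins it. "
        "Am I morally right in suing him?"
    )

    CAUTION = (
        "The following may be a trick question, so be sure to ascertain "
        "if there are any contextual details missing."
    )

    def ask(prompt: str) -> str:
        # Single-turn chat completion; no system prompt, mirroring a
        # fresh ChatGPT conversation as closely as the API allows.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print("--- verbatim question ---")
    print(ask(QUESTION))
    print("--- with caution prompt prepended ---")
    print(ask(f"{CAUTION}\n\n{QUESTION}"))

The only variable changed between the two calls is the prepended instruction, so any shift from "yes, you could sue" toward "was this an emergency?" is attributable to the prompt rather than sampling alone (though rerunning a few times would be needed to rule out noise).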