| ▲ | noosphr 7 hours ago |
LLMs are _still_ terrible at deriving even the simplest of logical
entailments. I've had the latest and greatest Claude and GPT derive 'B
instead of '(not B) from '(and A (not B)) when 'A and 'B are anything
but the simplest of English sentences. I shudder to think what they decide the correct interpretation of a spec written in prose is.
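(For reference, the entailment the commenter describes can be checked mechanically. A minimal Python sketch over a truth table; the `entails` helper and lambda encodings are illustrative, not from any library:)

```python
from itertools import product

def entails(premise, conclusion):
    # premise |= conclusion iff every truth assignment that
    # satisfies the premise also satisfies the conclusion
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if premise(a, b))

premise = lambda a, b: a and not b            # (and A (not B))
print(entails(premise, lambda a, b: not b))   # (not B): True
print(entails(premise, lambda a, b: b))       # B: False
```

Only the assignment A=true, B=false satisfies the premise, so '(not B) follows and 'B does not — the derivation the commenter says the models get backwards.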
| ▲ | layer8 24 minutes ago | parent | next |
Lisp quotes are confusing in prose.
| ▲ | Kiro 5 hours ago | parent | prev | next |
I would love to see a prompt where it fails such a thing. Do you have an example?
| ▲ | 0x696C6961 6 hours ago | parent | prev |
Still better than my coworkers ...