hnbad | 5 hours ago
I recall that early LLMs had the problem of not understanding the word "not", which became especially evident and problematic when they were tasked with summarizing text, because the summary would then sometimes directly contradict the original. It seems that problem hasn't really been "fixed", it's just been paved over. But I guess that's the ugly truth most people tend to forget/deny about LLMs: you can't "fix" them, because there's no line of code you can point to that causes a "bug"; you can only retrain them and hope the problem goes away. In LLMs, every bug is a "heisenbug" (or should that be a "murphybug", as in Murphy's Law?).