irthomasthomas 4 hours ago
Isn't this proof that LLMs still don't really generalize beyond their training data?
adastra22 2 hours ago
LLMs are very good at generalizing beyond their training (or context) data. Normally when they do this we call it hallucination. Except now we do a LOT of reinforcement learning afterwards that severely punishes this behavior, for subjective eternities, and then we act surprised when the resulting models are hesitant to venture outside their training data.
Zambyte 2 hours ago
I wonder how they would behave given a system prompt that asserts "dogs may have more or less than four legs".
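A minimal sketch of what that experiment might look like, assuming the OpenAI Python SDK; the model name is just a placeholder:

    # Compare the answer with and without the system line to see
    # whether the model will follow a claim that contradicts its training data.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": "Dogs may have more or less than four legs."},
            {"role": "user", "content": "How many legs does a typical dog have?"},
        ],
    )
    print(response.choices[0].message.content)

The interesting part would be running the same user question with and without that system line and seeing which source the model defers to.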
CamperBob2 3 hours ago
They do, but we call it "hallucination" when that happens.
Rover222 3 hours ago
Kind of feels that way.