storus 5 hours ago:
This was obviously a simplification that holds at zero temperature. Top-p sampling will add some randomness, but the probability of unexpected longer sequences goes to zero asymptotically fairly quickly.
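A back-of-the-envelope sketch of that decay (a hedged illustration, not from the comment itself; the per-step bound q is an assumed idealization, and treating steps as independent ignores conditioning on the prefix):

    # Upper bound on sampling a run of n "unexpected" tokens when each
    # such token survives top-p truncation with probability at most q.
    # Steps are treated as independent, which real autoregressive
    # sampling isn't, but the geometric bound conveys the decay.
    def max_run_probability(q: float, n: int) -> float:
        return q ** n

    for n in (1, 5, 10, 20):
        print(n, max_run_probability(0.05, n))
    # n=20 already gives ~9.5e-27: long unexpected runs are vanishingly rare.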
eru an hour ago (parent):
I'm not sure what the point is? A bog-standard random number generator, or even a flipped coin, can produce novel output at will. That's a weird thing to fault LLMs for: novelty is easy! See also how genetic algorithms and reinforcement learning constantly solve problems in novel and unexpected ways. Compare also antibiotic resistance in the real world. You don't need smarts for novelty.

Where I see the problem is producing output that's both high quality _and_ novel, on command, to solve the user's problem.
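A trivial illustration of the "novelty is easy" point (a sketch assuming uniform sampling; nothing here is from the comment itself):

    import random
    import string

    # Draw a 40-character lowercase string uniformly at random. With
    # 26**40 (~4e56) possibilities, it has almost certainly never been
    # produced before: novel output, no intelligence required.
    novel = "".join(random.choice(string.ascii_lowercase) for _ in range(40))
    print(novel)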