xyzzy123 2 days ago
Maybe? But it also seems like you are not accounting for new information at inference time. Let's pretend I agree that the LLM is a plagiarism machine that can produce no novelty of its own beyond what it was trained on, and that it produces mostly garbage (I only half agree, lol, and I think "novelty" is under-specified here). When I apply that machine (with its giant pool of pirated knowledge) _to my inputs and context_, I can get results applicable to my modestly novel situation, which is not in the training data. Perhaps the output is garbage. Naturally, if my situation is way out of distribution I cannot expect very good results. But I often don't care if the results are garbage some (or even most!) of the time, as long as I have a way to ground-truth whether they are useful to me. That might be via a compile, a test suite, a theorem prover, or the mk1 eyeball. Of course, the name of the game is to get agents to do this themselves, and that is now fairly standard practice. A rough sketch of that loop is below.
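Here is a minimal Python sketch of that ground-truthing loop, under some assumptions: llm_generate is a stand-in for whatever model or API you call, generated_module.py is the path the project's tests import from, and the test suite lives under tests/; all three names are made up for illustration.

    # Sketch of a generate-then-verify loop: a candidate is only accepted if
    # an external check (compile + test suite) passes. Names are illustrative.
    import subprocess
    from pathlib import Path

    def llm_generate(prompt: str, feedback: str = "") -> str:
        """Placeholder for a real model/API call; returns candidate source code."""
        raise NotImplementedError

    def passes_checks(code: str) -> tuple[bool, str]:
        """Ground-truth check: write the candidate out, compile it, run the tests."""
        target = Path("generated_module.py")  # assumed module path the tests import
        target.write_text(code)
        compiled = subprocess.run(
            ["python", "-m", "py_compile", str(target)],
            capture_output=True, text=True)
        if compiled.returncode != 0:
            return False, compiled.stderr
        tests = subprocess.run(
            ["python", "-m", "pytest", "tests/", "-q"],  # assumed existing test suite
            capture_output=True, text=True)
        return tests.returncode == 0, tests.stdout + tests.stderr

    def generate_until_grounded(prompt: str, max_attempts: int = 5) -> str | None:
        """Retry until the external check passes, feeding failure output back in."""
        feedback = ""
        for _ in range(max_attempts):
            candidate = llm_generate(prompt, feedback)
            ok, feedback = passes_checks(candidate)
            if ok:
                return candidate
        return None

The point isn't that any single sample is trustworthy; it's that a deterministic external check decides acceptance, so even a low hit rate can still be useful.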
measurablefunc 2 days ago
I'm not here to convince you whether Markov chains are helpful for your use cases or not. I know from personal experience that even in cases where I have a logically constrained query, I will receive completely nonsensical responses¹.

¹https://chatgpt.com/share/69367c7a-8258-8009-877c-b44b267a35...
| ||||||||||||||||||||||||||