adastra22 | 3 days ago
LLMs have loops. The output is fed back in for the next prediction cycle. How is that not the same thing?
MadnessASAP | 2 days ago | parent
Wish I had a great answer for you, but I don't. It certainly allows for more thought-like LLMs with the reasoning-type models. I guess the best answer is that the loop only happens at a single discrete place and doesn't carry any of the internal layer context across.

Another answer might be: how many comments did you read today and not reply to? Did you write a comment by putting down a word and then deciding what the next one should be? Or did you have a full thought in mind before you even began typing a reply?

So, how is it not the same thing? Because it isn't.
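To make the "single discrete place" point concrete, here is a minimal sketch of a greedy autoregressive decoding loop. The `next_token_logits` function is a hypothetical stand-in for a transformer forward pass, not any real library's API; the point is that the only state carried from one step to the next is the growing token sequence, while internal layer activations exist only inside each forward call.

```python
import random

def next_token_logits(tokens: list[int], vocab_size: int = 50000) -> list[float]:
    # Hypothetical stand-in for a transformer forward pass.
    # Internal activations live only inside this call; nothing but
    # the returned logits escapes to the outer loop.
    random.seed(sum(tokens))  # deterministic toy behaviour for the sketch
    return [random.random() for _ in range(vocab_size)]

def generate(prompt_tokens: list[int], max_new: int = 20, eos: int = 0) -> list[int]:
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        logits = next_token_logits(tokens)  # re-reads the whole sequence each step
        next_tok = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
        tokens.append(next_tok)  # the only thing carried forward to the next step
        if next_tok == eos:
            break
    return tokens
```

Reasoning-type models don't change this structure; they just spend more of the token budget on intermediate "thinking" tokens before the visible answer, which is why they feel more thought-like.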