mikewarot | 2 days ago
> The only "anticipation" that is happening within those tools is on token level

When you were composing your reply, did you just start typing, then edit and recompose your thoughts a few times before hitting the reply button? I ask because that's what I do. Most of the time I never know what the next word is going to be; I just start typing. Sometimes I'll think it out, or even type out a whole screed until I run out of thoughts... then review it several times before hitting "reply". By your logic, I'm no more advanced than any other LLM.

I think there's a serious misunderstanding of the depth at which the internal state of the LLM is maintained across token outputs. It's just doing the same thing I do (and, I suspect, what most other people do: decide, then make up a convincing story that agrees with the decision, on a word-by-word basis).

Other times, when I'm trying to explain something technical or complex, there's a word or a name I can't remember... it drives me nuts. If I'm in a hurry, I'll just use a synonym that's almost as good and work around it. Yesterday, for example, it took me a while to remember the name Christopher Walken, via the Fatboy Slim video on YouTube.

The only difference is that we have the ability to edit first, before the all-powerful "reply" button. Then, of course, there's editing after the fact... but that's like agentic LLMs.
igor47 | 2 days ago
When I type stuff, I usually have an idea of what I want to communicate, the language and tone I'm going for based on the context of the conversation, a mental model of my audience, and a goal or set of goals for what I'm trying to accomplish. It's pretty rare that I'm literally just generating one word at a time. YMMV.
DauntingPear7 | 2 days ago
L take |