astrange 3 days ago:
> no, it's not. it's capable of assembling words that are likely to appear near other words in a way that you can occasionally process yourself as a coherent thought.

It doesn't emit words at all. It emits subword tokens. The fact that it can assemble words from them (let alone sentences) shows it's doing something you're not giving it credit for.

> literally the most average-possible advice

"Average" is clearly magical thinking here. The "average" text would be the letter 'e'. And the average response from a base model LLM isn't the answer to a question, it's another question.
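For what it's worth, you can inspect the subword pieces directly. A minimal sketch, assuming the tiktoken library (my choice for illustration; any BPE tokenizer shows the same thing):

    import tiktoken  # OpenAI's open-source BPE tokenizer

    # cl100k_base is the encoding used by GPT-4-era models.
    enc = tiktoken.get_encoding("cl100k_base")

    # The model never sees or emits whole words, only integer token ids.
    ids = enc.encode("antidisestablishmentarianism")

    # Decoding each id on its own exposes the subword fragments the model
    # actually works with; a long word typically splits into several pieces.
    print([enc.decode_single_token_bytes(t) for t in ids])

    # Round-tripping the ids reconstructs the original string exactly.
    assert enc.decode(ids) == "antidisestablishmentarianism"

Each id maps to a byte sequence, not a word; the words you read are reassembled from those fragments at decode time.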
GuinansEyebrows 3 days ago:
i'm comfortable enough including the backend process of assembling strings that appear to be words in the general description of "assembling words".

re: average - that's at the character level, not the string level or the conceptual level that these tools essentially emulate. basically nobody would interpret "eeee ee eeeeee eee eeeeeeee eee ee" as any type of recognizable organized communication (besides dolphins).