ostinslife 5 hours ago
If you define "deceive" as something language models cannot do, then sure, they can't do that. But that seems like putting the cart before the horse. Algorithmic or stochastic, deception is still deception.
dingnuts 5 hours ago | parent
deception implies intent. this is confabulation, more widely called "hallucination" until this thread. confabulation doesn't require knowledge, and as we know, the only knowledge a language model has is the relationships between tokens. sometimes that rhymes with reality enough to be useful, but it isn't knowledge of facts of any kind, and never has been.