mrandish · 3 hours ago
> Can such an algorithm reason about itself in relation to others?

No, but an LLM doesn't do that either. An LLM is an algorithm that generates text output which can simulate how humans describe reasoning about themselves in relation to others. Humans do that by using words to describe what they internally experienced. LLMs do it by calculating the statistical weight of linguistic symbols, based on a composite of the human-generated text samples in their training data. LLMs never experienced what their textual output describes. It's more like a pocket calculator manipulating symbols in relation to other symbols, scaled up massively.
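The "statistical weight of linguistic symbols" point can be illustrated with a deliberately tiny bigram model. This is a toy sketch, not how transformer LLMs actually work (they learn contextual representations, not raw bigram counts); the corpus and the `next_token_probs` helper are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus: stands in for the human-generated text an LLM trains on.
corpus = "i think therefore i am . i think so .".split()

# Count which token follows which -- the "statistical weight" each symbol
# carries relative to the symbol before it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_probs(prev):
    # Turn raw follow-counts into a probability distribution.
    counts = follows[prev]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

print(next_token_probs("i"))  # {'think': 0.666..., 'am': 0.333...}
```

The model emits "i think" more often than "i am" purely because of the counts; nothing in it experienced thinking or being, which is the commenter's point scaled down to a few lines.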
digitaltrees · 2 hours ago
Toddlers learn over the course of several years of observing training data, and for the first few years they misspeak about themselves and others. What's the difference?
digitaltrees · 2 hours ago
How are you sure it doesn’t reason about itself? The grammar of languages encodes the concepts of self and other. LLMs operate with those grammatical structures, and do so in increasingly accurate ways. Why would we say that humans who exhibit the same behavior are inherently more likely to be conscious?