Frannky 2 days ago
I model LLMs as searchers. Give them an input and they search for a matching output. The sheer scale of the parameters and training data lets them map inputs to outputs in a way where the search looks like human thinking. They can also permute a little and still stay in a space that overlaps with reality.

The human brain may be doing something very similar: search and permutation via learned rules. It may just be doing it in a more functional way, with the ability to search over massive data that has holes but gets filled in with synthetic data produced by mental subprocesses applying learned rules. I think machines can eventually get there, especially if we can figure out how to harness continuous models instead of discrete ones. And I have a feeling that functional analysis may be the key.
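As a rough illustration of the search-plus-permutation framing, here is a toy sketch (a caricature, not how any real LLM works; the corpus, embedding size, and noise scale below are all made up for illustration): generation is treated as embedding the input, retrieving the nearest stored continuation, and nudging it with a little noise.

    # Toy "search plus permutation" caricature of generation.
    # All data here is random and hypothetical, purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for learned (context -> continuation) associations.
    memory_keys = rng.normal(size=(1000, 64))    # embedded contexts
    memory_values = rng.normal(size=(1000, 64))  # embedded continuations

    def generate(context_vec, noise=0.05):
        """Return the stored continuation nearest to the context,
        plus a small random 'permutation' so the output is not pure recall."""
        dists = np.linalg.norm(memory_keys - context_vec, axis=1)
        best = np.argmin(dists)                    # the "search" step
        perturbation = noise * rng.normal(size=64)
        return memory_values[best] + perturbation  # the "permutation" step

    query = rng.normal(size=64)
    print(generate(query)[:5])  # first few dims of the retrieved-and-perturbed output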
fy20 2 days ago | parent
It's an interesting way to think about it. For every word you say, every message you write, every task you do, every thought you have, every subtle cue you give, there is a statistically best response / follow-up / output. And all of that can be distilled and stored in such a small amount of data. If that's really how consciousness works in our minds (just another representation of "output"), it's fascinating.

The repercussions could be concerning, though. On one hand it means things like consciousness uploading would be possible. On the other hand it means security agencies could monitor people and figure out who is (literally) committing thoughtcrime. They'd just need to search the space and figure out what weights a person's internal model runs on, and you wouldn't actually need that much reference material to do it. Basically Minority Report.