fy20 2 days ago

It's an interesting way to think about it. For every word you say, every message you write, every task you do, every thought you have, every subtle cue you give, there is a statistically best response / follow up / output.

And all of that can be distilled and stored in such a small amount of data. If that's really how consciousness works in our minds (just another representation of "output"), it's fascinating.

The repercussions though could be concerning. On one hand it means things like consciousness upload will be possible. On the other hand it means security agencies can monitor people and figure out who is (literally) committing thought crime. They'd just need to search the space and figure out what weights a person's internal model runs on - and you wouldn't actually need that much reference material to do it. Basically Minority Report.

Frannky 2 days ago | parent [-]

I think you are mixing two concepts. I was just talking about having an LLM that is able to replicate human thinking, which is different than having a specific person's brain turned into LLM weights.

In that second case, the problems you describe emerge. But I can understand why you conflate the two, since having a model that works like a human may unlock the ability to dump a brain into model weights.