| ▲ | hackinthebochs 3 days ago |
| This is a bad take. We didn't write the model; we wrote an algorithm that searches the space of models conforming to the high-level constraints of the stacked transformer architecture. But stacked transformers are a very general computational paradigm. Training converges the parameters to a specific model that reproduces the training data well, but the computational circuits the model picks out are discovered, not programmed. The emergent structures realize new computational dynamics that we are mostly blind to. We are not the programmers of these models; rather, we are their incubators. As for sentience, we can't say they aren't sentient, because we know neither the computational structures these models realize nor the computational structures required for sentience. |
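To make the distinction concrete, here is a minimal sketch (PyTorch, with a toy feed-forward stack standing in for a stacked transformer; all shapes and hyperparameters are illustrative). What we author is the architectural template plus the search procedure; the specific weight configuration the loop converges to, the "model" in the comment's sense, is discovered rather than written:

```python
import torch
import torch.nn as nn

# What we write by hand: a general computational template
# (a toy stand-in for the stacked transformer architecture).
arch = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 16),
)

# What we also write: the algorithm that searches the template's
# parameter space for a model that fits the data.
opt = torch.optim.SGD(arch.parameters(), lr=1e-2)
x = torch.randn(256, 16)   # toy "training data"
y = torch.randn(256, 16)

for step in range(1000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(arch(x), y)
    loss.backward()
    opt.step()

# What we did NOT write: the specific values arch's parameters now
# hold, i.e. the computational circuits the search discovered.
```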
|
| ▲ | almosthere 3 days ago | parent [-] |
| However, there is another big problem: this would require a blob of data in a file to be labeled "alive" even if it's sitting on a disk in a garbage dump with no CPU or GPU anywhere near it. The inference software that would normally read that file is also not alive; it's literally very concise code that we wrote to traverse the file. So if the disk isn't alive, the file on it isn't alive, and the inference software isn't alive, then what are you saying is alive and thinking? |
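For what it's worth, the "concise code traversing a file" picture looks roughly like this sketch (NumPy; the flat-binary weight format, the layer shapes, and ReLU standing in for full attention/MLP blocks are all hypothetical simplifications):

```python
import numpy as np

def load_blob(path, shapes):
    """Read raw float32 weights for each layer from a flat binary file."""
    raw = np.fromfile(path, dtype=np.float32)
    weights, offset = [], 0
    for rows, cols in shapes:
        n = rows * cols
        weights.append(raw[offset:offset + n].reshape(rows, cols))
        offset += n
    return weights

def infer(weights, x):
    """Traverse the blob: one matmul plus a nonlinearity per layer."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x
```

On this view, the interesting question, taken up in the reply below, is what happens when that loop actually runs.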
| ▲ | hackinthebochs 3 days ago | parent | next [-] | | This is an overly reductive view of a fully trained LLM. You have identified the pieces, but you miss the whole. The inference code is like a circuit builder: it implements the high-level matmuls and the potential paths for dataflow. The data blob, as the fully converged model, configures this circuit builder by specifying the exact pathways information flows through the system. But this isn't some inert formalism; it is an active, potent causal structure, realized by the base computational substrate, that influences and is influenced by the world. If anything is conscious here, it would be this structure. If the computational theory of mind is true, then there are some specific information dynamics that realize consciousness. Whether or not LLM training finds those structures is an open question. | |
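A toy way to see the "circuit builder" framing: the inference code below is identical across runs, yet the blob alone determines which computation is actually realized when it executes (NumPy sketch; the weight matrices are hand-picked for illustration):

```python
import numpy as np

def infer(w, x):
    # The fixed "circuit builder": the same code runs for every blob.
    return np.maximum(x @ w, 0.0)

x = np.array([1.0, 2.0])

blob_a = np.eye(2)               # routes each channel straight through
blob_b = np.array([[0.0, 1.0],
                   [1.0, 0.0]])  # swaps the channels: a different pathway

print(infer(blob_a, x))  # [1. 2.]
print(infer(blob_b, x))  # [2. 1.]
# Same traversal code, different realized dataflow; at LLM scale the
# blob specifies billions of such pathway choices at once.
```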
| ▲ | goatlover 3 days ago | parent | prev | next [-] | | A similar point was made by Jaron Lanier in his paper "You Can't Argue with a Zombie". | |
| ▲ | electrograv 3 days ago | parent | prev [-] | |
> So if the disk isn't alive, the file on it isn't alive, the inference software is not alive - then what are you saying is alive and thinking?

“So if the severed head isn’t alive, the disembodied heart isn’t alive, the jar of blood we drained out isn’t alive - then what are you saying is alive and thinking?”

- Some silicon alien life forms somewhere, debating whether the human life form they just disassembled could ever be alive and thinking | | |
| ▲ | almosthere 2 days ago | parent [-] | | Just because you can reply with "HA, he used an argument that I can compare to a dead human" does not make your argument strong: there are many differences between a file on a computer and a murdered human who will never come back and think again. |
|
|