hackinthebochs 3 hours ago
I'm not exactly sure what you mean by deterministic code, but I do think there is an obvious distinction between typical code people write and what human minds do. The guy upthread is definitely wrong in thinking that, e.g., any search or minimax algorithm is thinking. But it's important to understand what this distinction is so we can spot when it might no longer apply.

To make a long story short, the distinction is that typical programs don't operate on the semantic features of program state, just on the syntactical features. We set up a correspondence between the syntactical program features and their transformations on one side, and the real-world semantic features and logical transformations on them on the other. The execution of the program then tells us the outcomes of the logical transformations applied to the relevant semantic features. We get meaning out of programs because of this analogical correspondence.

LLMs are a different computing paradigm because they operate on the semantic features of program state directly. Embedding vectors assign semantic features to syntactical structures of the vector space, so operations on those syntactical structures let the program engage with the meaning of program state and alter its execution accordingly. It's still deterministic, but it's a fundamentally richer programming paradigm, one that bridges the gap between program state as syntactical structure and the meaning it represents. This is why I'm optimistic that current or future LLMs should be considered properly thinking machines.
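A minimal sketch of what "operating on semantic features via syntactical structure" means, using made-up 4-d vectors rather than real embeddings: plain arithmetic on the vector-space syntax tracks a semantic relation between the words (the classic king - man + woman ≈ queen analogy).

    # Toy 4-d "embeddings" (made up for illustration, not from a real model).
    import numpy as np

    emb = {
        "king":  np.array([0.9, 0.8, 0.1, 0.0]),
        "queen": np.array([0.9, 0.1, 0.8, 0.0]),
        "man":   np.array([0.1, 0.9, 0.1, 0.0]),
        "woman": np.array([0.1, 0.1, 0.9, 0.0]),
    }

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Purely syntactical operations on the vectors land near the semantically
    # analogous word: king - man + woman is closest to queen.
    target = emb["king"] - emb["man"] + emb["woman"]
    print(max(emb, key=lambda w: cosine(emb[w], target)))  # -> queen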
emodendroket 3 hours ago
LLMs are not deterministic in practice: the same input leads to different outputs because the next token is sampled at random. But I think there's still the question of whether this process is more similar to thought or to a Markov chain.
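A tiny sketch of where that randomness lives, with made-up logits standing in for a model's real output: the forward pass yields a fixed next-token distribution, and it's the sampling step that makes runs differ (greedy argmax decoding would be deterministic).

    # Hypothetical next-token logits; a real model computes these deterministically.
    import numpy as np

    logits = np.array([2.0, 1.0, 0.1])
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: identical every run

    greedy = int(np.argmax(probs))                        # deterministic: always token 0
    sampled = int(np.random.choice(len(probs), p=probs))  # differs run to run
    print(greedy, sampled)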