dpoloncsak 2 days ago
It's a complex adaptive system, right? Isn't that the whole idea? We know how each part of the system works by itself. We know all the inputs and can measure the outputs. But even if I actually understood the math, I still cannot tell you "if you prompt 'x', the model will return 'y'" with 100% confidence.
sirwhinesalot 2 days ago
> If you prompt 'x', the model will return 'y' with 100% confidence.

We can do this for smaller models, which means it's a problem of scale and computing power rather than a fundamental limitation.

The situation with the human brain is completely different. We know that neurons exchange information and how that works, and we have a pretty good understanding of the architecture of some parts of the brain, like the visual cortex, but we have no idea of the architecture as a whole.

With an LLM we know the architecture. We know how the data flows. We know what the individual neurons are learning (cuts and bends of a hyperplane in a high-dimensional space). We know how the weights are learned (backpropagation). We know the "algorithm" the LLM as a whole approximates (List<Token> -> Token).

Yes, there are emergent properties we don't understand, but the same is true of a spam filter. Comparing this to our lack of understanding of the human brain and discussing whether these things might be "conscious" is silly.
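To make the List<Token> -> Token point concrete, here is a minimal Python sketch (names like `logits_fn`, `greedy_next_token`, and `toy_logits` are illustrative, not any real library's API): with fixed weights and greedy decoding, the mapping from prompt to output is fully determined.

    from typing import Callable, List

    # Tokens are just integer IDs into a fixed vocabulary.
    Token = int

    def greedy_next_token(logits_fn: Callable[[List[Token]], List[float]],
                          context: List[Token]) -> Token:
        """Pick the highest-scoring next token (argmax, i.e. temperature 0).

        With the weights fixed and no sampling, the same context always
        yields the same token, so "prompt x -> output y" is determined
        in principle; the difficulty is scale, not anything mysterious.
        """
        logits = logits_fn(context)
        return max(range(len(logits)), key=lambda i: logits[i])

    def generate(logits_fn: Callable[[List[Token]], List[float]],
                 prompt: List[Token], n: int) -> List[Token]:
        """Repeatedly apply the List[Token] -> Token step to extend the sequence."""
        out = list(prompt)
        for _ in range(n):
            out.append(greedy_next_token(logits_fn, out))
        return out

    # Toy stand-in for a trained model over a 2-token vocabulary: favors
    # token 1 after a 0 and token 0 otherwise. Running this twice with the
    # same prompt always prints the same sequence.
    def toy_logits(context: List[Token]) -> List[float]:
        return [0.1, 0.9] if context and context[-1] == 0 else [0.9, 0.1]

    print(generate(toy_logits, [0], 5))

The real model's `logits_fn` is a giant learned function of the whole context, which is exactly where the scale problem lives, but the surrounding loop is this simple.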