fasterik 3 hours ago:

It has been proven that recurrent neural networks are Turing complete [0]. So for every computable function, there is a neural network that computes it. That says nothing about size or efficiency, but in principle it allows neural networks to simulate a wide range of intelligent and creative behavior, including the kind of extrapolation you're talking about.

[0] https://www.sciencedirect.com/science/article/pii/S002200008...
legulere an hour ago:
I don't think you can go from "any Turing machine is representable as a neural network" to claims about the prowess of learned neural networks, as opposed to specifically crafted ones. A good example is arithmetic or counting letters: it's trivial to write a Turing machine that does those correctly, so you could craft a neural network that does just that. Yet from LLMs we know that learned networks are bad at those tasks.
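To make the "specifically crafted" point concrete, here is a minimal sketch (all names illustrative, not from any paper): a single-unit recurrent "network" whose weights are set by hand rather than learned. The recurrent weight is 1 and the input weight is 1 only for the target letter, so the hidden state is exactly the running count — trivially correct by construction, unlike a learned model.

```python
# A hand-crafted single-unit "RNN" that counts occurrences of a target
# letter. The hidden state is the running count; the crafted (not
# learned) input weight is 1 for the target letter and 0 otherwise.

def make_letter_counter(target):
    def step(state, char):
        # state_{t+1} = 1 * state_t + w(char), with w(target) = 1
        return state + (1 if char == target else 0)

    def run(text):
        state = 0  # initial hidden state
        for ch in text:
            state = step(state, ch)
        return state

    return run

count_r = make_letter_counter("r")
print(count_r("strawberry"))  # → 3
```

By fixing the weights we guarantee correctness on every input; gradient-based training offers no such guarantee, which is the gap the comment above is pointing at.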
gmueckl an hour ago:
Turing completeness is not associated with creativity or intelligence in any straightforward manner. One does not unconditionally imply the other.