gobdovan 3 hours ago

I think the underlying explanation is that both fields deal with very large state spaces, so the forms converge somewhat.

I think the contrast is more interesting: exact discrete trajectories in cryptography versus approximate continuous function approximation in neural networks.

In cryptography, you usually want a state space so large that nobody can accidentally find, reconstruct, or predict the same path you took.
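To make "so large nobody can accidentally find the same path" concrete, here's a minimal back-of-the-envelope sketch. The key size and guess rate are illustrative assumptions, not figures from any particular system:

```python
# Why large cryptographic state spaces resist search: even at an
# absurdly generous brute-force rate, a 256-bit key space is untouchable.
KEY_BITS = 256
keyspace = 2 ** KEY_BITS

guesses_per_second = 10 ** 18          # hypothetical: one exa-guess per second
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (guesses_per_second * seconds_per_year)
print(f"~{years:.1e} years to enumerate a {KEY_BITS}-bit key space")
```

The point is not the exact number but the scale: no amount of hardware closes a gap like that, which is exactly why stumbling onto someone else's path is not a practical concern.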

In neural networks, you want an immense initial search space because NNs need to model the real world, which is highly complex and contains patterns that appear unpredictably. One aspect I think is often overlooked is that NNs are mostly deletive: they start with a very broad representational space and become progressively more specific by discarding distinctions they learn to treat as irrelevant.
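The "deletive" idea can be illustrated with magnitude pruning, one common way of discarding connections a trained network doesn't rely on. This is just a sketch on random weights (the threshold and keep fraction are arbitrary), not a claim about any specific training setup:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))  # stand-in for a broad initial parameter space

def prune(w, keep_fraction=0.25):
    """Zero out all but the largest-magnitude weights (magnitude pruning)."""
    threshold = np.quantile(np.abs(w), 1 - keep_fraction)
    return np.where(np.abs(w) >= threshold, w, 0.0)

pruned = prune(weights)
print(f"nonzero before: {np.count_nonzero(weights)}, after: {np.count_nonzero(pruned)}")
```

The surviving weights are the "relevant distinctions"; everything below the threshold is treated as noise and dropped, which is the deletive move in miniature.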

I think this puts the article's point about complexity and mixing in a clearer light. The same class of procedures achieves almost opposite effects. In neural networks, you want mixing so the model can approximate many possible paths at once. In cryptography, you want mixing so the path taken is unpredictable and hard to trace. The key difference is that, for NNs, an approximate path can be good enough. In cryptography, an approximate path is as useless as a very distant one.
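The cryptographic side of that contrast is easy to demonstrate: good mixing means a near-miss input tells you nothing. A quick sketch with SHA-256 (the message is arbitrary) shows the avalanche effect, where flipping one input bit changes roughly half of the output bits:

```python
import hashlib

msg = b"state space"
flipped = bytes([msg[0] ^ 1]) + msg[1:]   # flip a single input bit

h1 = hashlib.sha256(msg).digest()
h2 = hashlib.sha256(flipped).digest()

# Count the output bits that differ; for a well-mixed function this
# hovers around 128 of 256, so "close" inputs land nowhere near each other.
diff = bin(int.from_bytes(h1, "big") ^ int.from_bytes(h2, "big")).count("1")
print(f"{diff} of 256 output bits changed")
```

That's the sense in which an approximate path is worthless in cryptography: being one bit off leaves you as far from the target as a random guess, whereas a NN's loss surface is built so that nearby inputs and parameters give nearby outputs.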