adastra22 2 days ago

It feels like the same thing to me…

MadnessASAP a day ago | parent [-]

With more coffee in me, another way to put it: the neural networks in an LLM use dense layers, where every neuron takes the output of every neuron in the previous layer and feeds its own output to every neuron in the next layer. A rough NumPy sketch of what "dense" means computationally (sizes here are made up for illustration):
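
    import numpy as np

    # Dense layer sketch: every output neuron is a weighted sum
    # over *every* input neuron. n_in/n_out are arbitrary.
    rng = np.random.default_rng(0)
    n_in, n_out = 4, 3
    W = rng.normal(size=(n_out, n_in))  # one weight per (output, input) pair
    b = np.zeros(n_out)

    def dense(x):
        return np.tanh(W @ x + b)       # each output depends on all inputs

    x = rng.normal(size=n_in)
    print(dense(x))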

A brain doesn't have layers and uses sparse connections: any neuron can connect to any other neuron, but no neuron connects to every other neuron. You can recreate this structure on a computer, but how do you decide where your inputs and outputs are? How do you train it? Since it never halts, how do you know when to take the output? A rough sketch of the sparse, layerless case below makes those awkward questions concrete (the connectivity, step cutoff, and output choice are all arbitrary assumptions):
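
    import numpy as np

    # Sparse, layerless network sketch: neurons are graph nodes,
    # edges are drawn at random (any neuron *can* connect to any
    # other, but most pairs aren't connected), and state is updated
    # repeatedly with no natural halting point.
    rng = np.random.default_rng(0)
    n = 20
    mask = rng.random((n, n)) < 0.1         # ~10% of possible edges exist
    W = np.where(mask, rng.normal(size=(n, n)), 0.0)

    state = rng.normal(size=n)
    for step in range(50):                  # when do we stop? arbitrary cutoff
        state = np.tanh(W @ state)

    # Which neurons are "outputs"? We have to pick some by fiat.
    print(state[:3])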

There's a reason CS loves its graphs directed and acyclic: they're a lot easier to reason about that way.