wzdd 7 hours ago

They've basically taken three separate models, one for fly vision, one for the fly brain, and one for the fly body, and bolted them together Frankenstein-style.

They've taken the connectome, which is a map of how neurons in the brain are connected to each other, and then created a fly brain model using artificial spiking neurons wired together according to that connectome.

So the individual neurons are not remotely biologically accurate. The interesting point they're making is that even with these simplified neurons they still see plausible behaviours (e.g. stimulating a gustatory neuron to simulate the presence of sugar triggers the neurons associated with lowering the proboscis for feeding). So they make the case that a lot of information is encoded simply in the connectome.
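
The sugar-to-proboscis cascade can be sketched as a toy leaky-integrate-and-fire network driven by a connectome-style weight matrix. Everything here is hypothetical: the three-neuron chain, the weights, and the parameters are illustrative stand-ins, not anything from the actual model.

```python
import numpy as np

# Toy 3-neuron chain standing in for one connectome pathway:
# gustatory (0) -> interneuron (1) -> proboscis motor neuron (2).
# Weights are made up; a real connectome matrix is vastly larger.
W = np.array([
    [0.0, 2.0, 0.0],   # gustatory excites the interneuron
    [0.0, 0.0, 2.0],   # interneuron excites the motor neuron
    [0.0, 0.0, 0.0],
])

def simulate(steps=50, tau=10.0, v_thresh=1.0, stim=0.3):
    """Leaky integrate-and-fire dynamics over W. External current
    `stim` (the simulated sugar) is injected into neuron 0 only."""
    v = np.zeros(3)
    spike_counts = np.zeros(3, dtype=int)
    for _ in range(steps):
        fired = v >= v_thresh
        spike_counts += fired
        i_syn = W.T @ fired.astype(float)  # spikes travel along edges
        v[fired] = 0.0                     # hard reset after a spike
        v += -v / tau + i_syn              # leaky integration
        v[0] += stim                       # constant "sugar" drive
    return spike_counts
```

With the stimulus on, the motor neuron ends up firing even though only the gustatory neuron receives input, which is the flavour of result the authors report; with `stim=0.0` nothing fires.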

The body isn't connected up to the brain in the way we'd expect. "Input" comes from a completely separate neural network which they've trained to simulate appropriate CNS neurons, and output is read off "descending" (efferent, I guess) neurons in a very basic way. It's not completely playing an animation, but the level of connectivity is very low-dimensional. It's not clear how much control they have, but I imagine, for example, that they have a spiking threshold for the proboscis below which it's lowered and above which it's raised, which is sort of like being able to stick your tongue completely out or pull it completely in, but nothing in between.
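
That kind of low-dimensional readout amounts to something like this. To be clear, this is my guess at the shape of it, not their code; the function name, units, and threshold are all invented.

```python
# Hypothetical readout: squash a descending neuron's firing rate
# into a single all-or-nothing body command, the "tongue fully out
# or fully in" situation described above.
def proboscis_command(descending_rate_hz: float,
                      threshold_hz: float = 5.0) -> str:
    """Map a descending neuron's spike rate to a binary actuator
    command. Threshold is illustrative, not from the paper."""
    return "extend" if descending_rate_hz >= threshold_hz else "retract"
```

The point is the information bottleneck: however rich the descending activity is, the body only ever sees one bit per actuator.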

So it's not especially bioplausible. The most interesting part is the leaky integrate-and-fire, connectome-based brain model they're using, even though it too is very limited (for example, it doesn't learn).
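
For anyone unfamiliar with the term, a leaky integrate-and-fire neuron is about as simple as spiking models get: a single voltage that leaks toward rest, charges with input, and resets when it crosses a threshold. A minimal discretized sketch, with illustrative parameters:

```python
def lif_trace(i_input, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over a list of
    input currents. Returns (voltage trace, spike time indices).
    Parameters are illustrative, not the paper's."""
    v, voltages, spikes = 0.0, [], []
    for t, i in enumerate(i_input):
        v += dt * (-v / tau + i)   # leak toward 0, integrate input
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(t)
            v = v_reset            # hard reset; no adaptation,
        voltages.append(v)         # no plasticity, nothing learned
    return voltages, spikes
```

Note what's missing: no dendrites, no ion channels, no synaptic plasticity. That's the sense in which "a lot of information is in the connectome" is the claim, since the neurons themselves contribute so little.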

Demo looks very cool though. Much credit to them for being so explicit in the post about what's going on. And I was immediately filled with ideas about what they could do next to improve it, which to me is a signal of good research.