VikingCoder 2 days ago

In grad school, I wrote an ant simulator. There was a 2D grid of squares. I put ant food all over it, in hard-coded locations. Then I had a neural network for an ant. The inputs were "is there any food to the left? to the diagonal left? straight ahead? to the diagonal right? to the right?" The outputs were "turn left, move forward, turn right."

Then I had a multi-layer network - I don't remember how many layers.
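
To make that concrete, here is a minimal sketch of the kind of network being described. The layer sizes and names are my own guesses, with a single hidden layer standing in for whatever the original had: five boolean food sensors in, one of three actions out.

    import numpy as np

    N_INPUTS = 5    # food to the left / diagonal left / ahead / diagonal right / right
    N_HIDDEN = 8    # arbitrary for the sketch; the original had some number of middle layers
    N_OUTPUTS = 3   # turn left, move forward, turn right

    def random_weights(rng):
        return (rng.normal(size=(N_INPUTS, N_HIDDEN)),
                rng.normal(size=(N_HIDDEN, N_OUTPUTS)))

    def act(weights, senses):
        """senses: length-5 array of 0/1 food readings; returns 0, 1, or 2."""
        w_in, w_out = weights
        hidden = np.tanh(senses @ w_in)
        return int(np.argmax(hidden @ w_out))

    # Example: an ant that sees food straight ahead.
    rng = np.random.default_rng(0)
    print(act(random_weights(rng), np.array([0, 0, 1, 0, 0])))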

Then I used a simple genetic algorithm to try to set the weights.

Essentially, it was like breeding up a winner for the game Snake, except the food was always in the same places and the ant always started in the same square. I was trying to maximize the score: how many food items the ant would eventually find.
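
The GA-over-weights loop might have looked roughly like the sketch below. This is my own reconstruction, not the original code; run_episode here is just a placeholder for the real simulation, which would decode the genome into network weights, walk the ant over the hard-coded grid from its fixed start, and count the food it found.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_genome():
        # Flattened weights for a 5-input, 8-hidden, 3-output net.
        return rng.normal(size=5 * 8 + 8 * 3)

    def run_episode(genome):
        # Placeholder fitness so the sketch runs end to end; the real version
        # would simulate the ant on the fixed grid and return food eaten.
        return -float(np.sum(genome ** 2))

    def mutate(genome, sigma=0.1):
        return genome + sigma * rng.normal(size=genome.shape)

    def evolve(pop_size=100, generations=200, elite_frac=0.1):
        pop = [random_genome() for _ in range(pop_size)]
        n_elite = max(1, int(pop_size * elite_frac))
        for _ in range(generations):
            pop.sort(key=run_episode, reverse=True)   # best ants first
            elite = pop[:n_elite]
            # Refill by mutating copies of the elite (no crossover in this sketch).
            pop = elite + [mutate(elite[rng.integers(n_elite)])
                           for _ in range(pop_size - n_elite)]
        return max(pop, key=run_episode)

    best_genome = evolve()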

In retrospect, it was pretty stupid. Too much of it was hard-coded, and I didn't have nearly enough hidden layers to do anything really interesting. And I was essentially coming up with a way to avoid doing back-propagation.

At the time, I convinced myself I was selecting for instinctive knowledge...

And I was very excited by research that said that, rather than having one pool of 10,000 ants...

it was better to have 10 islands of 1,000 ants, and to occasionally let genetic information migrate from one island to another. The research claimed the overall system would converge faster.
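
The island model is easy to sketch: evolve several populations independently, and every so often copy each island's best genomes over a neighbor's worst. In the sketch below, fitness is again just a placeholder for "food found by the ant," and I've shrunk the population sizes from the 10 x 1,000 mentioned above so it runs quickly.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(genome):
        # Stand-in for the real ant score, so the sketch is runnable.
        return -float(np.sum(genome ** 2))

    def step_island(pop, sigma=0.1, elite_frac=0.1):
        """One generation of truncation selection + mutation on one island."""
        pop.sort(key=fitness, reverse=True)
        n_elite = max(1, int(len(pop) * elite_frac))
        elite = pop[:n_elite]
        return elite + [elite[rng.integers(n_elite)] + sigma * rng.normal(size=elite[0].shape)
                        for _ in range(len(pop) - n_elite)]

    def run_islands(n_islands=10, island_size=100, generations=200,
                    migrate_every=25, n_migrants=5, genome_len=64):
        islands = [[rng.normal(size=genome_len) for _ in range(island_size)]
                   for _ in range(n_islands)]
        for gen in range(1, generations + 1):
            # Each island evolves independently; this part parallelizes trivially.
            islands = [step_island(pop) for pop in islands]
            if gen % migrate_every == 0:
                for i, pop in enumerate(islands):
                    # Copy this island's best genomes over the next island's worst.
                    migrants = sorted(pop, key=fitness, reverse=True)[:n_migrants]
                    dest = islands[(i + 1) % n_islands]
                    dest.sort(key=fitness)
                    dest[:n_migrants] = [m.copy() for m in migrants]
        return max((g for pop in islands for g in pop), key=fitness)

    best = run_islands()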

I thought that was super cool, and it made me excited that easy parallelism would be rewarded.

I daydream about all of that, still.