coldtea 3 days ago

>there is really only one usable dataset: the world itself, which cannot be compacted or fed into a computer at high speed.

Why wouldn't it be? If the world is ingested via video and lidar sensors, what's the hangup in recording that input and then replaying it faster?
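
A minimal sketch of that record-and-replay idea (the log format here is made up for illustration): store timestamped sensor frames, then play them back with the gaps between frames divided by a speed multiplier.

```python
import pickle
import time

def record(frames, path):
    """Save a list of (timestamp, sensor_frame) pairs to disk."""
    with open(path, "wb") as f:
        pickle.dump(frames, f)

def replay(path, speedup=10.0):
    """Yield recorded frames, sleeping only 1/speedup of the original gaps.

    With speedup=float('inf') the log is replayed as fast as it can be read,
    which is how recorded experience escapes the "speed of reality" -- but
    only for passive prediction from the log, not for new interaction.
    """
    with open(path, "rb") as f:
        frames = pickle.load(f)
    prev_t = None
    for t, frame in frames:
        if prev_t is not None and speedup != float("inf"):
            time.sleep((t - prev_t) / speedup)
        prev_t = t
        yield frame
```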

psb217 3 days ago | parent | next [-]

I think there's an implicit assumption here that interaction with the world is critical for effective learning. In that case, you're bottlenecked by the speed of the world... when learning with a single agent. One neat thing about artificial computational agents, in contrast to natural biological agents, is that they can share the same brain and share lived experience, so the "speed of reality" bottleneck is much less of an issue.
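
In RL terms this is the usual distributed-actor pattern: many agents, one shared policy and one shared replay buffer. A toy sketch in Python (ToyEnv and the hard-coded policy are stand-ins invented for illustration):

```python
import random
from collections import deque

class SharedReplayBuffer:
    """One buffer that many agents write into and one learner samples from."""
    def __init__(self, capacity=1_000_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

class ToyEnv:
    """Stand-in for the real world: a 1-D random walk the agent nudges."""
    def __init__(self):
        self.state = 0.0

    def reset(self):
        self.state = 0.0
        return self.state

    def step(self, action):
        self.state += action + random.gauss(0, 0.1)
        reward = -abs(self.state)        # reward for staying near the origin
        return self.state, reward

def run_actor(env, policy, buffer, steps):
    """One agent interacting with its own copy of the world at world speed."""
    obs = env.reset()
    for _ in range(steps):
        action = policy(obs)
        next_obs, reward = env.step(action)
        buffer.add((obs, action, reward, next_obs))
        obs = next_obs

# Many actors (threads, processes, or physical robots) share one buffer and
# one set of weights; the learner can consume the pooled experience far
# faster than any single agent can generate it.
shared = SharedReplayBuffer()
for _ in range(8):                           # 8 actors sharing one "brain"
    run_actor(ToyEnv(), lambda s: -0.1 * s, shared, steps=100)
batch = shared.sample(64)
```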

HappMacDonald 2 days ago | parent | next [-]

Yeah, I'm envisioning putting a thousand simplistic robotic "infants" into a vast "playpen" to gather sensor data about their environment, for some (probably smaller) number of deep learning models to ingest that input, guess at output strategies (move this servo, rotate this camshaft this far in that direction, etc.), and make predictions about the resulting changes to the input.

In principle, a thousand different deep learning models could all train simultaneously on a thousand different robot experience feeds, not 1-to-1 but many-to-many: each neural net training on data from dozens or hundreds of the robots at the same time, and different neural nets sharing those feeds for their own rounds of training.

Then, of course, all of the input data, paired with the outputs that were tested and the subsequent inputs that serve as ground truth for the predictions, can be recorded for continued training sessions after the fact.
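
A toy sketch of that many-to-many wiring (all names here are hypothetical, and the learner is a stub): each learner subscribes to a random subset of robot feeds, feeds are shared between learners, and every transition also lands in a log for later offline training.

```python
import random

NUM_ROBOTS = 1000
NUM_LEARNERS = 10
FEEDS_PER_LEARNER = 100        # each learner trains on ~100 robot feeds

# Assign each learner a random subset of robot feeds. Feeds are shared,
# so most robots end up serving several learners at once.
subscriptions = {
    learner: set(random.sample(range(NUM_ROBOTS), FEEDS_PER_LEARNER))
    for learner in range(NUM_LEARNERS)
}

experience_log = []            # every transition, kept for offline re-training

class StubLearner:
    """Placeholder for a deep model that predicts the next observation."""
    def train_step(self, obs, action, next_obs):
        pass                   # real code would update the net here

learners = [StubLearner() for _ in range(NUM_LEARNERS)]

def on_robot_step(robot_id, obs, action, next_obs):
    """Route one robot's (observation, action, outcome) to its subscribers
    and record it so it can be replayed after the fact."""
    experience_log.append((robot_id, obs, action, next_obs))
    for learner_id, feeds in subscriptions.items():
        if robot_id in feeds:
            learners[learner_id].train_step(obs, action, next_obs)

on_robot_step(robot_id=42, obs=0.0, action=+1, next_obs=0.9)
```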

csullivan107 2 days ago | parent | next [-]

Never thought I’d get to do this, but this was my master’s research! Simulations are inherently limited, and I just got tired of robotics research being done only in simulation. So I built a novel soft robot (notoriously difficult to control) and got it to learn by playing!!

Here is an informal talk I gave on my work. Let me know if you want the thesis.

https://www.youtube.com/live/ZXlQ3ppHi-E?si=MKcRqoxmEra7Zrt5

rybosome 2 days ago | parent | prev | next [-]

A very interesting idea. I am curious about this sharing and blending of the various nets; I wonder if something as naive as averaging the weights (assuming the neural nets all have the same dimensions) would actually accomplish that?
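
For identical architectures the naive version of that is element-wise parameter averaging, which is roughly what federated averaging does. A sketch in PyTorch, assuming models that share a state_dict layout:

```python
import copy
import torch
import torch.nn as nn

def average_models(models):
    """Element-wise mean of the parameters of identically shaped models."""
    avg = copy.deepcopy(models[0])
    state = avg.state_dict()
    for key in state:
        if state[key].is_floating_point():
            state[key] = torch.stack(
                [m.state_dict()[key].float() for m in models]
            ).mean(dim=0)
        # non-float buffers (e.g. BatchNorm step counters) keep model 0's value
    avg.load_state_dict(state)
    return avg

# Usage: three nets with the same architecture (and ideally a shared init)
nets = [nn.Linear(16, 4) for _ in range(3)]
merged = average_models(nets)
```

The usual caveat is that this only tends to work when the nets started from the same initialization or otherwise sit in the same loss basin; independently trained nets usually don't average into anything useful, because equivalent solutions can have their hidden units permuted differently.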

loa_in_ 2 days ago | parent | prev [-]

But the playpen will contain objects that are inherently breakable. You cannot rough-handle the glass vessel and have it too.

HappMacDonald a day ago | parent | next [-]

Basically everything applicable to the playpen of a human baby is applicable to the playpen of an AI robot baby in this setup, to at least some degree.

Perhaps the least applicable part is that a "robot hurting itself" carries only the liability of replacing the broken robot part, vs. the potentially immeasurable cost of a human infant injuring themselves.

If it's not a good idea to put a "glass vessel" in a human crib (strictly from an "I don't want the glass vessel to be damaged" sense) then it's not a good idea to put that in the robot-infant crib either.

Give them something less expensive to repair, like a stack of blocks instead. :P

m-s-y 2 days ago | parent | prev [-]

The world is breakable. Any model based on it will need to know this anyway. Am I missing your argument?

devenson 2 days ago | parent [-]

Can't reset state after breakage.

hackyhacky 2 days ago | parent | prev [-]

> In that case, you're bottlenecked by the speed of the world

Why not have the AI train on a simulation of the real world? We can build those pretty easily using traditional software and run them at any speed we want.
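
That's essentially the standard sim-to-real recipe: step the simulator as fast as the CPU allows instead of at wall-clock speed, with the caveat that the agent only learns about whatever the simulator actually models. A minimal sketch using Gymnasium's CartPole as a stand-in for a richer physics simulator:

```python
import time
import gymnasium as gym

env = gym.make("CartPole-v1")        # stand-in for a richer world model
obs, info = env.reset(seed=0)

start = time.perf_counter()
steps = 0
for _ in range(100_000):
    action = env.action_space.sample()               # random policy placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    steps += 1
    if terminated or truncated:
        obs, info = env.reset()
elapsed = time.perf_counter() - start

# CartPole's physics step is 0.02 simulated seconds, so comparing simulated
# time against wall-clock time shows the speedup over the real world.
print(f"{steps} steps in {elapsed:.1f}s "
      f"(~{steps * 0.02 / elapsed:.0f}x faster than real time)")
```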

otodus 2 days ago | parent | prev [-]

How would you handle olfactory and proprioceptive data?