coldtea 3 days ago
> there is really only one usable dataset: the world itself, which cannot be compacted or fed into a computer at high speed.

Why wouldn't it be? If the world is ingested via video and lidar sensors, what's the hangup in recording that input and then replaying it faster?
psb217 3 days ago
I think there's an implicit assumption here that interaction with the world is critical for effective learning. In that case, you're bottlenecked by the speed of the world... when learning with a single agent. One neat thing about artificial computational agents, in contrast to natural biological agents, is that they can share the same brain and share lived experience, so the "speed of reality" bottleneck is much less of an issue.
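[A minimal sketch of the "shared brain, shared experience" idea psb217 describes, in the spirit of distributed RL setups: several actors each step their own copy of a slow environment, but all of them query one shared policy and write into one shared replay buffer, so experience accumulates several times faster than any single environment can run. Everything here, ToyEnv, SharedPolicy, the threading layout, is a hypothetical illustration, not any particular framework's API.]

    # Toy "shared brain / shared experience" setup (hypothetical, not any
    # specific library's API): four actor threads each run their own slow
    # environment, but they all query one policy and append to one replay
    # buffer, so the learner sees ~4x the experience per wall-clock second.
    import random
    import threading
    import time
    from collections import deque

    class ToyEnv:
        """Stand-in for 'the world': slow, sequential, one step at a time."""
        def reset(self):
            self.state = random.uniform(-1.0, 1.0)
            return self.state

        def step(self, action):
            # Hypothetical dynamics: reward for matching the sign of the state.
            reward = 1.0 if (action > 0) == (self.state >= 0) else -1.0
            self.state = random.uniform(-1.0, 1.0)
            return self.state, reward

    class SharedPolicy:
        """One 'brain' shared by every actor; a single scalar weight here."""
        def __init__(self):
            self.w = 0.0
            self.lock = threading.Lock()

        def act(self, state):
            with self.lock:
                return 1 if state * self.w >= 0 else -1

        def update(self, batch, lr=0.01):
            # Toy update rule: reinforce (state, action) pairs that paid off.
            with self.lock:
                for state, action, reward in batch:
                    self.w += lr * reward * action * state

    replay = deque(maxlen=10_000)   # shared "lived experience"
    replay_lock = threading.Lock()
    policy = SharedPolicy()

    def actor(steps=2_000):
        env = ToyEnv()
        state = env.reset()
        for _ in range(steps):
            action = policy.act(state)
            next_state, reward = env.step(action)
            with replay_lock:
                replay.append((state, action, reward))
            state = next_state

    def learner(updates=500, batch_size=32):
        for _ in range(updates):
            with replay_lock:
                ready = len(replay) >= batch_size
                batch = random.sample(list(replay), batch_size) if ready else None
            if batch is None:
                time.sleep(0.001)   # wait for the actors to fill the buffer
                continue
            policy.update(batch)

    # Four actors gather experience in parallel while one learner trains on it.
    threads = [threading.Thread(target=actor) for _ in range(4)]
    threads.append(threading.Thread(target=learner))
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"learned weight: {policy.w:.3f}, transitions collected: {len(replay)}")

[The point is only structural: because the actors share parameters and a replay buffer, wall-clock data collection scales with the number of environments, which is the argument for why the "speed of reality" matters less to agents that can pool their experience.]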
otodus 2 days ago
How would you handle olfactory and proprioceptive data?