ornornor 4 days ago:
To me, understanding the world requires experiencing reality. LLMs don't experience anything; they're just a program. You can argue that living things are also just following a program, but the difference is that they (and I include humans in this) experience reality.
perching_aix 4 days ago:
But they do experience something: their training data, their pseudo-randomness source, and your prompts. To put it in perspective, suppose you're training a multimodal model, with training data on the terabyte scale and training time on the scale of weeks. Let's be optimistic and assume 10 TB in just one week: that averages out to 16.5 MB/s of throughput. Compare this to the human experience. VR headsets are aiming for roughly 4K@120 per eye these days; at 24-bit SDR that works out to about 6 GB/s across both eyes, and that's just vision. We're so far from "realtime" with that optimistic 16.5 MB/s, it's not even funny. Of course, the experiencing and understanding that result from this will be vastly different. It's a borderline miracle it's human-aligned at all. Well, if we ignore lossy compression and aggressive image and video resizing, that is.
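The back-of-the-envelope comparison above can be sketched in a few lines of Python. The figures are the comment's own assumptions (10 TB of training data consumed over one week; 3840x2160 per eye at 120 fps, 24-bit SDR), not measurements of any real system:

```python
# Training-data throughput: 10 TB over one week.
SECONDS_PER_WEEK = 7 * 24 * 3600        # 604,800 s
training_bytes = 10e12                   # 10 TB, optimistic assumption
training_rate = training_bytes / SECONDS_PER_WEEK
print(f"training: {training_rate / 1e6:.1f} MB/s")   # ~16.5 MB/s

# Vision-only throughput at VR-headset targets.
pixels_per_eye = 3840 * 2160             # "4K" per eye
fps = 120
bytes_per_pixel = 3                      # 24-bit SDR
vision_rate = pixels_per_eye * fps * bytes_per_pixel * 2  # both eyes
print(f"vision:   {vision_rate / 1e9:.1f} GB/s")     # ~6.0 GB/s

# How far from "realtime" the training stream is, under these assumptions.
print(f"ratio:    ~{vision_rate / training_rate:.0f}x")
```

Under these assumptions the uncompressed visual stream alone outpaces the training stream by a few hundred to one, which is the point the comment is making.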
| ||||||||
CamperBob2 4 days ago:
"(and I include humans in this) experience reality."

A fellow named Plato had some interesting thoughts on that subject that you might want to look into.