versavolt 6 hours ago

Those are for video. AI chat workflows use a fraction of the data.

bastawhiz 2 hours ago

That's silly on so many levels.

1. The latency is going to be insane.

2. AI video exists.

3. VLMs (vision-language models) exist and take video and images as input.

4. When a new model checkpoint needs to go up, are we supposed to wait months for it to transfer?

5. A one-million-token context window is ~4 MB. Terrestrially that transfers in a few milliseconds; over the distances in question, even assuming zero packet loss, it takes many seconds (back-of-envelope sketch after this list).

6. You're not using TCP for this, because the round-trip time is so high. And without a bidirectional connection, you can't cancel jobs when a user disconnects.

7. How do you scale this? How many megabits per second has anyone actually sustained over the distances in question? We literally don't know how to get more than double-digit megabits per second to something not in our orbit, let alone a data center's worth of throughput.
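To put rough numbers on points 1, 4, 5, and 7, here's a quick back-of-envelope sketch in Python. The link rate (10 Mbit/s, the double-digit ceiling from point 7), the distances, and the 500 GB checkpoint size are all assumptions for illustration, not measurements:

    # Back-of-envelope math for an off-world inference link.
    # Every link parameter here is an assumption, not a measurement.

    C = 299_792_458        # speed of light, m/s
    MOON_M = 3.844e8       # Earth-Moon distance, m
    MARS_MIN_M = 5.46e10   # Earth-Mars distance at closest approach, m
    LINK_BPS = 10e6        # assumed 10 Mbit/s sustained throughput

    def rtt_s(distance_m):
        """Physics floor on round-trip time; no protocol can beat this."""
        return 2 * distance_m / C

    def transfer_s(size_bytes, bps=LINK_BPS):
        """Time to push a payload through the assumed link, ignoring loss."""
        return size_bytes * 8 / bps

    print(f"RTT, lunar distance:   {rtt_s(MOON_M):7.1f} s")        # ~2.6 s
    print(f"RTT, Mars best case:   {rtt_s(MARS_MIN_M):7.1f} s")    # ~364 s
    print(f"4 MB context window:   {transfer_s(4e6):7.1f} s")      # ~3.2 s
    # A hypothetical 500 GB model checkpoint over the same link:
    print(f"500 GB checkpoint:     {transfer_s(500e9)/86400:7.1f} days")  # ~4.6 days

Even under these charitable assumptions (zero loss, sustained throughput), the checkpoint alone is days of transfer, and the round-trip floor is set by physics, not engineering.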

versavolt 2 hours ago

Grok doesn’t take video as far as I know. I don’t think it’s so absurd. I don’t know how you scale it, but otherwise it seems pretty straightforward.