mikepavone 9 hours ago

> When the network is bad, you get... fewer JPEGs. That’s it. The ones that arrive are perfect.

This would make sense... if they were using UDP, but they are using TCP. All the JPEGs they send will get there eventually (unless the connection drops). JPEG does not fix your buffering and congestion control problems. What presumably happened here is that, in the way they implemented their JPEG screenshots, they have some mechanism that minimizes the number of frames that are in flight. This is not some inherent property of JPEG though.

> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB. We’re sending LESS data per frame AND getting better reliability.

h.264 has better coding efficiency than JPEG. For a given target size, you should be able to get better quality from an h.264 IDR frame than a JPEG. There is no fixed size to an IDR frame.

Ultimately, the problem here is a lack of bandwidth estimation (apart from the sort of binary "good network"/"cafe mode" thing they ultimately implemented). To be fair, this is difficult to do and being stuck with TCP makes it a bit more difficult. Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.
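Something in that spirit (not anything from the article - just a rough sketch, assuming the receiver acks each frame over the same connection so the sender can see per-frame transmission latency; the class name, thresholds and limits are all made up):

    // Crude latency-based rate control: rising transmission latency is read as
    // congestion (queues filling up), so the target bitrate is backed off.
    class RateController {
      private bitrateKbps = 4000;     // seeded from an initial bandwidth probe
      private baselineMs = Infinity;  // best (uncongested) latency seen so far

      onFrameAcked(sentAtMs: number, ackedAtMs: number): void {
        const latencyMs = ackedAtMs - sentAtMs;
        this.baselineMs = Math.min(this.baselineMs, latencyMs);
        if (latencyMs > this.baselineMs * 1.5) {
          // Latency is climbing: back off (and drop frame rate if quality suffers).
          this.bitrateKbps = Math.max(250, this.bitrateKbps * 0.8);
        } else {
          // Latency near baseline: cautiously probe for more bandwidth.
          this.bitrateKbps = Math.min(20000, this.bitrateKbps * 1.05);
        }
      }

      targetBitrateKbps(): number {
        return this.bitrateKbps;
      }
    }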

WebRTC will do this for you if you can use it, which actually suggests a different solution to this problem: use websockets for networks with dumb corporate firewall rules and just use WebRTC for everything else.

auxiliarymoose 8 hours ago | parent | next [-]

They shared the polling code in the article. It doesn't request another jpeg until the previous one finishes downloading. UDP is not necessary to write a loop.
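Something like this (not their actual code - just a sketch of the pattern, with a made-up /screenshot.jpg endpoint):

    // At most one frame is ever in flight: the next request only goes out once
    // the previous JPEG has fully downloaded and been displayed.
    async function pollFrames(img: HTMLImageElement): Promise<void> {
      for (;;) {
        const res = await fetch(`/screenshot.jpg?t=${Date.now()}`);
        const blob = await res.blob();          // wait for the complete body
        const previous = img.src;
        img.src = URL.createObjectURL(blob);    // show the new frame
        if (previous.startsWith("blob:")) URL.revokeObjectURL(previous);
      }
    }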

mikepavone 7 hours ago | parent | next [-]

> They shared the polling code in the article. It doesn't request another jpeg until the previous one finishes downloading.

You're right, I don't know how I managed to skip over that.

> UDP is not necessary to write a loop.

True, but this doesn't really have anything to do with using JPEG either. They basically implemented a primitive form of rate control by only allowing a single frame to be in flight at once. It was easier for them to do that using JPEG because they (by their own admission) seem to have limited control over their encode pipeline.

londons_explore 6 hours ago | parent [-]

> have limited control over their encode pipeline.

Frustratingly this seems common in many video encoding technologies. The code is opaque, often has special kernel, GPU and hardware interfaces which are often closed source, and by the time you get to the user API (native or browser) it seems all knobs have been abstracted away and simple things like choosing which frame to use as a keyframe are impossible to do.

I had what I thought was a simple usecase for a video codec - I needed to encode two 30 frame videos as small as possible, and I knew the first 15 frames were common between the videos so I wouldn't need to encode that twice.

I couldn't find a single video codec which could do that without extensive internal surgery to save all internal state after the 15th frame.

orisho 5 hours ago | parent | next [-]

A 15-frame min and max GOP size would do the trick; then you'd get two 15-frame GOPs. Each GOP can be concatenated with another GOP with the same properties (resolution, format, etc.) as if they were independent streams. So there is actually a way to do this. This is how video splitting and joining without re-encoding works: at GOP boundaries.
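With ffmpeg/libx264, for example, it looks roughly like this (the GOP flags are the standard ffmpeg ones; file names are made up):

    import { execFileSync } from "child_process";

    // Force fixed 15-frame GOPs so every segment starts on an IDR frame and
    // segments with identical stream parameters can later be joined losslessly.
    execFileSync("ffmpeg", [
      "-i", "prefix_plus_tail_a.mp4",
      "-c:v", "libx264",
      "-g", "15",            // maximum GOP length
      "-keyint_min", "15",   // minimum GOP length
      "-sc_threshold", "0",  // don't insert extra keyframes on scene cuts
      "segment_a.mp4",
    ]);

    // Joining at GOP boundaries without re-encoding:
    //   ffmpeg -f concat -safe 0 -i segments.txt -c copy joined.mp4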

londons_explore 5 hours ago | parent [-]

In my case, bandwidth really mattered, so I wanted it all in one GOP.

Ended up making a bunch of patches to libx264 to do it, but the compute cost of all the encoding on CPU is crazy high. On the decode side (which runs on consumer devices), we just make the user decode the prefix many times.

6r17 2 hours ago | parent | prev | next [-]

I wonder if we could scan / test / dig out these hidden features somehow; like in a scraping / fuzzing fashion.

Sesse__ 6 hours ago | parent | prev [-]

> I couldn't find a single video codec which could do that without extensive internal surgery to save all internal state after the 15th frame.

fork()? :-)

But most software, video codec or not, simply isn't written to serialize its state at arbitrary points. Why would it?

londons_explore 5 hours ago | parent [-]

A word processor can save its state at an arbitrary point... That's what the save button is for, and it's functional at any point in the document writing process!

In fact, nearly everything in computing is serializable - or if it isn't, there is some other project with a similar purpose which is.

However, this is not the case with video codecs - and this is just one of many examples of where the video codec landscape is limiting.

Another example is that on the internet lots of videos have a 'poster frame' - often the first frame of the video. For nearly all use cases that frame ends up downloaded twice - once as a jpeg, and again inside the video content. There is no reasonable way to avoid that - but doing so would reduce the latency to play videos by quite a lot!

cma 6 hours ago | parent | prev [-]

So for US->Australia/Asia, wouldn't that limit you to 6fps or so due to the half-RTT? Each time a frame finishes arriving, it takes 150ms or so just for your new request to reach the sender.
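(Rough numbers, just to sanity-check the order of magnitude:)

    // Back-of-envelope: with the request/response loop fully serialized, each
    // frame costs at least one full RTT plus its transfer time.
    const rttMs = 170;      // very roughly US <-> Australia round trip
    const transferMs = 20;  // a ~100-150KB JPEG on a reasonable link
    console.log(1000 / (rttMs + transferMs)); // => ~5 frames per second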

littlestymaar 5 hours ago | parent [-]

That sounds fine for most screen-sharing use cases.

nazgul17 21 minutes ago | parent | prev | next [-]

Regarding the encoding efficiency, I imagine the problem is that the compromise in quality shows in the space dimension (i.e. fewer or blurrier pixels) rather than in time. Users need to read text clearly, so a compromise in the time dimension (fewer frames) sounds just fine.

eichin 8 hours ago | parent | prev | next [-]

Probably either (1) they don't request another jpeg until they have the previous one on-screen (so everything is completely serialized and there are no frames "in-flight" ever), or (2) they're doing a fresh GET for each one and getting a new connection anyway (unless that kind of thing is pipelined these days? In which case it still falls back to (1) above.)

01HNNWZ0MV43FF 8 hours ago | parent [-]

You can still get this backpressure properly even if you're doing it push-style. The TCP socket will eventually fill up its buffer and start blocking your writes. When that happens, you stop encoding new frames until the socket is able to send again.

The trick is to not buffer frames on the sender.
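A rough sketch of that in Node (encodeLatestFrame() is a made-up stand-in for whatever grabs and encodes the current screen contents; Node surfaces the "blocking" as write() returning false plus a 'drain' event):

    import * as net from "net";

    // Push-style backpressure: nothing is queued on the sender. When write()
    // reports the socket buffer is full, stop encoding until 'drain' fires, so
    // stale frames are never produced, let alone buffered.
    function streamFrames(socket: net.Socket, encodeLatestFrame: () => Buffer): void {
      const sendNext = (): void => {
        const hasRoom = socket.write(encodeLatestFrame());
        if (hasRoom) {
          setImmediate(sendNext);         // buffer has room: keep encoding
        } else {
          socket.once("drain", sendNext); // buffer full: pause encoding
        }
      };
      sendNext();
    }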

mikepavone 7 hours ago | parent [-]

You probably won't get acceptable latency this way since you have no control over buffer sizes on all the boxes between you and the receiver. Buffer bloat is a real problem. That said, yeah if you're getting 30-45 seconds behind at 40 Mbps you've probably got a fair bit of sender-side buffering happening.

chrisweekly 4 hours ago | parent | prev [-]

Related tangent: it's remarkable to me how a given jpeg can be literally visually indistinguishable from another (by a human on a decent monitor) yet consist of 10-15% as many bytes. I got pretty deep into web performance and image optimization in the late 2000s and it was gratifying to have so much low-hanging fruit.