pipo234 8 hours ago

I understand some of the appeal of grpc, but resumable uploads and download offsets have long been part of plain HTTP (e.g. RFC 7233).

Relying on HTTP has the advantage that you can leverage commodity infrastructure like caching proxies and CDNs.

Why push protobuf over http when all you need is present in http already?
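To make the point concrete, here is a minimal sketch of a resumable download using plain HTTP Range/Content-Range headers (RFC 7233). The tiny in-process server and the `resume_download` helper are illustrative assumptions, not real infrastructure; a production server would also handle `If-Range`, multiple ranges, and validation.

```python
# Sketch: resuming a download with a Range request (RFC 7233).
# The local test server and helper names are invented for illustration.
import http.server
import threading
import urllib.request

PAYLOAD = b"0123456789" * 100  # 1000 bytes of fake file content


class RangeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        start = 0
        rng = self.headers.get("Range")
        if rng and rng.startswith("bytes="):
            start = int(rng[len("bytes="):].split("-")[0])
            self.send_response(206)  # Partial Content
            self.send_header(
                "Content-Range",
                f"bytes {start}-{len(PAYLOAD) - 1}/{len(PAYLOAD)}")
        else:
            self.send_response(200)
        body = PAYLOAD[start:]
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


def resume_download(url, already_have):
    """Fetch the rest of a file, given the bytes we already hold."""
    req = urllib.request.Request(url)
    if already_have:
        req.add_header("Range", f"bytes={len(already_have)}-")
    with urllib.request.urlopen(req) as resp:
        rest = resp.read()
        if resp.status == 200:  # server ignored Range: full body
            return rest
    return already_have + rest


server = http.server.HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/file"

partial = PAYLOAD[:400]  # pretend the first attempt died at byte 400
full = resume_download(url, partial)
server.shutdown()
assert full == PAYLOAD
```

The client side is just one extra header; that is the commodity-infrastructure argument in miniature.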

avianlyric 8 hours ago | parent | next [-]

Because you may already have robust and sensible gRPC infrastructure set up and working, and setting up the correct HTTP infrastructure to take advantage of all the benefits that plain old HTTP provides may not be worth it.

If moving big files around is a major part of the system you’re building, then it’s worth the effort. But if you’re only occasionally moving big files around, then reusing your existing gRPC infrastructure is likely preferable. Keeps your systems nice and uniform, which makes it easier to understand later once you’ve forgotten what you originally implemented.

pipo234 8 hours ago | parent | next [-]

Simplicity makes sense, of course. I just hadn't considered a grpc-only world. But I guess that makes sense in today's Kubernetes/node/python/llm world where grpc is the glue that once was SOAP (or even CORBA).

Still, stateful protocols have a tendency to bite when you scale up. HTTP is specifically designed to be stateless, so you get scalability for free as long as you stick with plain GET requests...

jayd16 6 hours ago | parent | prev | next [-]

gRPC runs over http. What infra would be missing?

If you happen to be on ASP.NET or Spring Boot it's some boilerplate to stand up plain HTTP and gRPC endpoints side by side, but I guess you could be running something more exotic than that.

hpdigidrifter 5 hours ago | parent [-]

http/2 is nothing like http/1

feel free to put them both behind load balancers and see how you go

a-dub 8 hours ago | parent | prev [-]

this.

also, http/s compatibility falls off in the long tail of functionality. i've seen cache layers fail to properly implement restartable http.

that said, making long transfers actually restartable, robust and reliable is a lot more work than is presented here.
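As a sketch of that extra work: a restartable transfer is not just "retry with an offset" but also bounding the retries and verifying that the server content didn't change between attempts. The `fetch_chunk` callback and function names below are hypothetical, assuming an offset-addressable source and a known content digest.

```python
# Hedged sketch of a robust restartable transfer: resume from the current
# offset on failure, cap retries, and verify a digest at the end.
# fetch_chunk is a hypothetical callback: fetch_chunk(offset) -> bytes.
import hashlib


def robust_download(fetch_chunk, total_size, expected_sha256, max_retries=5):
    buf = bytearray()
    retries = 0
    while len(buf) < total_size:
        try:
            buf += fetch_chunk(len(buf))  # resume from current offset
        except IOError:
            retries += 1
            if retries > max_retries:
                raise
    # If the server's copy changed mid-resume, the stitched-together
    # bytes are silently corrupt; only an end-to-end digest catches it.
    if hashlib.sha256(buf).hexdigest() != expected_sha256:
        raise ValueError("digest mismatch: content changed between resumes")
    return bytes(buf)
```

Cache layers that mishandle Range requests tend to fail exactly at that last step: the resumed bytes come from a different object version, and without the digest check nobody notices.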

chasil 5 hours ago | parent [-]

I see that QUIC file transfer protocols are available, including a Microsoft SMB implementation.

These would be the ultimate in resumability and mobility between networks, assuming that they exploit the protocol to the fullest.

sluongng 7 hours ago | parent | prev [-]

An evolving schema is much more attractive than a bunch of plain-text HTTP headers when you want to communicate additional metadata with the file download/upload.

For example, there are common metadata fields such as the digest (hash) of the blob, the compression algorithm, the base compression dictionary, whether Reed-Solomon is applicable or not, etc...
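To illustrate the contrast: the dataclass below stands in for the kind of typed message a protobuf schema would give you, versus flattening the same data into stringly-typed custom headers. All field and header names here are invented, not any real protocol's.

```python
# Illustrative only: typed blob metadata (what a schema carries natively)
# versus the same data squeezed into ad-hoc HTTP header strings.
from dataclasses import dataclass
from typing import Optional


@dataclass
class BlobMetadata:  # hypothetical message; field names are invented
    digest_sha256: str
    compression: str                # e.g. "zstd"
    dictionary_id: Optional[int]    # base compression dictionary, if any
    reed_solomon: bool


def to_http_headers(m: BlobMetadata) -> dict:
    """The schema flattened into stringly-typed custom headers."""
    h = {
        "X-Blob-Digest": f"sha256:{m.digest_sha256}",
        "X-Blob-Compression": m.compression,
        "X-Blob-Reed-Solomon": str(m.reed_solomon).lower(),
    }
    if m.dictionary_id is not None:
        h["X-Blob-Dictionary"] = str(m.dictionary_id)
    return h
```

With a schema, adding a field is a versioned, typed change; with headers, every producer and consumer re-implements the string parsing and the optional-field conventions by hand.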

And like others have pointed out, having existing grpc infrastructure in place definitely makes using it a lot easier.

But yeah, it's a tradeoff.