kentonv 4 days ago

Of course, all programming language APIs, even in dynamic languages, have some implied type (aka schema). You can't write code against an API without knowing what methods it provides, what their inputs and outputs are, etc. -- and that's a schema, whether or not it's actually written out as such.

But Cap'n Web itself does not need to know about any of that. Cap'n Web just accepts whatever method call you make, sends it to the other end of the connection, and attempts to deliver it. The protocol itself has no idea if your invocation is valid or not. That's what I mean by "schemaless" -- you don't need to tell Cap'n Web about any schemas.
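That "forward whatever call you make" idea can be sketched with a plain JavaScript `Proxy`. This is illustrative only -- the types, the `{ method, args }` message shape, and the helper names below are made up for the sketch, not Cap'n Web's actual wire format or implementation:

```typescript
// Sketch of the "schemaless" idea: a stub that forwards ANY method call
// as a plain { method, args } message, without validating it against a schema.
type Transport = (msg: { method: string; args: unknown[] }) => unknown;

function makeStub<T extends object>(send: Transport): T {
  return new Proxy({} as T, {
    get(_target, prop) {
      // Any property access becomes a callable that ships the call over the wire.
      return (...args: unknown[]) => send({ method: String(prop), args });
    },
  });
}

// A toy "server" side that dispatches to whatever object it wraps.
// Only here, at delivery time, can a bad method name fail.
function makeDispatcher(impl: Record<string, (...args: any[]) => unknown>): Transport {
  return ({ method, args }) => {
    const fn = impl[method];
    if (typeof fn !== "function") throw new Error(`no such method: ${method}`);
    return fn(...args);
  };
}

const api = makeStub<{ add(a: number, b: number): number }>(
  makeDispatcher({ add: (a: number, b: number) => a + b })
);
console.log(api.add(2, 3)); // the stub never knew "add" existed until it was called
```

The transport layer here is schema-blind by construction: it can't reject a call it doesn't understand, it can only fail to deliver it.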

With that said, I strongly recommend using TypeScript with Cap'n Web. As always, your TypeScript types are used for build-time type checking, but are then erased before runtime. So Cap'n Web at runtime doesn't know anything about your TypeScript types.
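Type erasure is easy to see directly. The interface name below is made up for illustration -- the point is just that the "schema" has no runtime representation at all:

```typescript
// The schema lives only in TypeScript's type system.
interface Calculator {
  add(a: number, b: number): number;
}

// After compilation, `Calculator` is gone entirely: there is no runtime
// value to inspect, and a cast compiles to nothing.
const stub = {} as Calculator;
console.log(typeof (stub as any).add); // "undefined" -- no runtime trace of the type
```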

random3 4 days ago | parent [-]

Thank you. So indeed it's, as correctly described, schemaless, i.e. schema-agnostic, which falls under "schema responsibility being passed to the user/dev" (I should have picked up on what that meant when writing it).

So it's basically Stubby/gRPC.

From a strictly RPC perspective this makes sense (I guess to the same degree that gRPC is agnostic to the protobuf serialization scheme, which IIRC is the case; I also think Stubby was called that for the same reason).

However, that would mean there's

1. a ton of responsibility on the user/dev, i.e. the same amount that prompted protobuf to exist, after all.

You basically have the (independent) problem of clients, servers, and data (in flight, or even persisted) ending up with different versions of the schema.

2. a missed implicit compression opportunity? IDK to what extent this actually happens on the fly.

kentonv 4 days ago | parent [-]

> So it's basically Stubby/gRPC.

Stubby / gRPC do not support object capabilities, though. I know that's not what you meant but I have to call it out because this is a huuuuuuuge difference between Cap'n Proto/Web vs. Stubby/gRPC.

> a ton of responsibility on the user/dev, i.e. the same amount that prompted protobuf to exist, after all.

In practice, people should use TypeScript to specify their Cap'n Web APIs. For people working in TypeScript to start with, this is much nicer than having to learn a separate schema format. And the protocol evolution / compatibility problem becomes the same as evolving a JavaScript library API with source compatibility, which is well-understood.
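The evolution rules end up being the familiar JavaScript library ones: add optional parameters and optional fields, never remove or repurpose existing ones. A sketch, with interface names made up for illustration:

```typescript
// v1 of a hypothetical API.
interface UserApiV1 {
  getUser(id: string): Promise<{ name: string }>;
}

// v2: the new parameter and the new field are both optional, so old
// callers still type-check and still work at runtime.
interface UserApiV2 {
  getUser(
    id: string,
    opts?: { includeEmail?: boolean }
  ): Promise<{ name: string; email?: string }>;
}

const impl: UserApiV2 = {
  async getUser(id, opts) {
    return opts?.includeEmail
      ? { name: "Ada", email: "ada@example.com" }
      : { name: "Ada" };
  },
};

// A v2 implementation is structurally compatible with the v1 interface,
// so a client built against v1 keeps working unchanged.
const v1Client: UserApiV1 = impl;
```

This is ordinary TypeScript structural typing doing the compatibility check that a schema language's evolution rules would otherwise do.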

> a missed implicit compression opportunity? IDK to what extent this actually happens on the fly.

Don't get me wrong, I love binary protocols for their efficiency.

But there are a bunch of benefits to just using JSON under the hood, especially in a browser.

Note that WebSocket in most browsers will automatically negotiate compression, where the compression context is preserved over the whole connection (not just one message at a time), so if you are sending the same property names a lot, they will be compressed out.
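You can get a rough feel for why the connection-wide context matters by comparing deflate with and without it. This sketch only approximates permessage-deflate with context takeover (by concatenating messages into one stream); in a real browser the WebSocket negotiates this for you:

```typescript
import { deflateSync } from "node:zlib";

// Many JSON messages that repeat the same property names.
const msg = JSON.stringify({ firstName: "Ada", lastName: "Lovelace", isAdmin: false });
const messages = Array.from({ length: 100 }, () => msg);

// Each message compressed independently (no shared context):
const separate = messages.reduce((total, m) => total + deflateSync(m).length, 0);

// All messages in one deflate stream (shared context, like one connection
// with context takeover): repeated keys compress down to back-references.
const shared = deflateSync(messages.join("")).length;

console.log({ separate, shared }); // shared is far smaller
```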

Degorath 3 days ago | parent [-]

Not the person you were discussing with, but I have to add that, to me, the main benefit of using Stubby et al. was exactly the schema, which was so nicely searchable.

I currently work in a place where the server-server API clients are generated from TypeScript API method return types, and it's... not great. In practice the types quickly devolve into chains of "extends" over a lot of internal types that are often difficult to reason about.

I know that it's possible for the ProtoBuf types to also push their tendrils quite deep into business code, but my personal experience with that has been a lot less frustrating than with TypeScript return types being generated into an API client.