kentonv | 4 days ago
Of course, all programming language APIs, even in dynamic languages, have some implied type (aka schema). You can't write code against an API without knowing what methods it provides, what their inputs and outputs are, etc. -- and that's a schema, whether or not it's actually written out as such. But Cap'n Web itself does not need to know about any of that. Cap'n Web just accepts whatever method call you make, sends it to the other end of the connection, and attempts to deliver it. The protocol itself has no idea if your invocation is valid or not. That's what I mean by "schemaless" -- you don't need to tell Cap'n Web about any schemas.

With that said, I strongly recommend using TypeScript with Cap'n Web. As always, TypeScript types are used for build-time type checking, but are then erased before runtime. So Cap'n Web at runtime doesn't know anything about your TypeScript types.
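To make the distinction concrete, here is a minimal sketch of the general technique (not Cap'n Web's actual code): a schemaless RPC stub built on a JavaScript Proxy forwards any method name over the wire, while a TypeScript interface supplies compile-time checking that is erased at runtime. The Calculator interface and makeStub helper are hypothetical names for illustration.

    // Minimal sketch: a schemaless RPC stub. The runtime forwards any
    // method call; only TypeScript (at build time) knows the schema.

    interface Calculator {  // hypothetical schema, checked only at build time
      add(a: number, b: number): Promise<number>;
    }

    type SendFn = (method: string, args: unknown[]) => Promise<unknown>;

    function makeStub<T extends object>(send: SendFn): T {
      // The Proxy accepts *any* property access and turns it into a call;
      // the stub itself has no idea which methods the server really has.
      return new Proxy({} as T, {
        get(_target, prop) {
          return (...args: unknown[]) => send(String(prop), args);
        },
      });
    }

    // Usage: the type parameter constrains calls at compile time but is
    // erased before runtime -- the "schemaless" property described above.
    const calc = makeStub<Calculator>(async (method, args) => {
      console.log(`would send over the wire: ${method}(${JSON.stringify(args)})`);
      return 0; // stand-in for whatever the remote end returns
    });

    calc.add(1, 2);          // OK: matches the interface
    // calc.subtract(1, 2);  // build-time error, but the runtime wouldn't care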
random3 | 4 days ago
Thank you. So indeed it's, as correctly described, schemaless, i.e. schema-agnostic, which falls under "schema responsibility being passed to the user/dev" (I should have picked up on what that meant when writing that). So it's basically Stubby/gRPC. From strictly an RPC perspective this makes sense (I guess to the same degree that gRPC is agnostic to protobuf as a serialization scheme, which IIRC is the case -- also thinking Stubby was called that for the same reason). However, that would mean:

1. A ton of responsibility on the user/dev -- i.e. the same amount that prompted protobuf to exist in the first place. You basically have the (independent) problem of clients, servers, and data (in flight, or even persisted) getting different versions of the schema.

2. A missed implicit compression opportunity? IDK to what extent this actually happens on the fly or not.