| ▲ | random3 4 days ago |
| It's inspired by and created by a coauthor of [Cap'n Proto](https://capnproto.org), which is also what the OCapN name (referenced in a separate comment) refers to. Cap'n Proto is inspired by ProtoBuf; ProtoBuf has gRPC and gRPC-Web. At my last startup we used ProtoBuf/gRPC/gRPC-Web both in the backends and for public endpoints powering React / TS UIs. It worked great, particularly with the GCP Kubernetes infrastructure: both the API and operational aspects were non-problems. However, navigating the dumpster fire around ProtoBuf, gRPC, and gRPC-Web, with the lack of community leadership from Google, was a clusterfuck. That said, I'm a bit at a loss about the meaning of "schemaless". You can take different approaches wrt schema (see Avro vs ProtoBuf), but you can't fundamentally eschew schemas/types. It's information tied to a communication channel that needs to live somewhere, whether that's explicit, implicit, handled by the RPC layer, passed to the type system, or, worse, pushed all the way to the user/dev. Moreover, schemas tend to evolve, and any protocol needs to take that into account. Historically, ProtoBuf has done a good job managing the various tradeoffs here, but I have no experience using Cap'n Proto; I've mostly seen good things about it, so perhaps I'm just missing something. |
|
| ▲ | kentonv 4 days ago | parent | next [-] |
| Of course, all programming language APIs even in dynamic languages have some implied type (aka schema). You can't write code against an API without knowing what methods it provides, what their inputs and outputs are, etc. -- and that's a schema, whether or not it's actually written out as such. But Cap'n Web itself does not need to know about any of that. Cap'n Web just accepts whatever method call you make, sends it to the other end of the connection, and attempts to deliver it. The protocol itself has no idea if your invocation is valid or not. That's what I mean by "schemaless" -- you don't need to tell Cap'n Web about any schemas. With that said, I strongly recommend using TypeScript with Cap'n Web. As always, TypeScript schemas are used for build-time type checking, but are then erased before runtime. So Cap'n Web at runtime doesn't know anything about your TypeScript types. |
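The "accept whatever method call you make and attempt to deliver it" behavior described above can be sketched with a plain JavaScript Proxy. This is an illustrative sketch of the schemaless idea, not the actual Cap'n Web API: `makeStub`, `CallMessage`, and the toy in-process "server" are all hypothetical names invented here.

```typescript
// Sketch: a stub that turns ANY method call into a serialized "call"
// message, the way a schemaless RPC layer can forward calls without
// ever seeing a schema for the API.
type CallMessage = { method: string; args: unknown[] };

function makeStub(send: (msg: CallMessage) => unknown) {
  return new Proxy({}, {
    get(_target, prop: string) {
      // No schema lookup: any property name becomes a forwarded call.
      return (...args: unknown[]) => send({ method: prop, args });
    },
  }) as Record<string, (...args: unknown[]) => unknown>;
}

// A toy "far end" that actually implements the methods. Whether a call
// is valid is discovered only here, at delivery time, not by the protocol.
const impl: Record<string, (...args: any[]) => unknown> = {
  add: (a: number, b: number) => a + b,
};

const stub = makeStub(({ method, args }) => {
  const fn = impl[method];
  if (!fn) throw new Error(`no such method: ${method}`); // fails only at the far end
  return fn(...args);
});

console.log(stub.add(2, 3)); // → 5
```

Build-time TypeScript types can then constrain what you call on the stub, exactly as the comment describes, while the runtime stays oblivious.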
| |
| ▲ | random3 4 days ago | parent [-] | | Thank you. So indeed it's, as correctly described, schemaless, i.e. schema-agnostic, which falls under "schema responsibility being passed to the user/dev" (I should have picked up on what it means when writing that). So it's basically Stubby/gRPC. From a strictly RPC perspective this makes sense (I guess to the same degree gRPC is agnostic to the protobuf serialization scheme, which IIRC is the case; I also think Stubby was called that for the same reason). However, that would mean there's 1. a ton of responsibility on the user/dev, i.e. the same amount that prompted protobuf to exist, after all. You basically have the (independent) problem of clients, servers, and data (in flight, or even persisted) that get different versions of the schema. 2. a missed implicit compression opportunity? IDK to what extent this actually happens on the fly or not. | | |
| ▲ | kentonv 4 days ago | parent [-] | | > So it's basically Stubby/gRPC. Stubby / gRPC do not support object capabilities, though. I know that's not what you meant but I have to call it out because this is a huuuuuuuge difference between Cap'n Proto/Web vs. Stubby/gRPC. > a ton of responsibility on the user/dev, i.e. the same amount that prompted protobuf to exist, after all. In practice, people should use TypeScript to specify their Cap'n Web APIs. For people working in TypeScript to start with, this is much nicer than having to learn a separate schema format. And the protocol evolution / compatibility problem becomes the same as evolving a JavaScript library API with source compatibility, which is well-understood. > a missed implicit compression opportunity? IDK to what extent this actually happens on the fly or not. Don't get me wrong, I love binary protocols for their efficiency. But there are a bunch of benefits to just using JSON under the hood, especially in a browser. Note that WebSocket in most browsers will automatically negotiate compression, where the compression context is preserved over the whole connection (not just one message at a time), so if you are sending the same property names a lot, they will be compressed out. | | |
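The shared-compression-context point can be illustrated with Node's zlib. This is a sketch of the underlying deflate behavior, not the browser's permessage-deflate mechanism itself; the message shape is made up for illustration.

```typescript
import * as zlib from "node:zlib";

// A JSON message with verbose property names, like a schemaless RPC might send.
const msg = JSON.stringify({ userId: 123, userName: "alice", userEmail: "alice@example.com" });

// Compare compressing one copy vs. two copies in the same deflate stream:
// the second copy largely becomes a backreference into the compression window,
// which is the effect a connection-wide compression context gives repeated
// property names across messages.
const oneCopy = zlib.deflateRawSync(Buffer.from(msg)).length;
const twoCopies = zlib.deflateRawSync(Buffer.from(msg + msg)).length;

// The marginal cost of the repeated content is tiny compared to the first copy.
console.log(twoCopies - oneCopy < oneCopy); // → true
```

With per-message compression (no context takeover), each message would pay the full cost of its property names again; preserving the context over the connection is what recovers most of the overhead of self-describing JSON.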
| ▲ | Degorath 3 days ago | parent [-] | | Not the person you were discussing with, but I have to add that to me the main benefit of using Stubby et al. was exactly the schema that was so nicely searchable. I currently work in a place where the server-server API clients are generated based on TypeScript API method return types, and it's... not great. In reality the types quickly devolve into chains of "extends" over a lot of internal types that are often difficult to reason about. I know that it's possible for the ProtoBuf types to also push their tendrils quite deep into business code, but my personal experience has been a lot less frustrating with that than with the TypeScript return type being generated into an API client. |
|
| ▲ | dannyobrien 4 days ago | parent | prev | next [-] |
| I think rather than related to each other, Cap'n and OCapN are both references to object capabilities, aka ocaps. (Insert joke about unforgeable references here) |
|
| ▲ | chrisweekly 4 days ago | parent | prev [-] |
| 100% agreed (as will anyone sane who's tried to use it): grpc-web is a trainwreck. |