lalaithion 7 days ago

Protocol buffers suck but so does everything else. Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards-compatible changes.

Just with those two criteria you’re down to, like, six formats at most, of which Protocol Buffers is the most widely used.

And I know the article says no one uses the backwards compatible stuff but that’s bizarre to me – setting up N clients and a server that use protocol buffers to communicate and then being able to add fields to the schema and then deploy the servers and clients in any order is way nicer than it is with some other formats that force you to babysit deployment order.

The reason why protos suck is because remote procedure calls suck, and protos expose that suckage instead of trying to hide it until you trip on it. I hope the people working on protos, and other alternatives, continue to improve them, but they’re not worse than not using them today.

jitl 7 days ago | parent | next [-]

Not widely used but I like Typical's approach

https://github.com/stepchowfun/typical

> Typical offers a new solution ("asymmetric" fields) to the classic problem of how to safely add or remove fields in record types without breaking compatibility. The concept of asymmetric fields also solves the dual problem of how to preserve compatibility when adding or removing cases in sum types.

rkagerer 6 days ago | parent | next [-]

More direct link to the juicy bit: https://github.com/stepchowfun/typical?tab=readme-ov-file#as...

An asymmetric field in a struct is considered required for the writer, but optional for the reader.

sdenton4 6 days ago | parent [-]

That's a nice idea... But I believe the design direction of protocol buffers was to make everything `optional`, because `required` tends to bite you later when you realize it should actually be optional.

bilkow 6 days ago | parent [-]

My understanding is that asymmetric fields provide a migration path in case that happens, as stated in the docs:

> Unlike optional fields, an asymmetric field can safely be promoted to required and vice versa.

> [...]

> Suppose we now want to remove a required field. It may be unsafe to delete the field directly, since then clients might stop setting it before servers can handle its absence. But we can demote it to asymmetric, which forces servers to consider it optional and handle its potential absence, even though clients are still required to set it. Once that change has been rolled out (at least to servers), we can confidently delete the field (or demote it to optional), as the servers no longer rely on it.
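
To make the writer/reader split concrete, here is a rough sketch (in Rust, with made-up type names) of the kind of code a schema compiler could generate for an asymmetric field; this is conceptual, not Typical's actual generated API:

  // Conceptual sketch only: an asymmetric field is required on the type you
  // serialize but optional on the type you parse.
  struct EmailRequestOut {
      to: String,
      body: String,         // asymmetric: writers must still set it
  }

  struct EmailRequestIn {
      to: String,
      body: Option<String>, // asymmetric: readers must tolerate its absence
  }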

yencabulator 4 days ago | parent | next [-]

> My understanding is that asymmetric fields provide a migration path in case that happens, as stated in the docs:

Only if you can assume you can churn a generation of fresh data soonish and never again read the old data. For RPC, sure, but someone like Google has petabytes of stored protobufs, so they don't pretend they can upgrade all the writers.

sdenton4 4 days ago | parent | prev [-]

....or we can just say that everything is optional always, and leave it to the servers instead of the protocol to handle irregularities.

summerlight 7 days ago | parent | prev | next [-]

This seems interesting. Still not sure if `required` is a good thing to have (for persistent data like logs you cannot really guarantee some field's presence without schema versioning baked into the file itself), but for intermediate wire use cases this will help.

cornstalks 7 days ago | parent | prev | next [-]

I've never heard of Typical but the fact they didn't repeat protobuf's sin regarding varint encoding (or use leb128 encoding...) makes me very interested! Thank you for sharing, I'm going to have to give it a spin.

zigzag312 7 days ago | parent [-]

It looks similar to how the vint64 lib encodes varints. The total length of the varint can be determined from the first byte alone.

haberman 7 days ago | parent [-]

I advocated for PrefixVarint (which seems equivalent to vint64) for WebAssembly, but it was decided against, in favor of LEB128: https://github.com/WebAssembly/design/issues/601

The recent CREL format for ELF also uses the more established LEB128: https://news.ycombinator.com/item?id=41222021

At this point I don't feel like I have a clear opinion about whether PrefixVarint is worth it, compared with LEB128.

zigzag312 7 days ago | parent | next [-]

Just remember that XML was more established than JSON for a long time.

kannanvijayan 4 days ago | parent | prev [-]

Varint encoding is something I've peeked at in various contexts. My personal bias is towards the prefix-style, as it feels faster to decode and the segregation of the meta-data from the payload data is nice.

But, the thing that tends to tip the scales is the fact that in almost all real world cases, small numbers dominate - as the github thread you linked relates in a comment.

The LEB128 fast-path is a single conditional with no data-dependencies:

  if x & 0x80 == 0 { x }
Modern CPUs will predict that branch really well and you'll pay almost zero cost for the fast path, which also happens to be the dominant path.

It's hard to beat.
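
For concreteness, a minimal unsigned-LEB128 decoder (a sketch, not production code) looks something like this; the first branch is the single-byte fast path described above:

  // Returns (value, bytes consumed). The `< 0x80` check on the first byte is
  // the fast path: one compare, no data-dependent work, and it covers the
  // overwhelmingly common small values.
  fn decode_uleb128(buf: &[u8]) -> Option<(u64, usize)> {
      let first = *buf.first()?;
      if first < 0x80 {
          return Some((first as u64, 1)); // fast path
      }
      let mut value = 0u64;
      for (i, &b) in buf.iter().take(10).enumerate() {
          value |= ((b & 0x7f) as u64) << (7 * i);
          if b & 0x80 == 0 {
              return Some((value, i + 1));
          }
      }
      None // truncated or over-long varint
  }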

yencabulator 4 days ago | parent [-]

SQLite format equivalent:

  if x <= 240 { x }
while strictly improving all other aspects (at least IMHO)

https://sqlite.org/src4/doc/trunk/www/varint.wiki
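
Roughly, per my reading of that spec (a sketch, unsigned values only): the first byte alone gives you both the fast-path value and the total length.

  fn decode_sqlite_varint(buf: &[u8]) -> Option<(u64, usize)> {
      let a0 = *buf.first()? as u64;
      match a0 {
          // Fast path: the value is the byte itself.
          0..=240 => Some((a0, 1)),
          // Two-byte values 241..=2287.
          241..=248 => Some((240 + 256 * (a0 - 241) + *buf.get(1)? as u64, 2)),
          // Three-byte values 2288..=67823.
          249 => Some((2288 + 256 * (*buf.get(1)? as u64) + *buf.get(2)? as u64, 3)),
          // 250..=255: the next (a0 - 247) bytes are a big-endian integer.
          _ => {
              let n = (a0 - 247) as usize;
              let mut v = 0u64;
              for i in 1..=n {
                  v = (v << 8) | *buf.get(i)? as u64;
              }
              Some((v, n + 1))
          }
      }
  }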

zigzag312 7 days ago | parent | prev | next [-]

This actually looks quite interesting.

sevensor 6 days ago | parent | prev | next [-]

Seems like a lot of effort to avoid adding a message version field. I’m not a web guy, so maybe I’m missing the point here, but I always embed a schema version field in my data.

vouwfietsman 6 days ago | parent | next [-]

I get that.

The point is that it's hard to prevent asymmetry in message versions if you are working with many communicating systems. Let's say four services inter-communicate with some protocol: it is extremely annoying to impose a deployment order where the producer of a message type is the last to upgrade the message schema, as this causes unnecessary dependencies between the release trains of these services. At the same time, one cannot simply say: "I don't know this message version, I will disregard it" because in live systems this will mean the systems go out of sync, data is lost, stuff breaks, etc.

There are probably more issues I haven't mentioned, but long story short: in live, interconnected systems it becomes important to have intelligent message versioning, i.e. a version number is not enough.

kiitos 2 days ago | parent | next [-]

> Let's say four services inter-communicate with some protocol: it is extremely annoying to impose a deployment order where the producer of a message type is the last to upgrade the message schema

i don't know how you arrived at this conclusion

the protocol is the unifying substrate, it is the source of truth, the services are subservient to the protocol, it's not the other way around

also it's not just like each service has a single version, each instance of each service can have separate versions as well!

what you're describing as "annoying" is really just "reality", you can't hand-wave away the problems that reality presents

1718627440 2 days ago | parent | prev | next [-]

> one cannot simply say: "I don't know this message version, I will disregard it" because in live systems this will mean the systems go out of sync, data is lost, stuff breaks, etc.

You already need to deal with lost messages and rejected messages, so just treat this case the same. If you have versions, surely you have code to deal with mismatches and, e.g., fall back to the older version.

sevensor 6 days ago | parent | prev [-]

I think I see what you’re getting at? My mental model is client and server, but you’re implying a more complex topology where no one service is uniquely a server or a client. You’d like to insert a new version at an arbitrary position in the graph without worrying about dependencies or the operational complexity of doing a phased deployment. The result is that you try to maintain a principled, constructive ambiguity around the message schema, hence asymmetrical fields? I guess I’m still unconvinced and I may have started the argument wrong, but I can see a reasonable person doing it that way.

vouwfietsman 6 days ago | parent [-]

Yes, that's a big part, but even bigger is just the alignment of teams.

Imagine team A is building features X, Y, and Z, while team B is building T, U, and V.

One of those features in each team deals with messages; the others are unrelated. At some point in time, both teams have to deploy.

If you have to sync them up just to get the protocol to work, that's extra complexity on top of the already complex work of the teams.

If you can ignore this, great!

It becomes even more complex with rolling updates, though: not all deployments of a service will have the new code immediately, because you want multiple instances online to scale on demand. This creates a necessary ambiguity in the question "which version does this service accept?", because it's not about the service anymore, but about the deployments.

sevensor 6 days ago | parent [-]

Ah, I see. Team A would like to deploy a new version of a service. It used to accept messages with schema S, but the new version accepts only S’ and not S. So the only thing you can do is define S’ so that it is ambiguous with S. Team B uses Team A’s service but doesn’t want to have to coordinate deployments with Team A.

I think the key source of my confusion was Team A not being able to continue supporting schema S once the new version is released. That certainly makes the problem harder.

vouwfietsman 5 days ago | parent [-]

Exactly!

vineyardmike 6 days ago | parent | prev [-]

Idk, I generally think “magic numbers” are just extra effort. The main annoyance is adding if statements everywhere on the version number instead of just checking that the data field you need is present.

It also really depends on the scope of the issue. Protos really excel at “rolling” updates and continuous changes instead of fixed APIs. For example, MicroserviceA calls MicroserviceB, but the teams do deployments at different times of the week. Constantly rolling the version number for each change is annoying vs just checking for the new feature, especially if you could have several active versions at a time.

It also frees you from actually propagating a single version number everywhere. If you own a bunch of API endpoints, you either need to put the version in the URL, which impacts every endpoint at once, or you need to put it in the request/response of every one.
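
A rough sketch of the difference, with made-up names and `Option` standing in for explicit field presence in the generated code:

  struct Request {
      version: u32,               // version-number style
      priority_hint: Option<u32>, // presence style: the new optional field
  }

  fn handle(req: &Request) {
      // Version style: every call site has to know which version added what,
      // e.g. `if req.version >= 4 { ... }`.
      // Presence style: just check for the field you actually need.
      match req.priority_hint {
          Some(hint) => schedule_with_priority(hint),
          None => schedule_default(),
      }
  }

  fn schedule_with_priority(_hint: u32) { /* new behavior */ }
  fn schedule_default() { /* old behavior */ }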

sevensor 6 days ago | parent [-]

I think this is only a problem if you’re using a weak data interchange library that can’t use the schema version field to discriminate a union. Because you really shouldn’t have to write that if statement yourself.

atombender 6 days ago | parent | prev [-]

I'm really hoping Typical will catch on, as I quite like the design. One important gap right now is the lack of Go and Python support.

tyleo 7 days ago | parent | prev | next [-]

We use protocol buffers on a game and we use the back compat stuff all the time.

We include a version number with each release of the game. If we change a proto we add new fields and deprecate old ones and increment the version. We use the version number to run a series of steps on each proto to upgrade old fields to new ones.

swiftcoder 7 days ago | parent [-]

> We use the version number to run a series of steps on each proto to upgrade old fields to new ones

It sounds like you've built your own back-compat functionality on top of protobuf?

The only functionality protobuf is giving you here is optional-by-default (and mandatory version numbers, but most wire formats require that)

tyleo 6 days ago | parent [-]

Yeah, I’d probably say something more like, “we leverage protobuf built-ins to make a slightly more advanced back-compat system”.

We do rename deprecated fields and often give their old names to new fields. We rely on the field number to make that work.

vkou 6 days ago | parent [-]

> We do rename deprecated fields and often give their old names to new fields. We rely on the field number to make that work.

Why share names? Wouldn't it be safer to, well, not?

tyleo 3 days ago | parent [-]

Otherwise the code becomes hard to read. You might need to change `int health` to `float health`. In that case “health” properly describes the idea, so we’d change this to `int DEPRECATED_health` and `float health`.

Folks can argue that’s ugly but I’ve not seen one instance of someone confused.
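
For illustration, the upgrade pass looks roughly like this; a sketch with made-up names, not our actual code:

  // Hypothetical generated type: the deprecated field keeps its original
  // proto field number, the replacement gets a new one.
  #[derive(Default)]
  struct PlayerSave {
      version: u32,
      deprecated_health: i32, // was `int32 health = 1;`
      health: f32,            // now `float health = 2;`
  }

  // One migration step per schema version until the save is current.
  fn upgrade(mut save: PlayerSave) -> PlayerSave {
      if save.version < 2 {
          save.health = save.deprecated_health as f32;
          save.version = 2;
      }
      // ...later version bumps chain on here...
      save
  }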

jnwatson 7 days ago | parent | prev | next [-]

ASN.1 implements message versioning in an extremely precise way. Implementing a linter would be trivial.

cryptonector 6 days ago | parent [-]

This. Plus ASN.1 is pluggable as to encoding rules and has a large family of them:

  - BER/DER/CER (TLV)
  - OER and PER ("packed" -- no tags and no lengths wherever possible)
  - XER (XML!)
  - JER (JSON!)
  - GSER (textual representation)
  - you can add your own! (One could add one based on XDR, which would look a lot like OER/PER in a way.)
ASN.1 also gives you a way to do things like formalize typed holes.

Not looking at ASN.1, not even its history and evolution, when creating PB was a crime.

StopDisinfo910 5 days ago | parent [-]

The people who wrote PB clearly knew ASN.1. It was the most famous IDL at the time. Do you assume they just came one morning and decided to write PB without taking a look at what existed?

Anyway, as stated PB does more than ASN.1. It specifies both the description format and the encoding. PB is ready to be used out of the box. You have a compact IDL and a performant encoding format without having to think about anything. You have to remember that PB was designed for internal Google use as a tool to solve their problems, not as a generic solution.

ASN.1 is extremely unwieldy in comparison. It has accumulated a lot of cruft through the years. Plus they don’t provide a default implementation.

troupo 4 days ago | parent | next [-]

> The people who wrote PB clearly knew ASN.1.

And your assumption is based on what exactly?

> It was the most famous IDL at the time.

Strange that at the same time (2001) people were busy implementing everything in Java and XML, not ASN.1

> Do you assume they just came one morning and decided to write PB without taking a look at what existed?

Yes, that is a great assumption. Looking at what most companies do, this is an assumption bordering on prescience.

StopDisinfo910 4 days ago | parent [-]

> Strange that at the same time (2001) people were busy implementing everything in Java and XML, not ASN.1

Yes. Meanwhile Google was designing an IDL with a default binary serialisation format. And this is not the 2025 typical-big-corp, over-staffed, fake-levels, HR-heavy Google we are talking about. That’s Google in its heyday. I think you have answered your own comment.

cryptonector 4 days ago | parent | prev [-]

> Do you assume they just came one morning and decided to write PB without taking a look at what existed?

Considering how bad an imitation of 1984 ASN.1 PB's IDL is, and how bad an imitation of 1984 DER PB is, yes, I assume that PB's creators did not in fact know ASN.1 well. They almost certainly knew of ASN.1, and they almost certainly did not know enough about it, because PB re-created all the worst mistakes in ASN.1 while adding zero new ideas or functionality. It's a terrible shame.

StopDisinfo910 4 days ago | parent [-]

PB is not a bad imitation of 1984 ASN.1. ASN.1 is chock-full of useless representations that are clearly there to serve what a committee thought the needs of the telco industry should be.

I find it funny that you are making it look like a good and pleasant-to-use IDL. It’s a perfect example of design by committee at its worst.

PB is significantly more space efficient than DER by the way.

yearolinuxdsktp 7 days ago | parent | prev | next [-]

I agree that saying no one uses the backwards-compatible stuff is bizarre. Rolling deploys and being able to function with a mixed deployment are often worth the backwards-compatibility overhead, for many reasons.

In Java, you can accomplish some of this by using Jackson JSON serialization of plain objects, where there are several ways in which changes can be made backwards-compatibly (e.g. in recent years, post-deserialization hooks can be used to handle more complex cases), which satisfies (a). For (b), there’s no automatic linter. However, in practice I found that writing tests that deserialize the prior release’s serialized objects gets you pretty far along the line of regression protection for major changes. Also, it was pretty easy to write an automatic round-trip serialization tester to catch mistakes in the ser/deser chain. Finally, if you stay away from non-schemable ser/deser (such as a method that handles any property name), which can be enforced with a linter, you can output the JSON schema of your objects to committed source. Then any time the generated schema changes, you can look for corresponding test coverage in code reviews.

I know that’s not the same as an automatic linter, but it gets you pretty far in practice. It does not absolve you from cross-release/upgrade testing, because serialization backwards-compatibility does not catch all backwards-compatibility bugs.

Additionally, Jackson has many techniques, such as unwrapping objects, which let you execute more complicated refactoring backwards-compatibly, such as extracting a set of fields into a sub-object.

I like that the same schema can be used to interact with your SPA web clients for your domain objects, giving you nice inspectable JSON. Things serialized to unprivileged clients can be filtered with views, such that sensitive fields are never serialized, for example.

You can generate TypeScript objects from this schema or generate clients for other languages (e.g. with Swagger). Granted it won’t port your custom migration deserialization hooks automatically, so you will either have to stay within a subset of backwards-compatible changes, or add custom code for each client.

You can also serialize your RPC comms to a binary format, such as Smile, which uses back-references for property names, should you need to reduce on-the-wire size.

It’s also nice to be able to define Jackson mix-ins to serialize classes from other libraries’ code or code that you can’t modify.

mattnewton 7 days ago | parent | prev | next [-]

Exactly, I think of protobuffers like I think of Java or Go - at least they weren’t writing it in C++.

Dragging your org away from using poorly specified json is often worth these papercuts IMO.

const_cast 7 days ago | parent | next [-]

Protobufs are better but not best. Still, by far, the easiest thing to use and the safest is actual APIs. Like, in your application. Interfaces and stuff.

Obviously if your thing HAS to communicate over the network that's one thing, but a lot of applications don't. The distributed-system microservice stuff is a choice.

Guys, distributed systems are hard. The extremely low API visibility combined with fragile network calls and unsafe, poorly specified API versioning means your stuff is going to break, and a lot.

Want a version-controlled API? Just write an interface in C# or PHP or whatever.

motorest 6 days ago | parent [-]

> Protobufs are better but not best.

This sort of comment doesn't add anything to the discussion unless you are able to point out what you believe to be the best. It reads as an unnecessary and unsubstantiated put-down.

const_cast 3 days ago | parent [-]

I... did.

anonymousiam 6 days ago | parent | prev [-]

The original RPC code, from which Google derived their protobuf stuff, was written in (pre-ANSI) C at Sun Microsystems.

tshaddox 7 days ago | parent | prev | next [-]

> Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards-compatible changes.

The article covers this in the section "The Lie of Backwards- and Forwards-Compatibility." My experience working with protocol buffers matches what the author describes in this section.

the__alchemist 6 days ago | parent | prev | next [-]

This is always the thing to look for: "What are the alternatives?", and why aren't there better ones?

I don't understand most use cases of protobufs, including ones that informed their design. I use it for ESP-hosted, to communicate between two MCUs. It is the highest-friction serialization protocol I've seen, and is not very byte-efficient.

Maybe something like the specialized serialization libraries (bincode, postcard etc) would be easier? But I suspect I'm missing something about the abstraction that applies to networked systems, beyond serialization.

tgma 7 days ago | parent | prev | next [-]

> And I know the article says no one uses the backwards compatible stuff but that’s bizarre to me – setting up N clients and a server that use protocol buffers to communicate and then being able to add fields to the schema and then deploy the servers and clients in any order is way nicer than it is with some other formats that force you to babysit deployment order.

Yet the author has the audacity to call the authors of protobuf (originally Jeff Dean et al) "amateurs."

jcgrillo 7 days ago | parent | prev | next [-]

As someone who has written many mapreduce jobs over years-old protobufs, I can confidently report that the backwards compatibility is what made it possible at all.

noitpmeder 7 days ago | parent | prev | next [-]

Not that I love it -- but SBE (Simple Binary Encoding) is a _decent_ solution in the realm of backwards/forwards compatibility.

maximilianburke 7 days ago | parent | prev | next [-]

Flatbuffers satisfies those requirements and doesn’t have varint shenanigans.

leoc 7 days ago | parent | next [-]

What about Cap’n Proto https://capnproto.org/ ? (Don't know much about these things myself, but it's a name that usually comes up in these discussions.)

usrnm 6 days ago | parent [-]

Cap'n'proto is not very nice to work with in C++, and I'd discourage anyone from using it from other programming languages; the implementations are just not there yet. We use both capnp and protobufs at work, and I vastly prefer protobufs, even for C++. I only wish they stayed the hell away from abseil, though.

yencabulator 4 days ago | parent | next [-]

The developer experience of capnproto is pretty darn miserable. I replaced my Rust use of it with https://rkyv.org/ -- probably the biggest ergonomic improvement was a single validation after which the message is safe to look at, instead of errors on every code path. The biggest downside was loss of built-in per-message schema evolution; in my use case I can have one version number up front.

porridgeraisin 6 days ago | parent | prev [-]

I always thought people had a positive view of abseil; I've never used it myself other than when tinkering on random projects. What's the main issue?

usrnm 6 days ago | parent [-]

The thing is a huge pain to manage as a dependency, especially if you wander away from the official google-approved way of doing things. Protobuf went from a breeze to use to the single most common source of build issues in our cross-platform project the moment they added this dependency. It's so bad that many distros and package managers keep the pre-abseil version as a separate package, and many just prefer to get stuck with it rather than upgrade. Same with other google libraries that added abseil as a dependency, as far as I'm aware

mkoubaa 6 days ago | parent | next [-]

I'd rather they just used the abseil headers they needed with the abseil license at the top than make it a build dependency.

The concept of a package is antithetical to C++ and no amount of tooling can fix that.

usrnm 6 days ago | parent [-]

abseil is not header-only, though

mkoubaa 4 days ago | parent [-]

Skill issue

jjmarr 6 days ago | parent | prev [-]

I like abseil besides the compile times. Not having to specialize my own hash when using maps is nice.

AYBABTME 6 days ago | parent | prev [-]

But you can't trust flatbuffers sent from unknown senders.

motorest 6 days ago | parent | prev | next [-]

> Just with those two criteria you’re down to, like, six formats at most, of which Protocol Buffers is the most widely used.

What I dislike the most about blog posts like this is that, although the blogger is very opinionated and critical of many things, the post dates back to 2018, protobuf is still dominant, and apparently in all these years the blogger failed to put together something they felt was a better way to solve the problem. I mean, it's perfectly fine if they feel strongly about a topic. However, investing so much energy in criticizing and even throwing personal attacks at whoever contributed to the project feels pointless, an exercise in self-promotion at the expense of shit-talking others. Either put something together that you feel implements your vision and rights some wrongs, or don't go out of your way to put people down. Not cool.

ardit33 6 days ago | parent | next [-]

JSON exists, and when compressed it is pretty efficient (though not as efficient as protobuf).

For client-facing protocols, protobuf is a nightmare to use. For machine-to-machine services it is ok-ish, yet personally I still don't like it.

When I was at Spotify we ditched it for client-side APIs (server to mobile/web), and never looked back. No one liked working with it.

motorest 6 days ago | parent [-]

> JSON exists (...)

The blog post leads with the personal assertion that protobuf is "ad-hoc and built by amateurs". Therefore I doubt that JSON, a data serialization language designed by trimming most of JavaScript out so it could be parsed with eval(), would meet that opinionated high bar.

Also, JSON is a data interchange language, and has no support for types beyond the notoriously ill-defined primitives. In contrast, protobuf is a data serialization language which supports specifying types. This means that JSON, to come anywhere close to meeting the requirements met by protobuf, would need to be paired with schema validation frameworks and custom configurable parsers, none of which JSON itself provides.

ardit33 5 days ago | parent [-]

You must be young. XML and XML Schemas existed before JSON or Protobuf, and people ditched them for a good reason and JSON took over.

Protobuf is just another version of the old binary formats: RPC, Java Beans, etc. Yes, it is more data-efficient than JSON, but it is a PITA to work with and debug.

motorest 5 days ago | parent [-]

> You must be young. XML and XML Schemas existed before JSON or Protobuf, and people ditched them for a good reason and JSON took over.

I'm not sure you got the point. It's irrelevant how old JSON or XML (a non sequitur) are. The point is that one of the main features and selling points of protobuf is strong typing and model validation implemented at the parsing level. JSON does not support any of these, and you need to onboard more than one ad-hoc tool to have a shot at feature parity, which goes against the blogger's opinionated position on the topic.

6 days ago | parent | prev | next [-]
[deleted]
6 days ago | parent | prev [-]
[deleted]
naikrovek 5 days ago | parent | prev | next [-]

TLV-style binary formats are all you need. The “Type” in that acronym is a 32-bit number which you can use to version all of your stuff so that files are backwards compatible. Software that reads these should read all versions of a particular type and write only the latest version.

Code for TLV is easy to write and to read, which makes writing viewer programs easy. TLV data is fast for computers to write and to read.

Protobuf is overused because people are fucking scared to death to write binary data. They don’t trust themselves to do it, which is just nonsense to me. It’s easy. It’s reliable. It’s fast.
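
A sketch of what I mean (hand-rolled, little-endian header; the 32-bit type id doubles as the version):

  use std::io::{self, Read, Write};

  // Write one record: 32-bit type id, 32-bit length, then the payload bytes.
  fn write_tlv(w: &mut impl Write, type_id: u32, value: &[u8]) -> io::Result<()> {
      w.write_all(&type_id.to_le_bytes())?;
      w.write_all(&(value.len() as u32).to_le_bytes())?;
      w.write_all(value)
  }

  // Read one record back; the caller dispatches on type_id, keeps decoders
  // for every old version around, and only ever writes the newest one.
  fn read_tlv(r: &mut impl Read) -> io::Result<(u32, Vec<u8>)> {
      let mut hdr = [0u8; 8];
      r.read_exact(&mut hdr)?;
      let type_id = u32::from_le_bytes(hdr[0..4].try_into().unwrap());
      let len = u32::from_le_bytes(hdr[4..8].try_into().unwrap());
      let mut value = vec![0u8; len as usize];
      r.read_exact(&mut value)?;
      Ok((type_id, value))
  }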

oftenwrong 5 days ago | parent [-]

Protobuf is typically serialised using a TLV-style encoding.

https://protobuf.dev/programming-guides/encoding/

A major value of protobuf is in its ecosystem of tools (codegen, lint, etc); it's not only an encoding. And you don't generally have to build or maintain any of it yourself, since it already exists and has significant industry investment.

mgaunard 7 days ago | parent | prev | next [-]

In the systems I built I didn't bother with backwards compatibility.

If you make any change, it's a new message type.

For compatibility you can coerce the new message to the old message and dual-publish.

o11c 7 days ago | parent | next [-]

I prefer a little built-in backwards (and forwards!) compatibility (by always enforcing a length for each object, to be zero-padded or truncated as needed), but yes, "don't fear adding new types" is an important lesson.

jimbokun 7 days ago | parent | prev [-]

That only works if you control all the clients.

mgaunard 6 days ago | parent [-]

Dual-publishing makes it transparent to older clients.

Obviously you need to track when the old clients have been moved over so you can eventually retire the dual-publishing.

You could also do the conversion on the receiving side without a-priori information, but that would be extremely slow.

dlahoda 5 days ago | parent | prev | next [-]

https://github.com/dfinity/candid/blob/master/spec/Candid.md

orochimaaru 6 days ago | parent | prev | next [-]

Protobufs aren’t new. They’re really just RPC over HTTPS. I used DCE RPC in 1997, which had an IDL. I believe CORBA used an IDL as well, although I personally did not use it. There have been other attempts, like EJB, etc., which are pretty much the same paradigm.

The biggest plus with protobuf is the social/financial side and not the technology side. It’s open source and free from proprietary hacks like previous solutions.

Apart from that, distributed systems, of which RPC is a subtopic, are hard in general. So the expectation would be that it sucks.

stickfigure 7 days ago | parent | prev | next [-]

Backwards compatibility is just not an issue in self-describing structures like JSON, Java serialization, and (dating myself) Hessian. You can add fields and you can remove fields. That's enough to allow seamless migrations.

It's only positional protocols that have this problem.

dangets 7 days ago | parent | next [-]

You can remove JSON fields at the cost of breaking, at runtime, the clients that expect those fields. Of course the same can happen with any deserialization library, but protobufs at least make it more explicit - and you may be able to more easily track down consumers using older versions.

nomel 6 days ago | parent [-]

For the missing-field case, whenever I use JSON I always start with a sane default struct, then overwrite its fields with the externally provided values. If a field is missing, it will be handled reasonably.
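
With serde in Rust, for example, that's just a container-level default (a sketch; field names are made up and serde/serde_json are assumed as dependencies):

  use serde::Deserialize;

  // Any field missing from the JSON falls back to its value from
  // Config::default(), so older payloads still deserialize into something sane.
  #[derive(Deserialize)]
  #[serde(default)]
  struct Config {
      timeout_ms: u64,
      retries: u32,
  }

  impl Default for Config {
      fn default() -> Self {
          Config { timeout_ms: 5_000, retries: 3 }
      }
  }

  fn parse(json: &str) -> serde_json::Result<Config> {
      serde_json::from_str(json) // {"retries": 5} keeps timeout_ms at 5000
  }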

jimbokun 7 days ago | parent | prev [-]

At the cost of much larger payloads.

stickfigure 6 days ago | parent [-]

With gzip encoding... not really.

mkoubaa 6 days ago | parent | prev | next [-]

Real ones know that serialization is what sucks.

tomrod 7 days ago | parent | prev [-]

> Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards-compatible changes.

ASCII text (tongue in cheek here)