kentonv · 6 days ago
> how often? as practiced by who, and where?

This was my experience in Google Search infrastructure circa 2005-2010. This was a system with dozens of teams and hundreds of developers all pushing their data through a common message bus. It happened all the damned time and caused multiple real outages (from overzealous validation), along with a lot of tech debt from having to initialize fields with dummy data because they were no longer used but still required. Reports from other large teams at Google, e.g. Gmail, indicated they had the same problems.

> Nice, "explain to me how you're going to implement a backward-compatible SUM in the spec-parser that doesn't have the notions needed. Ha! You can't! Told you so!"

Sure, sure: we could expand the type system to support some way of adding a new tag to every element of the repeated oneof, implement it, teach everyone how that works, etc. Or we could just tell people to wrap the thing in a `message`. That works fine already, and everyone already understands how to do it. No new cognitive load is created, and no time is wasted chasing theoretical purity that provides no real-world benefit.
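The wrap-it-in-a-`message` workaround kentonv describes can be sketched roughly like this (the message and field names are invented for illustration, not taken from the thread):

```proto
// Hypothetical schema. Protobuf does not allow "repeated oneof" directly,
// so each element is wrapped in a message from the start.

// v1: each list element is just the oneof payload inside a wrapper message.
message Value {
  oneof kind {
    int64 int_val = 1;
    double double_val = 2;
  }
}

message Row {
  repeated Value values = 1;
}

// v2: because each element is already a message, a new per-element field
// (say, a tag needed for a backward-compatible SUM) can be added without
// touching the oneof. Old readers skip the unknown field; old writers
// simply leave it unset.
message ValueV2 {
  oneof kind {
    int64 int_val = 1;
    double double_val = 2;
  }
  string aggregation_tag = 3;  // new in v2
}
```

The design point is that the wrapper costs a few bytes of framing up front but makes every element independently extensible using the evolution rules everyone already knows.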
instig007 · 4 days ago | parent
> This was my experience in Google Search infrastructure circa 2005-2010 [...]

> Reports from other large teams at google

> teach everyone how that works, etc.

> Or we could just tell people to wrap the thing in a `message`

It really sounds like a self-inflicted internal Google issue. Can you address the part where I mention the isomorphism of `(oneof token)` and `(oneof (token {}))`, and clarify what exactly you think you'd have to teach other engineers to do if your protocol's encoders and decoders took this property into account?
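One reading of the notation instig007 uses, sketched in hypothetical proto-style syntax (names invented for the example): a bare alternative `token` carries no payload, which is informationally the same as `token` wrapping an empty record `{}`, so a codec that treats the two shapes as interchangeable leaves room to grow the payload later.

```proto
// (oneof (token {})): the alternative wraps an empty message.
// An empty message adds no information, so this is isomorphic to a
// bare (oneof token) tag -- but fields can now be added to TokenV1
// later without restructuring the oneof.
message TokenV1 {}  // no fields yet

message Auth {
  oneof cred {
    TokenV1 token = 1;
  }
}
```

instig007's claim, as stated, is that if encoders and decoders exploited this isomorphism automatically, the "wrap it in a message" step would not need to be taught as a manual convention.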