anonymoushn | 3 days ago
Really great. I wonder, for the "types encoded as code" approach, is there any benefit to fast paths for data whose fields arrive in ascending order? For some JSON parsers with types encoded as code, I have observed a speedup from either hard-coding a known key order, or assuming the keys arrive in some order and providing a fallback for when an unexpected key is encountered.

For users who are stuck with protobuf forever, because various services use it and lots of existing data is encoded this way, the historical data could plausibly be canonicalized and written back in large chunks as it is accessed, so that one need not pay the entire cost of canonicalizing it all at once. But of course the icache concerns are still just as bad.
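Concretely, the fast path I mean looks something like this (just a sketch; the Reader/Msg types, their methods, and the field names are all made up):

    #include <string>
    #include <string_view>

    // Hypothetical minimal token reader; stands in for whatever the real
    // parser uses to walk the input.
    struct Reader {
      bool KeyIs(std::string_view k);  // consume next key iff it matches
      long ReadInt();
      std::string ReadString();
      bool AtEnd();
    };

    struct Msg { long id; std::string name; };

    bool ParseGeneric(Reader& r, Msg& out);  // slow path: any key order

    // Fast path: assume keys arrive in declaration order ("id", then
    // "name"); on any mismatch, bail out to the order-insensitive slow path.
    bool ParseFast(Reader& r, Msg& out) {
      if (!r.KeyIs("id")) return ParseGeneric(r, out);
      out.id = r.ReadInt();
      if (!r.KeyIs("name")) return ParseGeneric(r, out);
      out.name = r.ReadString();
      return r.AtEnd();
    }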
haberman | 3 days ago
This kind of "expected next field" optimization has a long history in protobuf, but results are mixed. The generated C++ code used to check for the expected next field before falling back to the switch() (example here: https://github.com/protocolbuffers/protobuf/blob/460e7dd7c47...), but this was removed in 2016 when load tests found that it hurt more than it helped.

One tricky part of making this optimization work is making a good guess about what the next field should be. Miguel's article alludes to this:

> Each field specifies which fields to try next. This allows the compiler to perform field scheduling, by carefully deciding which order to try fields in based both on their declaration order and a rough estimation of their "hotness", much like branch scheduling happens in a program compiler. This avoids almost all of the work of looking up the next field in the common case, because we have already pre-loaded the correct guess.

> I haven't managed to nail down a good algorithm for this yet, but I am working on a system for implementing a type of "branch prediction" for PGO, that tries to provide better predictions for the next fields to try based on what has been seen before.

One delightful thing about the tail-call parser design is that the CPU's branch predictor effectively takes over the job of guessing. With a tail-call parser, the dispatch sequence ends up looking something like this (simplified):
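    ; simplified sketch of the dispatch (exact registers and table layout vary)
    movzx  eax, byte ptr [rdi]   ; load the next field's tag from the input
    mov    rax, [rsi + rax*8]    ; look up that field's parse function
    jmp    rax                   ; tail call it -- an indirect branch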
That "jmp rax" instruction is an indirect branch that can be predicted by the CPU. The CPU effectively guesses for us!And unlike any kind of guess we might have performed ahead-of-time, the branch predictor will constantly adapt to whatever patterns it is seeing in the data. This is good, because statically guessing ahead-of-time is hard. | ||||||||