conartist6 | 5 hours ago
But those code points were just inputs to another stream transformation, one that turns a stream of code points into a stream of graphemes. Your advice rapidly turns into "just do everything in one giant transformation," and that loses the benefit of streams, which are meant to be highly composable so you can build efficient, multi-step transformation pipelines.
idle_zealot | 5 hours ago | parent
What's stopping you from implementing a stream transformation that reads the raw stream like a parser, outputting a grapheme (or whatever unit you want) only once it knows it has finished reading it from the input?
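Nothing, in principle. As a minimal Python sketch of that idea (the function name `codepoints` is mine, not from the thread), the stdlib's incremental UTF-8 decoder is exactly such a parser-like transform: it emits only fully decoded code points and buffers any trailing partial sequence internally between chunks:

```python
import codecs

def codepoints(byte_chunks):
    """Yield complete code points from an iterable of raw byte chunks."""
    # The incremental decoder holds any partial multi-byte sequence
    # internally between calls, like a parser keeping its state.
    dec = codecs.getincrementaldecoder("utf-8")()
    for chunk in byte_chunks:
        yield from dec.decode(chunk)
    # final=True flushes, and raises if the input ended mid-sequence.
    yield from dec.decode(b"", final=True)

# "é" is two bytes (0xC3 0xA9), split here across chunk boundaries:
print(list(codepoints([b"caf\xc3", b"\xa9!"])))  # ['c', 'a', 'f', 'é', '!']
```

The same shape works for graphemes: buffer input until a grapheme-cluster boundary is certain, then emit.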
Joker_vD | 5 hours ago | parent
No, it doesn't turn into that. Those few bytes of leftovers, plus a flag, are kept inside the stream generator that transforms bytes into code points; every time you pull it, those bytes are used as the initial accumulator in a fold that takes a chunk of bytes and yields a chunk of code points plus the updated accumulator. You don't need to inline everything into one giant transform. Come on, it's how (mature libraries of) parser combinators work. The only slightly tricky part is detecting leftover data stuck in the pipeline at end of input.
| ||||||||