llimllib | 5 days ago
Kind of! This script assumes you're dealing with a byte slice, which means you've already encoded your Unicode data. If you just encoded your string to bytes naïvely, it will probably mostly still work, but it will get some combining characters wrong if they're represented differently in the two sources you're comparing (e.g., a precomposed e-with-accent character vs. an accent combining character following a plain e).

If you want to be more correct, you'll normalize your Unicode string [1], but note that there are four different defined normalization forms, so you'll need to choose the one that is the best tradeoff for your particular application and data sources.

[1]: https://en.wikipedia.org/wiki/Unicode_equivalence#Normalizat...
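A minimal sketch of the normalization point, assuming Go (the thread doesn't say what language the original script uses) and the golang.org/x/text/unicode/norm package; the strings and the choice of NFC are illustrative, not from the original script:

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	// "café" spelled with a precomposed é (U+00E9).
	haystack := []byte("caf\u00e9 au lait")
	// "café" spelled with a plain e followed by a combining acute accent (U+0301).
	needle := []byte("cafe\u0301")

	// Naive byte comparison misses the match: the two spellings
	// encode to different UTF-8 byte sequences.
	fmt.Println(bytes.Contains(haystack, needle)) // false

	// Normalizing both sides to the same form (NFC here) makes the
	// byte sequences directly comparable again.
	h := norm.NFC.Bytes(haystack)
	n := norm.NFC.Bytes(needle)
	fmt.Println(bytes.Contains(h, n)) // true
}
```

Whether NFC, NFD, NFKC, or NFKD is the right form depends on the application, as the parent comment notes; the only hard requirement is that both sides of the comparison go through the same one.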
codethief | 5 days ago
> If you just encoded your string to bytes naïvely

By "naïvely" I assume you mean you would just plug in UTF-8 bytestrings for haystack & needle, without adjusting the implementation? Wouldn't the code still need to take into account where characters (code points) begin and end, though, in order to prevent incorrect matches?
| |||||||||||||||||
jiehong | 5 days ago
Thanks for this detailed answer!