zozbot234 4 days ago

Rust &str and String are specifically intended for valid UTF-8 text. If you're working with arbitrary byte sequences, that's what &[u8] and Vec<u8> are for in Rust. It's not a "mess"; it's just different from what Golang does.

gf000 4 days ago | parent | next [-]

If anything, that makes Rust programs more likely to be correct under any strange text input, while Go might just handle the happy path of ASCII inputs.

Stuff like this matters a great deal on the standard library level.

maxdamantus 4 days ago | parent | prev [-]

It's never been clear to me where such a type is actually useful. In what cases do you really need to restrict it to valid UTF-8?

You should always be able to iterate over the code points of a string, whether or not it's valid Unicode. The iterator can either silently replace any errors with replacement characters, or denote the errors by returning, e.g., `Result<char, Utf8Error>`, depending on the use case.
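
For instance, a minimal sketch of such an iterator over raw bytes in Rust (assuming Rust 1.79+, where `utf8_chunks` is in std; the bstr crate offers much the same on older toolchains):

  fn main() {
      // Mostly UTF-8, but with one stray 0x80 byte in the middle.
      let bytes: &[u8] = b"caf\xC3\xA9 \x80 ok";

      // Iterate code points, surfacing the error instead of rejecting the input.
      for chunk in bytes.utf8_chunks() {
          for ch in chunk.valid().chars() {
              print!("{ch}");
          }
          if !chunk.invalid().is_empty() {
              // Either report the error or substitute U+FFFD, depending on the use case.
              print!("\u{FFFD}");
          }
      }
      println!();
  }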

All languages that have tried restricting strings to valid Unicode have, as far as I know, ended up adding workarounds for the fact that real-world "text" sometimes has encoding errors, and it's often better to just preserve the errors instead of corrupting the data through replacement characters or refusing to accept some inputs and crashing the program.

In Rust there's bstr/ByteStr (the latter currently being added to std); it's awkward having to decide which string type to use.

In Python there's PEP-383/"surrogateescape", which works because Python strings are not guaranteed valid (they're potentially ill-formed UTF-32 sequences, with a range restriction). It's awkward figuring out when to actually use it.

In Raku there's UTF8-C8, which is probably the weirdest workaround of all (left as an exercise for the reader to try to understand... oh, and it also interferes with valid Unicode that's not normalized, because that's another stupid restriction).

Meanwhile the Unicode standard itself specifies Unicode strings as being sequences of code units [0][1], so Go is one of the few modern languages that actually implements Unicode (8-bit) strings. Note that at least two out of the three inventors of Go also basically invented UTF-8.

[0] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...

> Unicode string: A code unit sequence containing code units of a particular Unicode encoding form.

[1] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...

> Unicode strings need not contain well-formed code unit sequences under all conditions. This is equivalent to saying that a particular Unicode string need not be in a Unicode encoding form.

empath75 3 days ago | parent | next [-]

> It's never been clear to me where such a type is actually useful. In what cases do you really need to restrict it to valid UTF-8?

Because 99.999% of the time you want it to be valid and would like an error if it isn't? If you want to work with invalid UTF-8, that should be a deliberate choice.

maxdamantus 3 days ago | parent [-]

Do you want grep to crash when your text file turns out to have a partially written character in it? 99.999% seems very high, and you haven't given an actual use case for the restriction.

empath75 3 days ago | parent | next [-]

Rust doesn't crash when it gets an error unless you tell it to. You make a choice about how to handle the error, because you have to handle it or it won't compile. If you don't care about losing information when reading a file, you can use the lossy function that gracefully handles invalid bytes.

gf000 3 days ago | parent | prev [-]

Crash? No. But I can safely handle the error where it happens, because the language actually helps me with this situation by returning a proper Result type. So I have to explicitly check which "variant" I have, instead of forgetting to call the validation function, as can happen in Go.

xyzzyz 3 days ago | parent | prev [-]

The way Rust handles this is perfectly fine. The String type promises its contents are valid UTF-8. When you create it from an array of bytes, you have three options: 1) ::from_utf8, which will force you to handle the invalid-UTF-8 error, 2) ::from_utf8_lossy, which will replace invalid byte sequences with the replacement character, and 3) ::from_utf8_unchecked, which will not do the validity check and is explicitly marked as unsafe.
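
Roughly, a sketch of all three using only the standard library:

  fn main() {
      let bytes = vec![b'h', b'i', 0x80]; // ends with an invalid byte

      // 1) Checked conversion: the caller must handle the error.
      match String::from_utf8(bytes.clone()) {
          Ok(s) => println!("valid: {s}"),
          Err(e) => println!("invalid UTF-8 after {} good bytes", e.utf8_error().valid_up_to()),
      }

      // 2) Lossy conversion: invalid sequences become U+FFFD.
      let lossy = String::from_utf8_lossy(&bytes);
      assert_eq!(lossy, "hi\u{FFFD}");

      // 3) Unchecked conversion: the caller promises validity; lying here is UB.
      let valid = vec![b'h', b'i'];
      let s = unsafe { String::from_utf8_unchecked(valid) };
      assert_eq!(s, "hi");
  }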

maxdamantus 3 days ago | parent [-]

But there's no option to just construct the string with the invalid bytes. 3) is not for this purpose; it is for when you already know that it is valid.

If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.

https://doc.rust-lang.org/std/primitive.str.html#invariant

> Constructing a non-UTF-8 string slice is not immediate undefined behavior, but any function called on a string slice may assume that it is valid UTF-8, which means that a non-UTF-8 string slice can lead to undefined behavior down the road.

gf000 3 days ago | parent | next [-]

How could any library function work with completely random bytes? Like, how would it iterate over code points? It may want to assume UTF-8's standard rules and, e.g., know that after this byte prefix the next byte is also part of the same code point (excuse me if I'm using the wrong terminology), but now you need complex error handling on every single line, which would be unnecessary if you just made your type represent only valid instances.

Again, this is the same "simplistic vs. just the right abstraction" distinction: this approach just smudges the complexity over a much larger surface area.

If you have a byte array that is not utf-8 encoded, then just... use a byte array.

kragen 3 days ago | parent [-]

There are a lot of operations that are valid and well-defined on binary strings, such as sorting them, hashing them, writing them to files, measuring their lengths, indexing a trie with them, splitting them on delimiter bytes or substrings, concatenating them, substring-searching them, posting them to ZMQ as messages, subscribing to them as ZMQ prefixes, using them as keys or values in LevelDB, and so on. For binary strings that don't contain null bytes, we can add passing them as command-line arguments and using them as filenames.

The entire point of UTF-8 (designed, by the way, by the group that designed Go) is to encode Unicode in such a way that these byte string operations perform the corresponding Unicode operations, precisely so that you don't have to care whether your string is Unicode or just plain ASCII, so you don't need any error handling, except for the rare case where you want to do something related to the text that the string semantically represents. The only operation that doesn't really map is measuring the length.

xyzzyz 3 days ago | parent | next [-]

> There are a lot of operations that are valid and well-defined on binary strings, such as (...), and so on.

Every single thing you listed here is supported by the &[u8] type. That's the point: if you want to operate on data without assuming it's valid UTF-8, you just use &[u8] (or the allocating Vec<u8>), and the standard library offers what you'd typically want, except for the functions that assume the string is valid UTF-8 (like e.g. iterating over code points). If you want that, you need to convert your &[u8] to &str, and the process of conversion forces you to check for conversion errors.
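
For example, a quick sketch (made-up bytes that are deliberately not valid UTF-8, standard library only):

  use std::collections::hash_map::DefaultHasher;
  use std::hash::{Hash, Hasher};

  fn main() {
      // 0xFF can never appear in well-formed UTF-8.
      let s: &[u8] = b"usr/\xFFlocal/bin";

      // Splitting on a delimiter byte.
      let parts: Vec<&[u8]> = s.split(|&b| b == b'/').collect();
      assert_eq!(parts.len(), 3);

      // Substring search.
      assert!(s.windows(3).any(|w| w == b"bin"));

      // Concatenation and sorting.
      let joined: Vec<u8> = [parts[0], &b"/"[..], parts[2]].concat();
      let mut keys = vec![s.to_vec(), joined];
      keys.sort();

      // Hashing.
      let mut h = DefaultHasher::new();
      s.hash(&mut h);
      println!("hash: {:x}", h.finish());
  }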

maxdamantus 3 days ago | parent | next [-]

The problem is that there are so many functions that unnecessarily take `&str` rather than `&[u8]` because the expectation is that textual things should use `&str`.

So you naturally write another one of these functions that takes a `&str` so that it can pass to another function that only accepts `&str`.

Fundamentally no one actually requires validation (i.e., walking over the string an extra time up front); we're just making it part of the contract because something else has made it part of the contract.

kragen 3 days ago | parent [-]

It's much worse than that—in many cases, such as passing a filename to a program on the Linux command line, correct behavior requires not validating, so erroring out when validation fails introduces bugs. I've explained this in more detail in https://news.ycombinator.com/item?id=44991638.

kragen 3 days ago | parent | prev [-]

That's semantically okay, but giving &str such a short name creates a dangerous temptation to use it for things such as filenames, stdio, and command-line arguments, where that process of conversion introduces errors into code that would otherwise work reliably for any non-null-containing string, as it does in Go. If it were called something like ValidatedUnicodeTextSlice it would probably be fine.

adastra22 3 days ago | parent | next [-]

I'd agree if it was &[bytes] or whatever. But &[u8] isn't that much different from &str.

kragen 3 days ago | parent [-]

Isn't &[u8] what you should be using for command-line arguments and filenames and whatnot? In that case you'd want its name to be short, like &[u8], rather than long like &[bytes] or &[raw_uncut_byte] or something.

adastra22 3 days ago | parent [-]

OsStr/OsString is what you would use in those circumstances. Path/PathBuf specifically for filenames or paths, which I think uses OsStr/OsString internally. I've never looked at OsStr's internals but I wouldn't be surprised if it is a wrapper around &[u8].

Note that &[u8] would allow things like null bytes, and maybe other edge cases.
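
On Unix, at least, std does expose that byte-level view; a small sketch (Unix-only, made-up filename):

  use std::ffi::OsStr;
  use std::os::unix::ffi::OsStrExt; // Unix-only: view/build an OsStr from raw bytes
  use std::path::Path;

  fn main() {
      // A filename containing a byte that is not valid UTF-8.
      let raw: &[u8] = b"report-\x80.txt";
      let name: &OsStr = OsStr::from_bytes(raw);
      let path: &Path = Path::new(name);

      // to_str() only succeeds for valid UTF-8, so it fails here...
      assert!(path.to_str().is_none());

      // ...but the path itself is perfectly usable for filesystem calls.
      println!("exists? {}", path.exists());
  }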

kragen 3 days ago | parent [-]

You can't get null bytes from a command-line argument. And going by https://news.ycombinator.com/item?id=44991638 it's common to not use OsString when accepting command-line arguments, because std::env::args yields Strings, which means that probably most Rust programs that accept filenames on the command line have this bug.

adastra22 3 days ago | parent [-]

Rust String can contain null bytes! Rust uses explicit string lengths. I agree, though, that most OSes wouldn't be able to pass null bytes in arguments.

kragen 3 days ago | parent [-]

Right, but it can't contain invalid UTF-8, which is valid in both command-line parameters and in filenames on Linux, FreeBSD, and other normal Unixes. See my link above for a demonstration of how this causes bugs in Rust programs.

xyzzyz 3 days ago | parent | prev [-]

It's actually extremely hard to introduce problems like that, precisely because Rust's standard library is very well designed. Can you give an example scenario where it would be a problem?

kragen 3 days ago | parent [-]

Well, for example, the extremely exotic scenario of passing command-line arguments to a program on little-known operating systems like Linux and FreeBSD; https://doc.rust-lang.org/book/ch12-01-accepting-command-lin... recommends:

  use std::env;

  fn main() {
      let args: Vec<String> = env::args().collect();
      ...
  }
When I run this code, a literal example from the official manual, with this filename I have here, it panics:

    $ ./main $'\200'
    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "\x80"', library/std/src/env.rs:805:51
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
($'\200' is bash's notation for a single byte with the value 128. We'll see it below in the strace output.)

So, literally any program anyone writes in Rust will crash if you attempt to pass it that filename, if it uses the manual's recommended way to accept command-line arguments. It might work fine for a long time, in all kinds of tests, and then blow up in production when a wild file appears with a filename that fails to be valid Unicode.

This C program I just wrote handles it fine:

  #include <unistd.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>

  char buf[4096];

  void
  err(char *s)
  {
    perror(s);
    exit(-1);
  }

  int
  main(int argc, char **argv)
  {
    int input, output;
    if ((input = open(argv[1], O_RDONLY)) < 0) err(argv[1]);
    if ((output = open(argv[2], O_WRONLY | O_CREAT, 0666)) < 0) err(argv[2]);
    for (;;) {
      ssize_t size = read(input, buf, sizeof buf);
      if (size < 0) err("read");
      if (size == 0) return 0;
      ssize_t size2 = write(output, buf, (size_t)size);
      if (size2 != size) err("write");
    }
  }
(I probably should have used O_TRUNC.)

Here you can see that it does successfully copy that file:

    $ cat baz
    cat: baz: No such file or directory
    $ strace -s4096 ./cp $'\200' baz
    execve("./cp", ["./cp", "\200", "baz"], 0x7ffd7ab60058 /* 50 vars */) = 0
    brk(NULL)                               = 0xd3ec000
    brk(0xd3ecd00)                          = 0xd3ecd00
    arch_prctl(ARCH_SET_FS, 0xd3ec380)      = 0
    set_tid_address(0xd3ec650)              = 4153012
    set_robust_list(0xd3ec660, 24)          = 0
    rseq(0xd3ecca0, 0x20, 0, 0x53053053)    = 0
    prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=9788*1024, rlim_max=RLIM64_INFINITY}) = 0
    readlink("/proc/self/exe", ".../cp", 4096) = 22
    getrandom("\xcf\x1f\xb7\xd3\xdb\x4c\xc7\x2c", 8, GRND_NONBLOCK) = 8
    brk(NULL)                               = 0xd3ecd00
    brk(0xd40dd00)                          = 0xd40dd00
    brk(0xd40e000)                          = 0xd40e000
    mprotect(0x4a2000, 16384, PROT_READ)    = 0
    openat(AT_FDCWD, "\200", O_RDONLY)      = 3
    openat(AT_FDCWD, "baz", O_WRONLY|O_CREAT, 0666) = 4
    read(3, "foo\n", 4096)                  = 4
    write(4, "foo\n", 4)                    = 4
    read(3, "", 4096)                       = 0
    exit_group(0)                           = ?
    +++ exited with 0 +++
    $ cat baz
    foo
The Rust manual page linked above explains why they think introducing this bug by default into all your programs is a good idea, and how to avoid it:

> Note that std::env::args will panic if any argument contains invalid Unicode. If your program needs to accept arguments containing invalid Unicode, use std::env::args_os instead. That function returns an iterator that produces OsString values instead of String values. We’ve chosen to use std::env::args here for simplicity because OsString values differ per platform and are more complex to work with than String values.

I don't know what's "complex" about OsString, but for the time being I'll take the manual's word for it.
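
For what it's worth, here's a rough sketch of the non-default route; it isn't long, it's just not what the manual recommends (and it leans on std::fs::copy instead of an explicit read/write loop):

  use std::env;
  use std::process::exit;

  fn main() {
      // args_os() yields OsString, so non-UTF-8 arguments survive intact.
      let args: Vec<_> = env::args_os().collect();
      if args.len() != 3 {
          eprintln!("usage: cp SRC DST");
          exit(1);
      }
      // fs::copy takes AsRef<Path>, and Path wraps OsStr, so the raw bytes
      // of the filenames are handed to the OS without a validity check.
      if let Err(e) = std::fs::copy(&args[1], &args[2]) {
          eprintln!("cp: {}", e);
          exit(1);
      }
  }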

So, Rust's approach evidently makes it extremely hard not to introduce problems like that, even in the simplest programs.

Go's approach doesn't have that problem; this program works just as well as the C program, without the Rust footgun:

  package main

  import (
          "io"
          "log"
          "os"
  )

  func main() {
          src, err := os.Open(os.Args[1])
          if err != nil {
                  log.Fatalf("open source: %v", err)
          }

          dst, err := os.OpenFile(os.Args[2], os.O_CREATE|os.O_WRONLY, 0666)
          if err != nil {
                  log.Fatalf("create dest: %v", err)
          }

          if _, err := io.Copy(dst, src); err != nil {
                  log.Fatalf("copy: %v", err)
          }
  }
(O_CREATE makes me laugh. I guess Ken did get to spell "creat" with an "e" after all!)

This program generates a much less clean strace, so I am not going to include it.

You might wonder how such a filename could arise other than as a deliberate attack. The most common scenario is when the filenames are encoded in a non-Unicode encoding like Shift-JIS or Latin-1, followed by disk corruption, but the deliberate attack scenario is nothing to sneeze at either. You don't want attackers to be able to create filenames your tools can't see, or turn to stone if they examine, like Medusa.

Note that the log message on error also includes the ill-formed Unicode filename:

  $ ./cp $'\201' baz
  2025/08/22 21:53:49 open source: open ζ: no such file or directory
But it didn't say ζ. It actually emitted a byte with value 129, making the error message ill-formed UTF-8. This is obviously potentially dangerous, depending on where that logfile goes because it can include arbitrary terminal escape sequences. But note that Rust's UTF-8 validation won't protect you from that, or from things like this:

  $ ./cp $'\n2025/08/22 21:59:59 oh no' baz
  2025/08/22 21:59:09 open source: open 
  2025/08/22 21:59:59 oh no: no such file or directory
I'm not bagging on Rust. There are a lot of good things about Rust. But its string handling is not one of them.

anarki8 3 days ago | parent [-]

There might be potential improvements, like using OsString by default for `env::args()`, but I would pick Rust's string handling over Go's or C's any day.

kragen 3 days ago | parent [-]

It's reasonable to argue that C's string handling is as bad as Rust's, or worse.

gf000 3 days ago | parent | prev [-]

Then [u8] can surely implement those functions.

adastra22 3 days ago | parent | prev | next [-]

I don’t understand this complaint. (3) sounds like exactly what you are asking for. And yes, doing an unsafe thing is unsafe.

maxdamantus 3 days ago | parent [-]

> I don’t understand this complaint. (3) sounds like exactly what you are asking for. And yes, doing an unsafe thing is unsafe

You're meant to use `unsafe` as a way of limiting the scope of reasoning about safety.

Once you construct a `&str` using `from_utf8_unchecked`, you can't safely pass it to any other function without looking at its code and reasoning about whether it's still safe.

Also see the actual documentation: https://doc.rust-lang.org/std/primitive.str.html#method.from...

> Safety: The bytes passed in must be valid UTF-8.

xyzzyz 3 days ago | parent | prev [-]

> If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.

Yes, and that's a good thing. It allows any code that gets a &str/String to assume that the input is valid UTF-8. The alternative would be that every single time you write a function that takes a string as an argument, you have to analyze your code, consider what would happen if the argument were not valid UTF-8, and handle that appropriately. You'd also have to redo the whole analysis every time you modify the function. That's a horrible waste of time; it's much better to:

1) Convert things to String early, and assume validity later, and

2) Make functions that explicitly don't care about validity take &[u8] instead.

This is, of course, exactly what Rust does: I am not aware of a single thing that &str allows you to do that you cannot do with &[u8], except things that do require you to assume it's valid UTF-8.
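
A minimal sketch of that "convert early, assume validity later" split (hypothetical function names):

  // At the edge of the program: raw bytes in, validation happens exactly once.
  fn parse_header(raw: &[u8]) -> Result<&str, std::str::Utf8Error> {
      std::str::from_utf8(raw)
  }

  // Deep inside the program: no encoding errors are possible any more,
  // so chars() can be used without any error handling.
  fn render_title(title: &str) -> String {
      title.chars().take(20).collect()
  }

  fn main() {
      let raw = b"Hello, \xE4\xB8\x96\xE7\x95\x8C"; // valid UTF-8 bytes
      match parse_header(raw) {
          Ok(s) => println!("{}", render_title(s)),
          Err(e) => eprintln!("bad header: {}", e),
      }
  }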

maxdamantus 3 days ago | parent [-]

> This is, of course, exactly what Rust does: I am not aware of a single thing that &str allows you to do that you cannot do with &[u8], except things that do require you to assume it's valid UTF-8.

Doesn't this demonstrate my point? If you can do everything with &[u8], what's the point in validating UTF-8? It's just a less universal string type, and your program wastes CPU cycles doing unnecessary validation.

matt_kantor 3 days ago | parent [-]

> except things that do require you to assume it's valid UTF-8

That's the point.

maxdamantus 3 days ago | parent [-]

But no one has demonstrated an actual operation that requires valid UTF-8. The reasoning is always circular: "I require valid UTF-8 because someone else requires valid UTF-8".

Eventually there should be an underlying operation which can only work on valid UTF-8, but that doesn't exist. UTF-8 was designed such that invalid data can be detected and handled, without affecting the meaning of valid subsequences in the same string.