| ▲ | tptacek 5 days ago |
| This is a canard. What's happening here, as happens so often in other situations, is that a term of art was created to describe something complicated; in this case, "memory safety", to describe the property of programming languages that don't admit memory corruption vulnerabilities, such as stack and heap overflows, use-after-frees, and type confusions. Later, people uninvolved with the popularization of the term took it and tried to define it from first principles, arriving at a place different from the term of art. We saw the same thing happen with "zero trust networking".

The fact is that Go doesn't admit memory corruption vulnerabilities, and the way you know that is that there are practically zero exploits for memory corruption targeting pure Go programs, despite the popularity of the language. Another way to reach the same conclusion is to note that this post's argument proves far too much: by the definition this author uses, most other higher-level languages (the author exempts Java, but really only Java) also fail to be memory safe.

Is Rust "safer" in some senses than Go? Almost certainly. Pure functional languages are safer still. "Safety" as a general concept in programming languages is a spectrum. But "memory safety" isn't; it's a threshold test. If you want to claim that a language is memory-unsafe, POC || GTFO. |
|
| ▲ | kllrnohj 5 days ago | parent | next [-] |
| > in this case, "memory safety", to describe the property of programming languages that don't admit memory corruption vulnerabilities, such as [..] type confusions

> The fact is that Go doesn't admit memory corruption vulnerabilities

Except it does. This is exactly the example in the article: the type confusion causes Go to treat an integer as a pointer and dereference it, which can trivially result in memory corruption, depending on the value of the integer. In the example the value "42" is used so that the program crashes with a nice segfault thanks to lower-page guarding, but that's just for ease of demonstration. There's nothing magical about the choice of 42 - it could just as easily have been any number in the valid address space. |
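A minimal sketch of the pattern being described (my own reconstruction with made-up names, not the article's exact code): an interface value in Go is two words, a type/method table and a data word, and they are not updated atomically, so a racing reader can observe one type's methods paired with the other type's data.

```go
package main

type getter interface{ get() int }

type holdsInt struct{ i int }

func (v *holdsInt) get() int { return v.i }

type holdsPtr struct{ p *int }

func (v *holdsPtr) get() int { return *v.p } // dereferences p

func main() {
	n := 0
	hi := &holdsInt{i: 42} // the "42" from the article's example
	hp := &holdsPtr{p: &n}
	var g getter = hp

	// Writer: racily flip g between the two concrete types. The two
	// words of the interface value are not written atomically.
	go func() {
		for {
			g = hi
			g = hp
		}
	}()

	// Reader: a torn read can pair holdsPtr's method with holdsInt's
	// data word, so *v.p dereferences the integer 42 as an address.
	// That usually segfaults, but any address value would do.
	for {
		g.get()
	}
}
```

Run with more than one CPU for long enough and the reader eventually crashes (or worse); `go run -race` flags the race immediately.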
| |
| ▲ | dboreham 5 days ago | parent [-] | | Everyone knows that there's something very magical about the choice of 42. |
|
|
| ▲ | Sharlin 5 days ago | parent | prev | next [-] |
| > to describe the property of programming languages that don't admit memory corruption vulnerabilities, such as stack and heap overflows, use-after-frees, and type confusions.

And data races allow all of that. A language that supports multi-threading but admits data races leading to UB cannot be memory-safe; if Go admits such data races, it is not memory-safe. If a program can end up in a state that the language specification does not recognize (such as termination by SIGSEGV), the language is not memory safe. This is the only reasonable definition of memory safety. |
| |
| ▲ | tptacek 5 days ago | parent [-] | | If that were the case, you'd be able to support the argument with evidence. | | |
| ▲ | chowells 5 days ago | parent [-] | | You mean like the program in the article where code that never dereferences a non-pointer causes the runtime to dereference a non-pointer? That seems like evidence to me. | | |
| ▲ | tptacek 5 days ago | parent [-] | | An exploit against a real Go program that relies on memory corruption. | | |
| ▲ | sophacles 5 days ago | parent | next [-] | | I think your security background is coloring your perception of the term memory safety - specifically, the requirement that the various issues lead to exploitation. These issues can lead to many other problems that are not vulnerabilities in the security sense, e.g. data corruption, incorrect (but not insecure) behavior, performance issues, and more. I don't think any of those were ever dismissed or excluded from memory safety discussions. Infosec circles tend to evaluate most ideas in the context of (anti-)exploitation, and the rest of programming tends to follow what the cool kids argue (that is, they often weigh security concerns higher than other issues as well), so the other problems caused by double-frees or buffer overruns (etc.) may just not have been given as much weight in your mind. | | |
| ▲ | tptacek 5 days ago | parent [-] | | "Memory safety" is a security term, not a PLT term. | | |
| ▲ | SkiFire13 5 days ago | parent | next [-] | | That's a statement without a source, and even if it were widely accepted as true, it wouldn't imply that something needs to be exploitable to be considered a security issue. There are plenty of CVEs with no known way to exploit them. | | | |
| ▲ | Sharlin 4 days ago | parent | prev [-] | | First off, in any and every engineering discipline it would be absurd to claim that "safety" only means security against intentional malice. Second, the burden of proof goes the other way. It’s absurd to claim that UB is safe unless proven otherwise. Unsafety must obviously be the default assumption. |
|
| |
| ▲ | afdbcreid 5 days ago | parent | prev | next [-] | | It should be possible to construct an exploit for such programs. But even in truly unsafe languages, vulnerabilities arising purely from data races are very rare, because they are much harder to exploit. You could argue Go is safe from memory vulnerabilities, and that'll be 99% correct (we can't know what will happen if some very strong organization, e.g. a nation-state actor, invests heavily in exploiting some Go program), but it still isn't memory safe, per the definition in Wikipedia:

> Memory safety is the state of being protected from various software bugs and security vulnerabilities when dealing with memory access, such as buffer overflows and dangling pointers. | | |
| ▲ | tptacek 5 days ago | parent [-] | | There's enormous incentive to construct those exploits. Why don't they exist? | | |
| ▲ | qcnguy 5 days ago | parent | next [-] | | Go programs normally run on the server side and are often proprietary, so you can't see the bugs that exist in order to exploit them. It's not like Chrome, where someone can spend weeks finding a race and exploiting it for a bug bounty, with full visibility of the source code and the ability to develop the exploit in a lab. | |
| ▲ | qualeed 5 days ago | parent | prev | next [-] | | This seems to be operating on the premise that everyone knows of every exploit that exists. There's also an enormous incentive to hide working exploits. | | | |
| ▲ | comex 5 days ago | parent | prev | next [-] | | You’re a cryptography person. So you know that most theoretically interesting cryptography vulnerabilities, even the ones that are exploitable in PoCs, are too obscure and/or difficult to get used by actual attackers.

Same goes for hardware vulnerabilities. Rowhammer and speculative execution attacks are often shown to be able to corrupt and leak memory, respectively, but AFAIK there are no famous cases of them actually being used to attack someone. Partly because they’re fiddly; partly out of habit. Partly because if you’re in a position to satisfy the requirements for those attacks – you have copies of all the relevant binaries so you know the memory layout of code and data, you have some kind of sandboxed arbitrary code execution to launch the attack from – then you’re often able to find better vulnerabilities elsewhere.

And the same is also true for certain types of software vulnerabilities… Honestly, forget about Go: when was the last time you heard of a modern application backend being exploited through memory corruption, in any language? I know that Google and Meta and the like use a good amount of C++ on the server, as do many smaller companies. That C++ code may skew ‘modern’ and safer, but you could say the same about newly-developed client-side C++ code that’s constantly getting exploited. So where are the server-side attacks?

Part of the answer is probably that they exist, but I don’t know about them because they haven’t been disclosed. Unlike client-side attacks, server-side attacks usually target a single entity, which has little incentive to publish deep dives into how it was attacked. That especially applies to larger companies, which tend to use more C++. But we do sometimes see those deep dives written anyway, and the vulnerabilities described usually aren’t memory safety related. So I think there is also a gap in actual exploitation, which probably has a number of causes, but I’d guess they include attackers (1) usually not having ready access to binaries, (2) not having an equivalent to the browser as a powerful launching point for exploits, and (3) not having access to as much memory-unsafe code as on the client side.

This is relevant to Go because of course Go is usually used on the server side. There is some use of Go on the client side, but I can’t think offhand of a single example of it being used in the type of consumer OS or client-side application that typically gets attacked.

Meanwhile, Go is of course much safer than C++. To make exploitation possible in Go, not only do you need a race condition (which is rarely targeted by exploits in any language), you also need a very specific code pattern. I’m not sure exactly how specific. I know how a stereotypical example of an interface pointer/vtable mismatch works. But are there other options? I hear that maps are also thread-unsafe in general? I’d need to dig into the implementation to see how likely that is to be exploitable.

Regardless, the potential exists. If memory safety is a “threshold test” as you say, then Go is not memory-safe. I agree though that the point would best be proven with a PoC of exploiting a real Go program. As someone with experience writing exploits, I think I could probably locate a vulnerability and create an exploit, if I had a few months to work on it. But for now I have employment and my free time is taken up by other things. | | |
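As an aside on the maps question raised above: the simple version of an unsynchronized map write is usually caught by the Go runtime's best-effort checker and turned into a hard crash rather than silent corruption, which is part of what makes exploitability an open question. A minimal sketch (my own illustration, not code from the thread):

```go
package main

// Two goroutines write the same map with no synchronization. The Go
// runtime keeps a "writing" flag in the map header and usually aborts
// with "fatal error: concurrent map writes", but the check is
// best-effort; the race itself is still undefined behavior.
func main() {
	m := make(map[int]int)
	go func() {
		for i := 0; ; i++ {
			m[i] = i
		}
	}()
	for i := 0; ; i++ {
		m[i] = i
	}
}
```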
| ▲ | amluto 5 days ago | parent | next [-] | | > when was the last time you heard of a modern application backend being exploited through memory corruption, in any language? It happens all the time, but it’s a bit hard to find because “modern application backend[s]” are usually written in Go or Python or Rust. Even so, you’ll find plenty of exploits based on getting a C or C++ library on the backend to parse a malformed file. | | | |
| ▲ | tptacek 5 days ago | parent | prev [-] | | There's lots of clientside Go, too! | | |
| ▲ | comex 5 days ago | parent [-] | | Where? Within, as I said, “the type of consumer OS or client-side application that typically gets attacked”. It has to be a component of either a big application or a big OS, or something with comparable scope. Otherwise it would not likely be targeted by real-world memory corruption attacks (that we hear about) no matter the language. At least that’s my impression. | | |
| ▲ | tptacek 5 days ago | parent [-] | | I'm sure I could come up with a bunch of examples but the first thing that jumps into my head is the Docker ecosystem. | | |
| ▲ | comex 4 days ago | parent | next [-] | | Yeah, that’s not nearly the level of big I was thinking of. It’s not a browser or WhatsApp or Word. Admittedly, Go is popular among developers. And there are some public examples of client-side attacks targeting developers and security researchers specifically. Such attacks could hypothetically go after something like Docker. But, searching now, every single example I can find seems to either exploit a non-developer-specific target (browser, iMessage, Acrobat), or else not exploit anything and just rely on convincing people to execute a Trojan (often by sending a codebase that executes the Trojan when you build it). That bifurcation actually surprises me and I’m not sure what to conclude from it, other than “build systems are insecure by design”. But at any rate, the lack of Go exploits doesn’t say much if we don’t see exploits of developer tools written in C either. | | |
| ▲ | tptacek 4 days ago | parent [-] | | We routinely do see those exploits! | | |
| ▲ | comex 3 days ago | parent [-] | | Are you talking about private examples or do you have one to share? | | |
| ▲ | tptacek 3 days ago | parent [-] | | Sure, I mean, take for example git. More broadly: a lot of people here are mouthing off about how thread-safety issues make Go unsafe, but you're one of a small minority of commenters who could just find something and POC it. How hard do you think that would be? I'd absolutely accept a controlled-environment serverside RCE. |
|
|
| |
| ▲ | ameliaquining 5 days ago | parent | prev [-] | | I would say that Go is common in command-line developer tooling, which is sort of client-side albeit a noncentral example of same (since it includes tools for running servers and suchlike), and rare in all other client-side domains that I can think of. |
|
|
|
| |
| ▲ | afdbcreid 5 days ago | parent | prev [-] | | I'm not sure that's correct. Yes, constructing exploits takes enormous effort, but constructing exploits for C/C++ code is much, much easier and yields no less benefit, perhaps even more. It therefore makes sense that efforts are focused there. If and when most of the C/C++ code in the world is gone, I assume we'll see more exploits of Go code. | |
| ▲ | lossolo 5 days ago | parent [-] | | I can show you a trivial POC in C/C++ where someone opens a socket and ends up with a buffer overflow or UAF, both cases leading to memory corruption due to sloppy programming, and both easily exploitable for RCE. Can you show me any reasonable proof of concept (without using unsafe etc.) in Go that leads to similar memory corruption and is exploitable for RCE? | | |
| ▲ | ameliaquining 5 days ago | parent [-] | | https://blog.stalkr.net/2022/01/universal-go-exploit-using-d... This example hardcodes the payload, but (unless I've badly misunderstood how the exploit works) that's not necessary, it could instead be input from the network (and you wouldn't have to pass that input to any APIs that are marked unsafe). The payload is just hardcoded so that the example could be reproduced on the public Go Playground, which sandboxes the code it runs and so can't accept network input. Note that what tptacek is asking for is more stringent than this; he wants a proof-of-concept exploitation of a memory safety vulnerability caused by the data-race loopholes in the Go memory model, in a real program that someone is running in production. I do think it's interesting that nobody has demonstrated that yet, but I'm not sure what it tells us about how sure we can be that those vulnerabilities don't exist. | | |
| ▲ | lossolo 5 days ago | parent [-] | | Yeah, it looks like a CTF-style PoC, not what I would call reasonable code by any measure: https://github.com/StalkR/misc/blob/master/go/gomium/exploit...

- The tight goroutine loop that flips one variable between two different struct types just to win a race is not something a typical developer writes on purpose.
- The trick to "defeat" compiler optimizations by assigning to a dummy variable inside an inline function.
- Carefully computing the address difference between two slices to reach out of bounds, then using that to corrupt another slice's header.
- Calling mprotect and jumping to shellcode is outright exploit engineering, not business logic, and it's not something that arrives as part of an attacker's payload.

The chances of this exact PoC pattern showing up in the wild by accident are basically zero. |
|
|
|
|
| |
| ▲ | Sharlin 5 days ago | parent | prev | next [-] | | That’s called "moving the goal posts". A definition of memory safety that permits unsoundness as long as nobody has exploited said unsoundness is not a definition that anyone serious about security is going to accept. Unsoundness is unsoundness, undefined behavior is undefined behavior. The conservative stance is that once execution hits UB, anything can happen. | |
| ▲ | tialaramex 5 days ago | parent | prev | next [-] | | It's just a little airborne, it's still good https://www.youtube.com/watch?v=1XIcS63jA3w | |
| ▲ | amluto 5 days ago | parent | prev | next [-] | | https://github.com/golang/go/issues/34902 https://www.cloudfoundry.org/blog/cve-2020-15586/ I don’t see any evidence that anyone wrote an RCE exploit for this, but I also don’t see any evidence of anyone even trying to rule it out. | | |
| ▲ | tptacek 5 days ago | parent [-] | | What about this particular bug do you think makes it likely to be exploitable? I'm not asking you to write an RCE POC, just to tell a story of the sequence of events involving this bug that results in attacker-controlled code. What does the attacker control here, and how do they use that control to divert execution? | | |
| ▲ | amluto 5 days ago | parent [-] | | As a general heuristic, a corrupted data structure in a network server results in RCE; this is common in languages like C and C++. At first glance, it looks like the bug can (at least) result in the server accessing a slice object whose fields don't all come from the same place. So the target server can end up accessing some object out of bounds (or as the wrong type, or both), which can easily end up writing some data (possibly attacker-controlled) to an inappropriate place. In a standard attack, the attacker might try to modify the stack or a function pointer to set up a ROP chain or something similar, which is close enough to arbitrary code execution to eventually either corrupt something to directly escalate privileges or issue the appropriate syscalls to actually execute code. | |
| ▲ | tptacek 5 days ago | parent | next [-] | | No, that doesn't work. Lots of (maybe even most) corrupted data structures aren't exploitable (past DOS). Where does the attacker-controlled data come from. What path does it take to get to where the attacker wants it to go. You have to be able to answer those two questions. | | |
| ▲ | amluto 5 days ago | parent [-] | | The Internet is full of nice articles by people bragging about RCE exploits that start with single-byte overruns or seemingly weak type confusions, etc.

> Where does the attacker-controlled data come from.

The example I gave was an HTTP server. Attackers can shove in as much attacker-controlled data as they want. They can likely do something like a heap spray by using many requests or many headers. Unless the runtime zeroes freed memory (and frees it immediately, which GC languages like Go often don't do), lots of attacker-controlled data will stick around. And, for all I know, the slice that gets mixed up in this bug is fully attacker-controlled!

In any event, I think this whole line of reasoning is backwards. Developers should assume that a memory safety error is game over unless there is a very strong reason to believe otherwise: assume full RCE, the ability to read and write all in-process data, the ability to issue any syscall, and the ability to try to exploit side channels. Maybe very strong mitigations like hardware-assisted CFI will change this, and maybe not. |
| |
| ▲ | ameliaquining 5 days ago | parent | prev [-] | | I looked at the code, and unless I've misunderstood it, this bug can't corrupt the slice in the sense of allowing accesses outside the designated allocation or anything like that, because the slice variable is only written once, when the writer is initialized, so there can't be racy accesses to it. The contents of the slice can potentially be corrupted, but that's just arbitrary bytes, so not a memory safety violation.

The line I'm not quite as sure about is https://go.googlesource.com/go/+/refs/tags/go1.13.1/src/bufi.... That assignment is to a variable of interface type, so in theory it could cause memory corruption if multiple goroutines executed it concurrently on the same receiver, which was possible until the bug was fixed. That said, I cannot immediately think of a way to exploit this; you can only write error values corresponding to errors that you can make occur while writing to the socket, and that's a much more constrained set of possible values than the arbitrary bytes that can occur in a buffer. And for that, you only get confusion among the types of those particular errors. It might be possible, but it at least looks challenging. |
|
|
| |
| ▲ | gf000 5 days ago | parent | prev | next [-] | | Hide the same code in some dependency of a dependency and you have a nice little security vulnerability in your prod app. It's actually very easy to disguise such a vulnerability as an innocent bug. | |
| ▲ | ameliaquining 5 days ago | parent [-] | | If you're stipulating deliberately inserted vulnerabilities then there are much easier ways, e.g., with a plausibly-deniable logic bug in code that calls os/exec or reflect (both of which can execute arbitrary code by design). | | |
| ▲ | gf000 5 days ago | parent [-] | | If you see `exec`, that's an obvious point where you want to pay extra attention. Compare to an innocent looking map operation, and it's not even in the same league. | | |
| ▲ | ameliaquining 5 days ago | parent [-] | | What's the least suspicious-looking code that you think could facilitate remote code execution via data-race memory corruption? |
|
|
| |
| ▲ | nemothekid 5 days ago | parent | prev [-] | | This is a no-true-Scotsman argument for programming languages. I could just as well argue that C is memory safe and that all the exploits ever written weren't against real C programs. |
|
|
|
|
|
| ▲ | jstarks 5 days ago | parent | prev | next [-] |
| > If you want to claim that a language is memory-unsafe, POC || GTFO.

There's a POC right in the post, demonstrating type confusion due to a torn read of a fat pointer. I think it could just as easily have been an out-of-bounds write via a torn read of a slice. I don't see how you can seriously call this memory safe, even by a conservative definition. Did you mean a POC against a real program? Is that your bar? |
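For concreteness, the slice variant would look something like this (a hedged sketch of my own, not code from the post): a slice header is three words (pointer, length, capacity) and is not written atomically, so a racing reader can pair the short slice's base pointer with the long slice's length and write out of bounds.

```go
package main

func main() {
	small := make([]byte, 1)
	big := make([]byte, 1<<20)
	s := small

	// Writer: racily swap the three-word slice header between the two.
	go func() {
		for {
			s = big
			s = small
		}
	}()

	// Reader/writer: a torn header (small's pointer with big's length)
	// can pass the bounds check and write far past small's allocation.
	for {
		if n := len(s); n > 0 {
			s[n-1] = 0xff
		}
	}
}
```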
| |
| ▲ | tptacek 5 days ago | parent [-] | | You need a non-contrived example of a memory-corrupting data race that gives attackers the ability to control memory, through type confusion or a memory lifecycle bug or something like it. You don't have to write the exploit but you have to be able to tell the story of how the exploit would actually work --- "I ran this code and it segfaulted" is not enough. It isn't even enough for C code! | | |
| ▲ | codys 5 days ago | parent [-] | | The post is a demonstration of a class of problem: causing Go to treat an integer field as a pointer and access the memory behind that pointer, without using Go's documented "unsafe.Pointer" (or any other operation documented as unsafe).

We're talking about programming languages being memory safe (as fly.io does on its security page [1]), not about specific applications. It may be helpful to think of this as the security of the programming language implementation: we consider inputs to that implementation that are valid and don't use the "unsafe"-marked bits (though I note that the Go project itself isn't very clear on whether it claims memory safety). Then we evaluate whether the implementation fulfills what people think it fulfills, i.e. "being a memory safe programming language", by producing programs, under some constraints (no unsafe), that are themselves memory-safe.

The example in the OP demonstrates a break in the expected behavior of the programming language implementation, if we expect that implementation to produce memory-safe programs (again, under the condition of not using the "unsafe" bits).

[1]: https://fly.io/docs/security/security-at-fly-io/#application... | |
| ▲ | tptacek 5 days ago | parent [-] | | The thread you're commenting has already discussed everything this comment says. If you've got concerns about our security page, I think you should first take them to the ISRG Prossimo project. https://www.memorysafety.org/docs/memory-safety/ | | |
| ▲ | codys 5 days ago | parent [-] | | In this thread I linked the fly.io security page because it helps establish that one can talk about _languages_ as being memory safe, which is something you seemed to be rejecting as a concept in the parent and other comments. (In a separate comment, about what people claim about Go anyhow, I linked the memorysafety.org page; but I didn't expect that one to help in getting you to reconsider whether we can evaluate programming languages as memory safe or not, whereas something from the company where someone was a founder seemed more likely to prompt a reconsideration of the framing of what we're examining.) | |
| ▲ | tptacek 4 days ago | parent [-] | | Huh? No, I'm not. Go is a memory-safe programming language, like Java before it, like Python, Ruby, Javascript, and of course Rust. | | |
| ▲ | zozbot234 4 days ago | parent [-] | | So you're saying nobody cares about actual memory safety in concurrent code? Then why did the Swift folks bother to finally make the language memory-safe (just as safe as Rust) for concurrent code? Heck why did the Java folks bother to define their safe concurrency/memory model to begin with? They could have done it the Golang way and not cared about the issue. | | |
| ▲ | tptacek 4 days ago | parent [-] | | I don't know why you're inventing things for me to have said. |
|
|
|
|
|
|
|
|
| ▲ | ralfj 4 days ago | parent | prev | next [-] |
| > Another way to reach the same conclusion is to note that this post's argument proves far too much; by the definition used by this author, most other higher-level languages (the author exempts Java, but really only Java) also fail to be memory safe. This is wrong. I explicitly exempt Java, OCaml, C#, JavaScript, and WebAssembly. And I implicitly exempt everyone else when I say that Go is the only language I know of that has this problem. (I won't reply to the rest since we're already discussing that at https://news.ycombinator.com/item?id=44678566 ) |
|
| ▲ | weinzierl 5 days ago | parent | prev | next [-] |
| "What's happening here, as happens so often in other situations, is that a term of art was created to describe something complicated; [..] Later, people uninvolved with the popularization of the term took the term and tried to define it from first principles, arriving at a place different than the term of art." Happens all the time in math and physics but having centuries of experience with this issue we usually just slap the name of a person on the name of the concept. That is why we have Gaussian Curvature and Riemann Integrals. Maybe we should speak of Jung Memory Safety too. Thinking about it, the opposite also happens. In the early 19th century "group" had a specific meaning, today it has a much broader meaning with the original meaning preserved under the term "Galois Group". Or even simpler: For the longest time seconds were defined as fraction of a day and varied in length. Now we have a precise and constant definition and still call them seconds and not ISO seconds. |
|
| ▲ | lenkite 5 days ago | parent | prev | next [-] |
| How does Java "fail" to be memory safe by the definition used by the author ? Please give an example. |
|
| ▲ | empath75 5 days ago | parent | prev | next [-] |
> Another way to reach the same conclusion is to note that this post's argument proves far too much; by the definition used by this author, most other higher-level languages (the author exempts Java, but really only Java) also fail to be memory safe.

Yes, I mean, that was the whole reason they invented Rust. If there had already been a bunch of performant memory-safe languages, they wouldn't have needed to. |
|
| ▲ | johnnyjeans 5 days ago | parent | prev | next [-] |
This is a good post and I agree with it in full, but I just wanted to point out that (safe) Rust is safer from data races than, say, Haskell, due to the properties of an affine type system. Haskell in general is much safer than Rust thanks to its more robust type system (which also forms the basis of its metaprogramming facilities), monads being much louder than unsafe blocks, etc. But data races and deadlocks are among the few things Rust has over it.

There are some pure functional languages that are dependently typed, like Idris, and thus far safer than Rust, but they're in the minority and I've yet to find anybody using them industrially. There's also Fortnite's Verse thing? I don't know how pure that language is, though. |
| |
| ▲ | chowells 5 days ago | parent [-] | | I don't think it's true that Rust is safer, using the terminology from the article. Both languages prevent you from doing things that would result in safety violations unless you start mucking with unsafe internals. Rust absolutely does make it easier to write high-performance threaded code correctly, though; if your system depends on large amounts of concurrent mutation, Rust makes it easier to write correct code. On the other hand, a system like STM in Haskell can make complex concurrency logic easier to write correctly than in Rust, but it can carry a very bad performance overhead and needs to be treated with extreme suspicion in performance-sensitive code. It's a huge win for simple expression of complex concurrency, but you pay for it somewhere: it can be used where that overhead is acceptable, but you need a wariness there that's never a concern in Rust. |
|
|
| ▲ | Mawr 4 days ago | parent | prev | next [-] |
> The fact is that Go doesn't admit memory corruption vulnerabilities, and the way you know that is that there are practically zero exploits for memory corruption targeting pure Go programs, despite the popularity of the language.

Another way to word it: if "Go is memory unsafe" is such a revelation after it's been around for 13 years, it's more likely that the statement is somehow wrong than that nobody picked up on such a supposedly impactful safety issue in all this time. As such, the burden is on the OP to address why nobody has run into any serious safety issues in those 13 years. It's not enough to show some theoretical program that exhibits the issue; clearly that is not enough to cause real problems. |
| |
| ▲ | zozbot234 4 days ago | parent [-] | | There's no "revelation" here; it's always been well known among experts that Go is not fully memory safe for concurrent code, and the same was true of previous versions of Swift. The OP has simply spelled out the argument clearly and made it easier to understand for average developers. | |
| ▲ | tptacek 4 days ago | parent [-] | | It makes what would be a valid point using misleading terminology and framing that suggests these are security issues, which they simply are not.

"One could easily turn this example into a function that casts an integer to a pointer, and then cause arbitrary memory corruption."

No, one couldn't! One has contrived a program that hardcodes precisely the condition one wants to achieve. In doing so, one hasn't demonstrated even one of the two predicates for a memory corruption vulnerability: attacker control of the data, and attacker ability to place controlled data somewhere advantageous to the attacker. What the author is doing is demonstrating correctness advantages of Rust using inappropriate security framing. | |
| ▲ | zozbot234 4 days ago | parent [-] | | > misleading terminology and framing that suggests these are security issues Could you quote where exactly OP has misleadingly "suggested" that these concerns lead to security issues in the typical case? > attacker control of the data, and attacker ability to place controlled data somewhere advantageous to the attacker Under this definition the Rowhammer problem with hardware DRAM does not qualify as a genuine security concern since it inherently relies on fiddly non-determinism that cannot possibly be "controlled" by any attacker. (The problem with possible torn writes in concurrent Go code is quite similar in spirit; it's understood that an actually observed torn write might only occur rarely.) Needless to say there is a fairly strong case for addressing these problems anyway, as a matter of defence in depth. > correctness advantages of Rust Memory safety in OP's sense is not exclusive to Rust. Swift has it. Even Java/C# cannot access arbitrary memory as a result of torn writes. It would be more accurate to say that OP has identified a correctness issue that's apparently exclusive to Go. | | |
| ▲ | tptacek 4 days ago | parent [-] | | I quoted directly from the article. | | |
| ▲ | zozbot234 4 days ago | parent [-] | | To use your definition, that quote is clearly making a point about correctness, not necessarily about real-world security. | | |
| ▲ | tptacek 4 days ago | parent [-] | | As long as we agree that there isn't a meaningful security implication, we don't need to keep litigating. |
|
|
|
|
|
|
|
| ▲ | elktown 4 days ago | parent | prev [-] |
The older I get, the more I see these kinds of threads the way I see politics: exaggerate your "opponent's" weaknesses, underplay or ignore their strengths, and so on. If something, no matter how disproportionate, can be construed to be, or be associated with, a current zeitgeist with a negative sentiment, it's an opportunity to gain ground. I really don't understand why people get so obsessed with their tools that it turns into a political battleground. They're a means to an end, not the end itself. |