qcnguy 5 days ago:

Go programs normally run on the server side and are often proprietary, so you can't see which bugs exist in order to exploit them. It's not something like Chrome, where someone can spend weeks finding a race and exploiting it to get a bug bounty, with full visibility of the source code and the ability to develop an exploit in the lab.

qualeed 5 days ago:

This seems to be operating on the premise that everyone knows of every exploit that exists. There's also an enormous incentive to hide working exploits.

comex 5 days ago:

You're a cryptography person, so you know that most theoretically interesting cryptography vulnerabilities, even the ones that are exploitable in PoCs, are too obscure and/or difficult to be used by actual attackers.

Same goes for hardware vulnerabilities. Rowhammer and speculative-execution attacks are often shown to be able to corrupt and leak memory, respectively, but AFAIK there are no famous cases of them actually being used to attack someone. Partly because they're fiddly; partly out of habit. Partly because if you're in a position to satisfy the requirements for those attacks – you have copies of all the relevant binaries so you know the memory layout of code and data, and you have some kind of sandboxed arbitrary code execution to launch the attack from – then you're often able to find better vulnerabilities elsewhere.

And the same is also true for certain types of software vulnerabilities. Honestly, forget about Go: when was the last time you heard of a modern application backend being exploited through memory corruption, in any language? I know that Google and Meta and the like use a good amount of C++ on the server, as do many smaller companies. That C++ code may skew 'modern' and safer, but you could say the same about newly developed client-side C++ code that's constantly getting exploited. So where are the server-side attacks?

Part of the answer is probably that they exist, but I don't know about them because they haven't been disclosed. Unlike client-side attacks, server-side attacks usually target a single entity, which has little incentive to publish deep dives into how it was attacked. That especially applies to larger companies, which tend to use more C++. But we do sometimes see those deep dives written anyway, and the vulnerabilities described usually aren't memory-safety related.

So I think there is also a gap in actual exploitation, which probably has a number of causes, but I'd guess they include attackers (1) usually not having ready access to binaries, (2) not having an equivalent to the browser as a powerful launching point for exploits, and (3) not having access to as much memory-unsafe code as on the client side.

This is relevant to Go because Go is of course usually used on the server side. There is some use of Go on the client side, but I can't think offhand of a single example of it being used in the type of consumer OS or client-side application that typically gets attacked.

Meanwhile, Go is of course much safer than C++. To make exploitation possible in Go, not only do you need a race condition (and races are rarely targeted by exploits in any language), you also need a very specific code pattern. I'm not sure exactly how specific. I know how a stereotypical example of an interface pointer/vtable mismatch works. But are there other options? I hear that maps are also thread-unsafe in general? I'd need to dig into the implementation to see how likely that is to be exploitable.

Regardless, the potential exists. If memory safety is a "threshold test" as you say, then Go is not memory-safe. I agree, though, that the point would best be proven with a PoC of exploiting a real Go program. As someone with experience writing exploits, I think I could probably locate a vulnerability and create an exploit if I had a few months to work on it. But for now I have employment, and my free time is taken up by other things.
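
For readers who haven't seen it, the stereotypical pattern comex mentions comes down to the fact that a Go interface value is two machine words, a type/method-table word and a data word, and an unsynchronized assignment updates them non-atomically, so a racing reader can pair one concrete type's method table with the other type's data. Below is a minimal sketch of that shape, with made-up type names; it is not a working exploit, whether a given run actually tears is compiler- and architecture-dependent, and go run -race flags it immediately.

    // Minimal sketch (not a working exploit) of the interface race pattern.
    // An interface assignment writes a type/itab word and a data word as two
    // separate stores, so the reading loop below can observe *usesPointer's
    // method table paired with a *usesInteger value, at which point get()
    // dereferences 0x41414141 as if it were a *int.
    package main

    type accessor interface{ get() int }

    type usesPointer struct{ p *int }    // get() dereferences p
    type usesInteger struct{ p uintptr } // p is just a number

    func (u *usesPointer) get() int { return *u.p }
    func (u *usesInteger) get() int { return int(u.p) }

    func main() {
        x := 7
        var v accessor = &usesPointer{&x}

        // Writer: flip v between the two concrete types with no synchronization.
        go func() {
            for i := 0; ; i++ {
                if i&1 == 0 {
                    v = &usesPointer{&x}
                } else {
                    v = &usesInteger{0x41414141}
                }
            }
        }()

        // Reader: on a torn read, a *usesPointer method runs on *usesInteger data.
        for {
            _ = v.get()
        }
    }

In real code the write would be incidental (two goroutines sharing a field of interface type) rather than a deliberate flip loop, which is part of why, as the thread notes, this pattern is rarely seen in the wild.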

amluto 5 days ago:

> when was the last time you heard of a modern application backend being exploited through memory corruption, in any language?

It happens all the time, but it's a bit hard to find because "modern application backend[s]" are usually written in Go or Python or Rust. Even so, you'll find plenty of exploits based on getting a C or C++ library on the backend to parse a malformed file.
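
The shape amluto describes typically looks something like the following: a memory-safe Go handler that hands untrusted bytes to a native parser via cgo. This is a hypothetical sketch; parse_record is a made-up stand-in for a real C or C++ parsing library, and its bug (trusting a length byte taken from the input itself) is deliberate. The handler is ordinary Go, and the corruption happens entirely on the C side.

    // Hypothetical sketch: a Go backend whose real attack surface is a C parser.
    package main

    /*
    #include <stddef.h>
    #include <string.h>

    // Stand-in for a real parsing library. It trusts a length byte read from
    // the input and never checks the capacity of the output buffer.
    static int parse_record(const char *buf, size_t n, char *out, size_t outcap) {
        (void)outcap;                             // the bug: outcap is ignored
        if (n < 1) return -1;
        size_t claimed = (unsigned char)buf[0];   // attacker-controlled length
        if (n - 1 < claimed) return -1;
        memcpy(out, buf + 1, claimed);            // overflows out if claimed > outcap
        return (int)claimed;
    }
    */
    import "C"

    import (
        "io"
        "net/http"
        "unsafe"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        body, _ := io.ReadAll(io.LimitReader(r.Body, 1<<16))
        if len(body) == 0 {
            http.Error(w, "empty body", http.StatusBadRequest)
            return
        }
        out := make([]byte, 64)
        // The Go side is memory-safe; the overflow happens inside the C call.
        C.parse_record(
            (*C.char)(unsafe.Pointer(&body[0])), C.size_t(len(body)),
            (*C.char)(unsafe.Pointer(&out[0])), C.size_t(len(out)),
        )
        w.WriteHeader(http.StatusOK)
    }

    func main() {
        http.HandleFunc("/upload", handler)
        http.ListenAndServe(":8080", nil)
    }

This is distinct from the pure-Go data races discussed elsewhere in the thread: here the unsafety is imported wholesale from the C dependency.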

tptacek 5 days ago:

There's lots of clientside Go, too!

comex 5 days ago:

Where? Within, as I said, "the type of consumer OS or client-side application that typically gets attacked". It has to be a component of either a big application or a big OS, or something with comparable scope. Otherwise it would not likely be targeted by real-world memory corruption attacks (that we hear about) no matter the language. At least that's my impression.

tptacek 5 days ago:

I'm sure I could come up with a bunch of examples but the first thing that jumps into my head is the Docker ecosystem.

comex 4 days ago:

Yeah, that's not nearly the level of big I was thinking of. It's not a browser or WhatsApp or Word.

Admittedly, Go is popular among developers. And there are some public examples of client-side attacks targeting developers and security researchers specifically. Such attacks could hypothetically go after something like Docker. But, searching now, every single example I can find seems to either exploit a non-developer-specific target (browser, iMessage, Acrobat), or else not exploit anything and just rely on convincing people to execute a Trojan (often by sending a codebase that executes the Trojan when you build it).

That bifurcation actually surprises me and I'm not sure what to conclude from it, other than "build systems are insecure by design". But at any rate, the lack of Go exploits doesn't say much if we don't see exploits of developer tools written in C either.

tptacek 4 days ago:

We routinely do see those exploits!

comex 3 days ago:

Are you talking about private examples or do you have one to share?

tptacek 3 days ago:

Sure, I mean, take for example git. More broadly: a lot of people here are mouthing off about how thread-safety issues make Go unsafe, but you're one of a small minority of commenters who could just find something and PoC it. How hard do you think that would be? I'd absolutely accept a controlled-environment server-side RCE.

ameliaquining 5 days ago:

I would say that Go is common in command-line developer tooling, which is sort of client-side, albeit a noncentral example of the category (since it includes tools for running servers and suchlike), and rare in all other client-side domains that I can think of.

afdbcreid 5 days ago:

I'm not sure that's correct. Yes, constructing these exploits takes enormous effort, but constructing exploits for C/C++ code is much, much easier and yields no less benefit, perhaps even more. So it makes sense that efforts are focused there. If and when most of the world's C/C++ code is gone, I assume we'll see more exploits of Go code.

lossolo 5 days ago:

I can show you a trivial PoC in C/C++ where someone opens a socket and ends up with a buffer overflow or UAF, both cases leading to memory corruption due to sloppy programming, and both easily exploitable for RCE. Can you show me any reasonable proof of concept (without using unsafe etc.) in Go that leads to similar memory corruption and is exploitable for RCE?

ameliaquining 5 days ago:

https://blog.stalkr.net/2022/01/universal-go-exploit-using-d...

This example hardcodes the payload, but (unless I've badly misunderstood how the exploit works) that's not necessary; it could instead be input from the network, and you wouldn't have to pass that input to any APIs that are marked unsafe. The payload is just hardcoded so that the example could be reproduced on the public Go Playground, which sandboxes the code it runs and so can't accept network input.

Note that what tptacek is asking for is more stringent than this; he wants a proof-of-concept exploitation of a memory-safety vulnerability caused by the data-race loopholes in the Go memory model, in a real program that someone is running in production. I do think it's interesting that nobody has demonstrated that yet, but I'm not sure what it tells us about how sure we can be that those vulnerabilities don't exist.

lossolo 5 days ago:

Yeah, it looks like a CTF-style PoC, not what I would call reasonable code by any measure: https://github.com/StalkR/misc/blob/master/go/gomium/exploit...

- The tight goroutine loop that flips one variable between two different struct types just to win a race is not something a typical developer writes on purpose.
- The trick to "defeat" compiler optimizations by assigning to a dummy variable inside an inline function.
- Carefully computing the address difference between two slices to reach out of bounds, then using that to corrupt another slice's header.
- Calling mprotect and jumping to shellcode is outright exploit engineering, not business logic, and it's not part of the attacker's payload.

The chances of this exact PoC pattern showing up in the wild by accident are basically zero.