floating-io 3 hours ago

You can have this problem with any kind of thread -- including OS threads -- if you do an unbounded spawn loop. Go is hardly unique in this.

Goroutines are actually better AFAIK because they distribute work on a thread pool that can be much smaller than the number of active goroutines.

If my quick skim gave me the right picture, then the problem here looks more like architecture. Put simply: does the memcached client really require a new TCP connection for every lookup? I would think you'd pool those connections just like you would for a typical database and keep them around approximately forever. Then they wouldn't have spammed memcached with so many connections in the first place...

(edit: ah, it looks like they do use a pool, but perhaps the pool does not have a bounded upper size, which is its own kind of fail.)

slopinthebag 2 hours ago | parent [-]

Rust's async doesn't have this issue. Or rather, it's the same issue as calling malloc in an unbounded loop, which is a general problem, not one specific to async or threading.

15-20 thousand futures would be trivial. 15-20 thousand goroutines, definitely not.

floating-io an hour ago | parent | next [-]

I don't know enough about Rust to confirm or deny that -- but unless Rust somehow limits the number of in-flight async operations, I don't see how it would help.

The problem is not resource usage in go. The problem is that they created umpteen thousand TCP connections, which is going to kill things regardless of the language.
