vandot 3 days ago

They didn’t use goroutines, which explains the poor perf. https://github.com/matttomasetti/Go-Gorilla_Websocket-Benchm...

Also, this paper is from Feb 2021.

windlep 3 days ago | parent | next

I was under the impression that the underlying net/http library uses a new goroutine for every connection, so each websocket gets its own goroutine. Or is there somewhere else you were expecting goroutines in addition to the one per connection?

donjoe 3 days ago | parent

Which is perfectly fine. However, you will only be able to process a single message per connection at a time.

What you would do in Go is:

- either spawn a new goroutine per message

- or install a worker pool with a fixed number of goroutines that accepts messages for processing (rough sketch below)
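A minimal sketch of the worker-pool option, assuming a *websocket.Conn from github.com/gorilla/websocket; handleMessage and the pool size are made-up placeholders, not taken from the benchmark code:

```go
package wsdemo

import (
	"log"

	"github.com/gorilla/websocket"
)

const numWorkers = 8 // assumed pool size; tune for the workload

// serveConn keeps a single read loop per connection (gorilla allows only one
// concurrent reader) and hands each message off to a fixed pool of workers.
func serveConn(conn *websocket.Conn) {
	jobs := make(chan []byte, 64)
	defer close(jobs) // stops the workers once the read loop exits

	for i := 0; i < numWorkers; i++ {
		go func() {
			for msg := range jobs {
				handleMessage(msg)
			}
		}()
	}

	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Println("read:", err)
			return
		}
		jobs <- msg
	}
}

// handleMessage is a hypothetical stand-in for the application's processing
// (in the benchmark, parsing JSON and sending a reply).
func handleMessage(msg []byte) {}
```

The first option (a new goroutine per message) would just be `go handleMessage(msg)` in the read loop instead of the pool, at the cost of unbounded goroutine growth under load.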

jand 3 days ago | parent

Another option is to have a read- and a write-pump goroutine associated with each gorilla ws client. I found this useful for gateways wss <--> *.
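Roughly like this, loosely following gorilla/websocket's chat example; Client and upstream are assumed names used here to illustrate the gateway idea, not code from any of the projects above:

```go
package wsdemo

import "github.com/gorilla/websocket"

type Client struct {
	conn *websocket.Conn // websocket leg of the gateway
	send chan []byte     // messages queued for delivery back to this client
}

// readPump is the only goroutine reading from conn (gorilla allows a single
// concurrent reader); it forwards every frame to the other side of the gateway.
func (c *Client) readPump(upstream chan<- []byte) {
	defer c.conn.Close()
	for {
		_, msg, err := c.conn.ReadMessage()
		if err != nil {
			return
		}
		upstream <- msg
	}
}

// writePump is the only goroutine writing to conn; it drains c.send, which
// the upstream side fills with responses.
func (c *Client) writePump() {
	defer c.conn.Close()
	for msg := range c.send {
		if err := c.conn.WriteMessage(websocket.TextMessage, msg); err != nil {
			return
		}
	}
}
```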

initplus 3 days ago | parent | prev

http.ListenAndServe is implemented under the hood with a new goroutine per incoming connection. You don't have to explicitly use goroutines here, it's the default behaviour.
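For reference, a minimal gorilla/websocket echo server (the /ws path and port are arbitrary choices for the example) — there is no explicit `go` statement in this code, yet every connection still runs on its own goroutine courtesy of net/http:

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // default options, assumed sufficient here

// echo is invoked by net/http on a fresh goroutine for each connection.
func echo(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()
	for {
		mt, msg, err := conn.ReadMessage()
		if err != nil {
			return
		}
		if err := conn.WriteMessage(mt, msg); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/ws", echo)
	// Each accepted connection is served on its own goroutine by net/http.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```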

necrobrit 3 days ago | parent

Yes, _however_, the Node.js benchmark at least is handling each message asynchronously, whereas the Go implementation is only handling connections asynchronously.

The client fires off all the requests before waiting for a response: https://github.com/matttomasetti/NodeJS_Websocket-Benchmark-... so the comparison isn't quite apples to apples.

Edit to add: looks like the same goes for the C++ and Rust implementations. So I think what we might be seeing in this benchmark (particularly Node vs C++, since it is the same library) is that asynchronously handling each message is beneficial, and that the Go standard library's JSON parser is slow.

Edit 2: Actually, I think the C++ version is async for each message! Don't know how to explain that, then.

josephg 2 days ago | parent

Well, TCP streams are purely sequential. It’s the ideal use case for a single process, since messages can’t be received out of order. There’s no computational advantage to “handling each message asynchronously” unless the message-handling code itself does I/O or something. And that’s not the responsibility of the websocket library.

necrobrit 2 days ago | parent

Good point!