jeffbee 7 months ago

Interesting that it is taken on faith that unix sockets are faster than inet sockets.

eqvinox 7 months ago | parent | next [-]

That's because it's logical that implementing network-capable segmentation and flow control is more costly than just moving data around with internal, native structures. And looking up random benchmarks yields anything from equal performance to 10x faster for Unix domain sockets.

bluGill 7 months ago | parent [-]

It wouldn't surprise me if inet sockets were more heavily optimized, though, so unix sockets end up slower anyway simply because nobody has bothered to make them good (which is probably why some of your benchmarks show equal performance). Benchmarks are important.

sgtnoodle 7 months ago | parent | next [-]

I've spent several years optimizing a specialized IPC mechanism for a work project, and I've spent time reviewing the Linux kernel's unix socket source code to understand obscure edge cases. There isn't really much to optimize - it's just copying bytes between buffers. Most of the complexity of the code has to do with permissions and implementing the ability to send file descriptors. All my benchmarks have unambiguously shown unix sockets to be more performant than loopback TCP for my particular use case.
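
A quick-and-dirty version of that kind of benchmark can be sketched in a few lines of Python; the numbers it prints depend heavily on message size, syscall batching, and kernel version, and the message size and iteration count below are arbitrary:

    # Unscientific throughput comparison: AF_UNIX pair vs. loopback TCP.
    import socket, threading, time

    MSG = b"x" * 4096
    COUNT = 100_000

    def sink(sock):
        # Drain exactly the number of bytes the sender will write.
        remaining = len(MSG) * COUNT
        while remaining:
            remaining -= len(sock.recv(65536))

    def bench(make_pair, label):
        a, b = make_pair()
        t = threading.Thread(target=sink, args=(b,))
        t.start()
        start = time.perf_counter()
        for _ in range(COUNT):
            a.sendall(MSG)
        t.join()
        secs = time.perf_counter() - start
        print(f"{label}: {COUNT * len(MSG) / secs / 1e6:.0f} MB/s")
        a.close(); b.close()

    def unix_pair():
        return socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

    def tcp_pair():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 0))
        srv.listen(1)
        cli = socket.create_connection(srv.getsockname())
        conn, _ = srv.accept()
        srv.close()
        return cli, conn

    bench(unix_pair, "AF_UNIX")
    bench(tcp_pair, "loopback TCP")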

eqvinox 7 months ago | parent | prev [-]

I agree, but practically speaking they're used en masse all across the field and people did bother to make them good [enough]. I suspect the benchmarks where they come up equal are cases where things are limited by other factors (e.g. syscall overhead), though I don't want to make unfounded accusations :)

yetanotherdood 7 months ago | parent | prev | next [-]

Unix Domain Sockets are the standard mechanism for app->sidecar communication at Google (e.g. talking to the TI envelope for logging, etc.)

jeffbee 7 months ago | parent | next [-]

Search around on Google Docs for my 2018 treatise/rant about how the TI Envelope was the least-efficient program anyone had ever deployed at Google.

eqvinox 7 months ago | parent | next [-]

Ok, now it sounds like you're blaming unix sockets for someone's shitty code...

No idea what "TI Envelope" is, and a Google search doesn't come up with usable results (oh the irony...) - if it's a logging/metric thing, those are hard to get to perform well regardless of socket type. We ended up using batching with mmap'd buffers for crash analysis. (I.e. the mmap part only comes in if the process terminates abnormally, so we can recover batched unwritten bits.)
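
Roughly the shape of it, as a heavily simplified Python sketch (not our actual code; path, framing, and sizes are made up): records are appended into a file-backed mmap, and if the process dies before they're shipped, a recovery tool can mmap the same file, read the committed offset, and replay the framed records found before it.

    import mmap, os, struct

    BUF_SIZE = 1 << 20
    fd = os.open("/tmp/crash-batch.buf", os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, BUF_SIZE)
    buf = mmap.mmap(fd, BUF_SIZE)

    write_pos = 8  # bytes 0..7 hold the committed end offset

    def append(record: bytes):
        global write_pos
        frame = struct.pack("<I", len(record)) + record
        buf[write_pos:write_pos + len(frame)] = frame
        write_pos += len(frame)
        buf[0:8] = struct.pack("<Q", write_pos)  # commit the offset last

    append(b"metric batch #1")
    append(b"metric batch #2")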

jeffbee 7 months ago | parent [-]

> Ok, now it sounds like you're blaming unix sockets for someone's shitty code...

No, I am just saying that the unix socket is not Brawndo (or maybe it is?); it does not necessarily have what IPCs crave. Sprinkling it into your architecture may or may not be relevant to the efficiency and performance of the result.

eqvinox 7 months ago | parent [-]

Sorry, what's brawndo? (Searching only gives me movie results?)

We started out discussing AF_UNIX vs. AF_INET6. If you can conceptually use something faster than sockets that's great, but if you're down to a socket, unix domain will generally beat inet domain...

sgtnoodle 7 months ago | parent | next [-]

You can do some pretty crazy stuff with pipes, if you want to do better than unix sockets.

zbentley 7 months ago | parent [-]

Sure, but setting up a piped session with a pre-existing sidecar daemon can be complicated. You either end up using named pipes (badly behaved clients can mess up other clients’ connections, one side has to do weird filesystem polling/watching for its accept(2) equivalent), or unnamed pipes via a Unix socket with fdpass (which needs careful handling to not mess up, and you’re using a Unix socket anyway, so why not use it for data instead?).
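
To illustrate the fdpass variant (Python 3.9+ wraps the SCM_RIGHTS sendmsg/recvmsg dance in send_fds/recv_fds; both ends live in one process here just to keep the sketch short, in real life the client and sidecar are separate processes):

    import os, socket

    # Pre-existing Unix socket connection between client and sidecar.
    client, sidecar = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

    r, w = os.pipe()  # the unnamed pipe we actually want the sidecar to read
    socket.send_fds(client, [b"take this read end"], [r])

    msg, fds, flags, addr = socket.recv_fds(sidecar, 1024, 1)
    passed_r = fds[0]

    os.write(w, b"data flows over the pipe, not the socket\n")
    print(os.read(passed_r, 64))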

exe34 7 months ago | parent | prev [-]

it's what plants crave! it's got electrolytes.

yetanotherdood 7 months ago | parent | prev [-]

I'm a xoogler so I don't have access. Do you have a TL;DR that you can share here (for non-Googlers)?

ithkuil 7 months ago | parent | prev [-]

Servo's ipc-channel doesn't use Unix domain sockets to move data; it uses them to share a memfd file descriptor, effectively creating a memory buffer shared between the two processes.
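
Something like this, sketched in Python rather than Servo's actual Rust (Linux-only for memfd_create, Python 3.9+ for send_fds/recv_fds; names and sizes are illustrative):

    import mmap, os, socket, struct

    SIZE = 4096
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

    fd = os.memfd_create("ipc-buffer")          # anonymous memory, Linux-only
    os.ftruncate(fd, SIZE)
    sender_view = mmap.mmap(fd, SIZE)
    sender_view[:13] = b"hello, shmem!"

    # Only the descriptor crosses the socket; the payload never does.
    socket.send_fds(a, [struct.pack("<I", SIZE)], [fd])

    msg, fds, _, _ = socket.recv_fds(b, 16, 1)
    receiver_view = mmap.mmap(fds[0], struct.unpack("<I", msg[:4])[0])
    print(receiver_view[:13])                   # b'hello, shmem!'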

dangoodmanUT 7 months ago | parent | prev | next [-]

Are there resources suggesting otherwise?

pjmlp 7 months ago | parent | prev | next [-]

As often in computing, profiling is a foreign word.

aoeusnth1 7 months ago | parent | prev [-]

Tell me more, I know nothing about IPC