jeffbee 6 days ago

Interesting that it is taken on faith that unix sockets are faster than inet sockets.

eqvinox 5 days ago | parent | next [-]

That's because it's logical that implementing network-capable segmentation and flow control is more costly than just moving data with internal, native structures. And looking up random benchmarks yields anything from equal performance to 10x faster for Unix domain sockets.

bluGill 5 days ago | parent [-]

It wouldn't surprise me, though, if inet sockets were more heavily optimized and unix sockets ended up slower anyway, just because nobody has bothered to make them good (which is probably why some of your benchmarks show equal performance). Benchmarks are important.
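
For reference, a minimal shape such a benchmark could take: ping-pong a small message over an AF_UNIX socketpair, then over loopback TCP, and compare. Everything below is illustrative only, not a rigorous methodology.

    /* Illustrative ping-pong latency sketch: time N small-message round
       trips over an AF_UNIX socketpair; the TCP side would connect two
       sockets over 127.0.0.1 and reuse ping_pong() (setup omitted).
       No warmup, pinning, or percentiles - not a rigorous benchmark. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    static double ping_pong(int a, int b, int iters) {
        char buf[64] = {0};
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        for (int i = 0; i < iters; i++) {
            write(a, buf, sizeof buf);   /* request */
            read(b, buf, sizeof buf);
            write(b, buf, sizeof buf);   /* reply */
            read(a, buf, sizeof buf);
        }
        gettimeofday(&t1, NULL);
        return (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    }

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        printf("unix: %.2f us/rt\n",
               ping_pong(sv[0], sv[1], 100000) / 100000);
        return 0;
    }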

sgtnoodle 5 days ago | parent | next [-]

I've spent several years optimizing a specialized IPC mechanism for a work project. I've spent time reviewing the Linux kernel's unix socket source code to understand obscure edge cases. There isn't really much to optimize - it's just copying bytes between buffers. Most of the complexity of the code has to do with permissions and implementing the ability to send file descriptors. All my benchmarks have unambiguously shown unix sockets to be more performant than loopback TCP for my particular use case.
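
For the curious, the fd-passing part that accounts for much of that complexity is the SCM_RIGHTS ancillary-data path. A minimal sender-side sketch (my own, with error handling trimmed):

    /* Minimal sketch: pass an open file descriptor over a connected
       AF_UNIX socket using SCM_RIGHTS ancillary data. Error handling
       trimmed; 'sock' must be a unix-domain socket. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int sock, int fd_to_send) {
        char dummy = 'x';                        /* must send >= 1 byte */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {                                  /* aligned cmsg buffer */
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;

        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }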

eqvinox 5 days ago | parent | prev [-]

I agree, but practically speaking they're used en masse all across the field and people did bother to make them good [enough]. I suspect the benchmarks where they come up equal are cases where things are limited by other factors (e.g. syscall overhead), though I don't want to make unfounded accusations :)

yetanotherdood 5 days ago | parent | prev | next [-]

Unix Domain Sockets are the standard mechanism for app->sidecar communication at Google (ex: Talking to the TI envelope for logging etc.)

jeffbee 5 days ago | parent | next [-]

Search around on Google Docs for my 2018 treatise/rant about how the TI Envelope was the least-efficient program anyone had ever deployed at Google.

eqvinox 5 days ago | parent | next [-]

Ok, now it sounds like you're blaming unix sockets for someone's shitty code...

No idea what "TI Envelope" is, and a Google search doesn't come up with usable results (oh the irony...) - if it's a logging/metric thing, those are hard to get to perform well regardless of socket type. We ended up using batching with mmap'd buffers for crash analysis. (I.e. the mmap part only comes in if the process terminates abnormally, so we can recover batched unwritten bits.)
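
Roughly this pattern, sketched with made-up names (not our actual code): append log records into a file-backed mmap region instead of write()ing each one, so an abnormal exit leaves the unflushed batch recoverable in the file.

    /* Sketch only: batched records land in a file-backed mapping; if the
       process dies before a normal flush, the pages survive in the file
       and a post-mortem tool can recover them. Errors unchecked. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BATCH_BYTES (1 << 20)

    struct batch { char *base; size_t used; };

    int batch_open(struct batch *b, const char *path) {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        ftruncate(fd, BATCH_BYTES);
        b->base = mmap(NULL, BATCH_BYTES, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        b->used = 0;
        close(fd);                    /* mapping keeps the file reachable */
        return b->base == MAP_FAILED ? -1 : 0;
    }

    void batch_append(struct batch *b, const void *rec, size_t len) {
        if (b->used + len <= BATCH_BYTES) {
            memcpy(b->base + b->used, rec, len);  /* lands in page cache */
            b->used += len;
        }
    }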

jeffbee 5 days ago | parent [-]

> Ok, now it sounds like you're blaming unix sockets for someone's shitty code...

No, I am just saying that the unix socket is not Brawndo (or maybe it is?): it does not necessarily have what IPCs crave. Sprinkling it into your architecture may or may not be relevant to the efficiency and performance of the result.

eqvinox 5 days ago | parent [-]

Sorry, what's brawndo? (Searching only gives me movie results?)

We started out discussing AF_UNIX vs. AF_INET6. If you can conceptually use something faster than sockets that's great, but if you're down to a socket, unix domain will generally beat inet domain...

sgtnoodle 5 days ago | parent | next [-]

You can do some pretty crazy stuff with pipes, if you want to do better than unix sockets.
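
For example, splice(2) can move pages between a pipe and another fd without the payload ever crossing into userspace. A rough sketch (Linux-only, error handling trimmed):

    /* Rough sketch: shuttle bytes from one fd to another through a pipe
       with splice(2), avoiding a copy through userspace buffers. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    ssize_t relay(int in_fd, int out_fd, size_t len) {
        int p[2];
        if (pipe(p) < 0)
            return -1;

        ssize_t n = splice(in_fd, NULL, p[1], NULL, len, SPLICE_F_MOVE);
        if (n > 0)
            n = splice(p[0], NULL, out_fd, NULL, (size_t)n, SPLICE_F_MOVE);

        close(p[0]);
        close(p[1]);
        return n;
    }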

exe34 5 days ago | parent | prev [-]

it's what plants crave! it's got electrolytes.

yetanotherdood 5 days ago | parent | prev [-]

I'm a xoogler so I don't have access. Do you have a TL;DR that you can share here (for non-Googlers)?

ithkuil 5 days ago | parent | prev [-]

Servo's ipc-channel doesn't use Unix domain sockets to move data. It uses them to share a memfd file descriptor, effectively creating a memory buffer shared between the two processes.
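
Roughly this shape (my own sketch, not ipc-channel's actual code): create a memfd, size it, mmap it, and ship the descriptor to the peer over the unix socket.

    /* Sketch of the shared-memory-over-unix-socket pattern (not
       ipc-channel's actual code): the peer recvmsg()s the fd and mmaps
       it the same way, so both sides share the pages. Linux-only;
       errors unchecked. */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    extern int send_fd(int sock, int fd);  /* SCM_RIGHTS sender, as above */

    void *make_shared_region(int unix_sock, size_t size) {
        int fd = memfd_create("ipc-buffer", MFD_CLOEXEC);
        ftruncate(fd, size);
        void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

        send_fd(unix_sock, fd);  /* hand the descriptor to the peer */
        close(fd);               /* mapping and peer's copy keep it alive */
        return buf;
    }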

dangoodmanUT 6 days ago | parent | prev | next [-]

Are there resources suggesting otherwise?

pjmlp 5 days ago | parent | prev | next [-]

As is so often the case in computing, profiling is a foreign word.

aoeusnth1 5 days ago | parent | prev [-]

Tell me more, I know nothing about IPC