epicprogrammer 6 hours ago
It’s an interesting throwback to SEDA, but physically passing file descriptors between cores as a connection changes state is usually a performance killer on modern hardware. While it sounds elegant on a whiteboard to have a dedicated 'accept' core and a 'read' core, you end up trading a slightly simpler state machine for massive L1/L2 cache thrashing: every time you hand off that connection, you invalidate the buffers and TCP state you just built up. There’s a reason the industry largely settled on shared-nothing architectures like NGINX’s: having a single pinned thread handle the entire lifecycle of a request keeps all that data strictly local to the CPU cache. When you're trying to scale, respecting data locality almost always beats pipeline cleanliness.
kev009 4 hours ago | parent | next
Well, kernels have grown some support for steering accept() to worker threads directly, e.g. SO_REUSEPORT (Linux) / SO_REUSEPORT_LB (FreeBSD).
toast0 6 hours ago | parent | prev | next
You could presumably have an acceptor thread per core, which passes the fds to a core-aligned next thread, etc. That would get you the code-simplicity benefits the article suggests while keeping the socket bound to a single core, which is definitely needed. Depending on whether you actually need to share anything, you could do process per core, thread per loop, and you'd have no core-to-core communication from the usual workings of the process (I/O may cross, though).
vlovich123 4 hours ago | parent | prev
While I agree that shared-nothing beats the pants off shared state performance-wise, surely the penalty you've outlined only applies to very short-lived connections? For longer-lived connections the cache is going to thrash on an inevitable context switch anyway (either due to waiting for more I/O or normal preemption). As long as processing of the I/O stays on a given core, I don't know that there's actually such a huge benefit. A single pinned thread for the entire lifecycle also has the problem that you get latency bottlenecks under load, where two CPU-heavy requests end up contending for the same core, versus work stealing making use of available compute.

The ultimate benefit would be if you could arrange for each core to be given a dedicated NIC. Then the interrupts for the NIC arrive on the core that's processing each packet. Otherwise you're already going to wake up on a random core and do a cross-core delivery of the I/O data anyway.

TLDR: It's super complex to get a truly shared-nothing approach unless you have a single application and you correctly allocate the work. It's really hard to solve generically and optimally for all possible combinations of request and processing patterns.