| ▲ | js4ever 2 days ago |
| I don't think so, but my guess is that raw performance rarely matters in the real world. I once explored this, hitting around 125K RPS per core on Node.js. Then I realized it was pointless: the moment you add any real work (database calls, file I/O, etc.), throughput drops below 10K RPS. |
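Back-of-envelope on why the drop is so steep (the per-request costs here are illustrative assumptions, not from the comment): 125K RPS per core is a CPU budget of 1s / 125,000 ≈ 8 µs per request. If a database call adds even ~100 µs of per-request CPU (driver, serialization, syscalls), the budget becomes ~108 µs and the ceiling falls to 1s / 108 µs ≈ 9K RPS, no matter how fast the bare HTTP path is.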
|
| ▲ | rivetfasten 2 days ago | parent | next [-] |
| It's always a matter of chasing the bottleneck. It's fair to say that the network isn't the bottleneck for most applications. Heuristically, if you're willing to take on the performance impact of a GC'd language, you're probably already not the target audience. Zero copy is the important part for applications that need to saturate the NIC. For example, Netflix integrated encryption into the FreeBSD kernel so they could use sendfile for zero-copy transfers from SSD (in the case of very popular titles) to a TLS stream. Otherwise they would have had two extra copies of every block of video just to encrypt it. Note, however, that their actual streaming stack is very different from the application stack. The constraint isn't strictly technical: ISP colocation space is expensive, so they need the most juiced machines they can possibly fit in the rack to control costs. There's an obvious appeal to accomplishing zero copy by pushing network functionality into user space instead of application functionality into kernel space, so the DPDK evolution is natural. |
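A minimal sketch of that path, using the Linux kTLS API rather than Netflix's FreeBSD implementation (which differs in detail); the TLS handshake still happens in user space, and the key material below is a placeholder:

    /* sendfile() over in-kernel TLS: page cache -> encrypt -> NIC,
       with plaintext never crossing into user space. Error handling
       elided; this is a sketch, not Netflix's actual stack. */
    #include <linux/tls.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/sendfile.h>
    #include <sys/socket.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282               /* kernel TLS socket level */
    #endif

    static int enable_ktls_tx(int sock)
    {
        /* Attach the kernel TLS ULP to an established TCP socket. */
        if (setsockopt(sock, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;

        struct tls12_crypto_info_aes_gcm_128 ci;
        memset(&ci, 0, sizeof(ci));
        ci.info.version = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        /* ci.key / ci.iv / ci.salt / ci.rec_seq: copied from the
           user-space TLS handshake (placeholders here). */
        return setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci));
    }

    static ssize_t send_video_block(int sock, int filefd, off_t *off, size_t len)
    {
        return sendfile(sock, filefd, off, len);  /* encrypts in-kernel */
    }

Without this, each block would be read() up into user space, encrypted, then written back down: the two extra copies mentioned above.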
| |
| ▲ | pclmulqdq 2 days ago | parent [-] | | TCP is generally zero-copy now. Zero-copy with io_uring is also possible, and AF_XDP is another way to do high-performance networking in the kernel; it's not bad. DPDK still has a ~30% advantage over an optimized kernel-space application, but it comes with a huge maintenance burden. A lot of people reach for it, though, without optimizing the kernel interfaces first. |
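"TCP is generally zero-copy now" presumably refers to Linux's MSG_ZEROCOPY transmit path (an assumption; the comment doesn't name it). A minimal sketch:

    /* Zero-copy TCP transmit with MSG_ZEROCOPY (Linux 4.14+): the
       kernel pins the user pages instead of copying them, and posts a
       completion on the socket error queue. Error handling elided. */
    #include <linux/errqueue.h>
    #include <sys/socket.h>

    static void send_zerocopy(int sock, const void *buf, size_t len)
    {
        int one = 1;
        setsockopt(sock, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));

        /* buf must not be reused until the completion arrives. */
        send(sock, buf, len, MSG_ZEROCOPY);

        /* Reap the completion from the error queue. */
        char control[128];
        struct msghdr msg = { .msg_control = control,
                              .msg_controllen = sizeof(control) };
        if (recvmsg(sock, &msg, MSG_ERRQUEUE) >= 0) {
            struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
            if (cm) {
                struct sock_extended_err *serr =
                    (struct sock_extended_err *)CMSG_DATA(cm);
                /* serr->ee_info..ee_data: range of completed sends;
                   SO_EE_CODE_ZEROCOPY_COPIED means the kernel fell
                   back to copying (e.g. the send was too small). */
                (void)serr;
            }
        }
    }

The io_uring counterpart is the IORING_OP_SEND_ZC request (kernel 6.0+), which delivers the same style of deferred completion through the ring instead of the error queue.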
|
|
| ▲ | antoinealb 2 days ago | parent | prev | next [-] |
| The goal of this kind of system is not to replace the application server. It is intended to work on the data plane, where you do simple operations but do them many times per second. Think load balancers, cache servers, routers, security appliances, etc. In this space, kernel bypass is still very much the norm if you want an efficient system. |
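The shape of such a data-plane program is a run-to-completion busy-poll loop. A stripped-down DPDK skeleton (EAL, port, and mempool setup elided; the per-packet work is a placeholder):

    /* One lcore's forwarding loop: no interrupts, no syscalls, just
       polling the NIC queues. This is the canonical DPDK shape for a
       load balancer / cache / appliance data plane. */
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 32

    static void lcore_loop(uint16_t rx_port, uint16_t tx_port)
    {
        struct rte_mbuf *pkts[BURST];

        for (;;) {
            uint16_t n = rte_eth_rx_burst(rx_port, 0, pkts, BURST);

            for (uint16_t i = 0; i < n; i++) {
                /* the "simple operation done many times per second":
                   rewrite a header, consult a flow table, etc. */
            }

            uint16_t sent = rte_eth_tx_burst(tx_port, 0, pkts, n);
            while (sent < n)              /* free what the NIC refused */
                rte_pktmbuf_free(pkts[sent++]);
        }
    }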
| |
| ▲ | eqvinox 2 days ago | parent | next [-] | | > In this space, kernel bypass is still very much the norm if you want an efficient system. Unless you can get an ASIC to do it; then the ASIC is massively preferable, and the power savings alone generally¹ end the discussion. (That removes most routers from the list, plus some security appliances and load balancers.) ¹ exceptions prove the rule, i.e. small/boutique setups | | |
| ▲ | gonzopancho 2 days ago | parent [-] | | ASICs require years to develop and aren’t flexible once deployed | | |
| ▲ | eqvinox 2 days ago | parent | next [-] | | You don't develop an ASIC to run a router with; you buy one off the shelf. And the function of a router doesn't exactly change day by day (or even year by year). | |
| ▲ | nsteel 15 hours ago | parent | next [-] | | My colleagues are always writing new features for our edge and core router ASICs released more than 10 years ago. They ship new software versions multiple times a year. It is highly specialised work, and the customer requesting the feature has to be big enough to make it worthwhile, but our silicon is flexible enough to avoid off-loading to slow CPUs in many cases. You get what you pay for. |
| ▲ | ZephyrP a day ago | parent | prev [-] | | Change keeps coming, even when the wire format of a protocol has ossified. I've spent years in security and router performance at Cisco, and wrote a respectable fraction of the flagship's L3 and L2-L3 (tun) firewall. I merged a patch on this tried-and-true firewall just this year; it's now deployed. As vendors are eager to remind us, custom silicon to accelerate everything from L1 to L7 exists. That said, it is still the case in 2025 that the "fast path" data plane will end up passing either nothing or everything in a flow to the "slow path" control plane, where the most significant silicon is less 'ASIC' and more 'aarch64'. This is all to say that the GP's comments are broadly correct. | | |
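A cartoon of that fast-path/slow-path split (every name here is illustrative, not any vendor's API):

    #include <stddef.h>

    struct packet     { unsigned char data[1500]; size_t len; };
    struct flow_entry { int out_port; /* verdict, counters, ... */ };

    /* Stubs standing in for the real data plane and control plane. */
    static struct flow_entry *flow_lookup(const struct packet *p) { (void)p; return 0; }
    static void fast_forward(struct packet *p, struct flow_entry *e) { (void)p; (void)e; }
    static void punt_to_control_plane(struct packet *p) { (void)p; }

    static void rx(struct packet *p)
    {
        struct flow_entry *e = flow_lookup(p);
        if (e)
            fast_forward(p, e);         /* known flow: stays in silicon */
        else
            punt_to_control_plane(p);   /* first packet of a flow, or an
                                           exception: goes to the general
                                           purpose cores, which may then
                                           install a flow entry */
    }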
| |
| ▲ | nsteel 2 days ago | parent | prev [-] | | Even the ones supporting things like P4? |
|
| |
| ▲ | baruch 2 days ago | parent | prev [-] | | We do storage systems and use DPDK in the application; when the network IS the bottleneck, it is worth it. Saturating two or three 400 Gbps NICs is possible with DPDK and an architecture that makes the network the bottleneck. |
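For scale (illustrative arithmetic, not the poster's numbers): three 400 Gbps NICs is 150 GB/s of line rate; at a typical 128 KB storage I/O size, that is roughly 1.1M I/Os per second the software has to keep in flight, every second, per server.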
|
|
| ▲ | jandrewrogers 2 days ago | parent | prev [-] |
| Storage and databases don't have to be that slow; that's just architecture. I have database servers doing 10M RPS each, which absolutely will stress the network. We just do the networking bits a bit differently now. DPDK was a product of its time. |
| |