| ▲ | StillBored 2 days ago | |
I've got one of those N100 + 10Gbit router devices with a handful of ports. It seems a pretty reasonable device with one of the router distros running on it, but it doesn't seem nearly as efficient as my ucg-fiber/route10 devices, and that wouldn't bother me except that I suspect the packet latency is significantly higher too. Those devices AFAIK have hardware-programmable router chips, which means the forwarding is done entirely without the main CPU's involvement, so there aren't any interrupt/polling/etc. delays when a packet arrives: the header gets rewritten, the checksum verified, and off it goes. Has anyone actually measured this? I see a lot of bandwidth-style tests, but few that can show the actual impact of enabling/disabling deep packet inspection and a few of the other metrics that I actually care about. ServeTheHome seems to have gotten some fancy test HW, but they don't seem to be running these kinds of tests yet. | ||
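For what it's worth, a crude way to measure this yourself is to ping through the router with a feature toggled on, then off, and compare the average RTTs. A minimal sketch (the gateway address, sample count, and which feature you toggle are all assumptions you'd fill in for your own network):

```shell
#!/bin/sh
# Pull the average RTT (ms) out of ping's summary line, which looks like:
#   rtt min/avg/max/mdev = 0.312/0.401/0.650/0.071 ms
# (older/BSD pings print "round-trip" instead of "rtt")
parse_avg() {
  awk -F'/' '/^rtt|^round-trip/ {print $5}'
}

# Typical use (192.168.1.1 standing in for the router's LAN address):
#   ping -c 100 -i 0.2 -q 192.168.1.1 | parse_avg
# Run once with DPI enabled and once disabled, then compare the two numbers.
```

It's not a proper one-way forwarding-latency measurement like dedicated test gear would give you, but it's usually enough to see whether flipping a feature knocks you off the fast path.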
| ▲ | rayiner 18 hours ago | parent | next [-] | |
The hardware-based routers have low latency. Fortigate advertises under 5 usec forwarding latency for its routers; Linux kernel forwarding is on the order of tens of usec. However, under 100 usec of latency is negligible over a WAN link, where you're talking ~5 msec latency even on a fast fiber link. The downside of hardware routing is the lack of flexibility and some performance cliffs. On consumer-grade hardware routers in particular, connection setup is handled by a low-power ARM CPU, there are limits on the number of flows you can accelerate in hardware at a time, etc. I've got a 10G fiber connection, and I swapped out a Fortigate 100F for a server running VyOS. I had performance problems because the 10G-to-1G transition caused dropped packets at the switch. I was able to solve it by shaping the traffic to the 1G devices so the queuing happens in the router, which is something this particular Fortigate can't do. (High-end routers have algorithms like WRED designed to get TCP to behave nicely on 10G-to-1G drops, but I don't want the noise of a Cisco in my basement.) | ||
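For anyone wanting to do the same, the shaping looks roughly like this in VyOS 1.3-style syntax (a sketch; the policy name, interface, and rate are assumptions, and VyOS 1.4 moved this under `qos policy` with slightly different syntax):

```shell
# Shape egress toward the 1G segment to just under line rate, so queuing
# (and any drops) happen in the router's qdisc rather than at the switch.
set traffic-policy shaper to-1g bandwidth 950mbit
set traffic-policy shaper to-1g default bandwidth 100%
set traffic-policy shaper to-1g default queue-type fq-codel
# eth1 here is assumed to be the interface facing the 1G devices
set interfaces ethernet eth1 traffic-policy out to-1g
```

Shaving a few percent off line rate is what forces the bottleneck queue into the router, where fq_codel can manage it, instead of the switch's tiny tail-drop buffer.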
| ▲ | zrail a day ago | parent | prev | next [-] | |
From what I can tell you're pretty much right. A Linux bridge cannot possibly be as efficient or speedy as a dedicated switch ASIC. OpenWrt has support for a few different hardware-switch kernel APIs, but you can't exactly buy one of those on a PCIe card, and I've never seen one of those N100-class boards with one instead of a set of i226 Ethernet controllers taking most of the PCIe lanes. Mikrotik sells the CCR2004-1G-2XS-PCIe, which is a fascinating device: https://mikrotik.com/product/ccr2004_1g_2xs_pcie It is a full Mikrotik router stripped down to just a board and hung off a PCIe interface. IIRC, by default it exposes a virtual gigabit interface to the host and otherwise acts exactly like a CCR2004 running RouterOS. It doesn't really buy you anything vs. an RB5009 unless you can use the pair of 25Gbps ports, but it sure is neat. | ||
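Right — on those N100 boards every frame crosses the CPU, because igc (the i226 driver) is a plain NIC driver with no switchdev offload, while switch ASICs register with the kernel's switchdev API and forward in hardware. A toy illustration of the distinction (the driver lists are a partial assumption, for illustration only):

```shell
#!/bin/sh
# Given an ethtool driver name, guess whether bridging on that port is
# offloaded to a switch ASIC via switchdev or done in software by the
# kernel bridge. The driver lists are illustrative, not exhaustive.
is_offloaded() {
  case "$1" in
    mlxsw_spectrum|prestera|ocelot|sparx5) echo "hardware (switchdev)" ;;
    igc|igb|e1000e|r8169)                  echo "software (CPU touches every frame)" ;;
    *)                                     echo "unknown" ;;
  esac
}

# In practice you'd feed it the real driver name, e.g.:
#   is_offloaded "$(ethtool -i eth0 | awk '/^driver:/ {print $2}')"
```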
| ▲ | HexPhantom 21 hours ago | parent | prev [-] | |
It's less about "hardware is always lower latency" and more about when the fast path stays enabled vs. when you fall off it. | ||