| ▲ | mgaunard 6 hours ago |
| Are those performance measurements meant to be impressive? They seem on par with something thrown together in Python in 5 minutes. |
|
| ▲ | dang 6 hours ago | parent | next [-] |
| Please don't be a jerk or put down others' work on HN. That's not the kind of site we're trying to be. You're welcome to make your substantive points thoughtfully, of course. https://news.ycombinator.com/newsguidelines.html https://news.ycombinator.com/showhn.html |
| |
| ▲ | mgaunard 6 hours ago | parent [-] | | Pointing out facts is not being a jerk. If you don't want feedback, don't solicit it. Also, if you disapprove, downmodding is enough; you don't need to start a meta-discussion thread, which is itself a discouraged practice. |
|
|
| ▲ | Aydarbek 6 hours ago | parent | prev [-] |
| Totally fair: if this were a single-node HTTP handler on localhost, then yeah, you can hit big numbers quickly in many stacks. The point of these numbers is the envelope: 3-node consensus (Raft), a real network (not loopback), and sync-majority writes (ACK after 2/3 replicas), plus the crash/recovery semantics (SIGKILL → restart → offsets/data still there). If you have a quick Python setup that does majority-acked replication + fast crash recovery with similar measurements, I'd honestly love to compare apples to apples; happy to share the exact scripts/config and run under the same test conditions. |
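To make the envelope concrete: with sync-majority writes, the leader can only ACK once a quorum of replicas has durably acknowledged, so per-write latency tracks the quorum-th fastest replica, not the fastest one. A minimal sketch of that latency model (the replica latencies below are made-up illustrative figures, not measurements from this project):

```python
# Toy model of sync-majority replication latency.
# With 3 replicas and a quorum of 2, the leader ACKs the client once the
# second-fastest replica has durably acknowledged the write, so the slow
# tail of one node is hidden but the median replica still gates latency.

def majority_commit_latency(replica_latencies_ms, quorum=2):
    """Time until `quorum` replicas have acked: the quorum-th smallest
    of the individual replica latencies."""
    return sorted(replica_latencies_ms)[quorum - 1]

# Hypothetical example: one fast local fsync, two networked followers.
latencies = [0.5, 2.0, 9.0]  # milliseconds, assumed numbers
print(majority_commit_latency(latencies))  # gated by the 2nd replica
```

This is why a 3-node consensus write can't be compared directly to a localhost HTTP echo: the floor is set by network round-trip plus durable write on a majority, not by a single handler's throughput.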
| |
| ▲ | mgaunard 6 hours ago | parent [-] | | Good NICs get data out in a microsecond or two. That's still off by quite an order of magnitude, but that could be down to the network topology in question. | | |
| ▲ | hedgehog 5 hours ago | parent [-] | | Durable consensus means this is waiting for confirmed write to disk on a majority of nodes, it will always be much slower than the time it takes a NIC to put bits on the wire. That's the price of durability until someone figures out a more efficient way. | | |
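The distinction hedgehog is drawing is between a buffered write (which returns as soon as data lands in the page cache) and a durable one (which waits for fsync, i.e. for the device to confirm persistence). A rough sketch of measuring that gap; absolute numbers depend entirely on the device, filesystem, and virtualization layer, and on cloud block storage the durable path is typically far slower than the oft-quoted bare-metal NVMe figures:

```python
import os
import tempfile
import time

def timed_append(path, data, durable):
    """Append `data` to `path` and return the elapsed seconds.
    With durable=True, fsync before returning, which is what a
    consensus log must do before acking a write."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        t0 = time.perf_counter()
        os.write(fd, data)
        if durable:
            os.fsync(fd)  # block until the device confirms persistence
        return time.perf_counter() - t0
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "log")
    buffered = timed_append(path, b"x" * 4096, durable=False)
    durable = timed_append(path, b"x" * 4096, durable=True)
    print(f"buffered={buffered * 1e6:.0f}us durable={durable * 1e6:.0f}us")
```

Run on the actual deployment target (e.g. a droplet's virtual disk) rather than a laptop NVMe drive, since that's the fsync a quorum write actually waits on.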
| ▲ | mgaunard 4 hours ago | parent [-] | | A NVMe disk write is 20 microseconds. | | |
| ▲ | hedgehog an hour ago | parent [-] | | I'm not sure if you're going out of your way to be a dick or just obtuse, but 1) that's not true of most SSDs, 2) there's overhead from all the indirection on a Digital Ocean droplet, and 3) this is obviously a straightforward user-space implementation that's going to have all kinds of scheduler overhead. I'm not sure who it's for, but it seems to make some reasonable trades for simplicity. |
|
|
|
|