dapperdrake | 4 hours ago |
First of all, thank you for presenting a succinct take on this viewpoint from the other side of the fence from mine. So how can I learn from this? (Asking very aggressively, especially for Internet writing, to make the contrast unmistakable; contrast helps with perceiving differences and mistakes. You also don't owe me any of your time or mental bandwidth whatsoever.) So here goes:

Question 1: How come "speed", "performance", race conditions, and st_ino keep getting brought up? Speed (latency), physically writing things out to storage (sequentially, atomically (ACID), on any of HDD, NVMe SSD, ODD, FDD, tape; "Haskell monad", event horizons, the finite speed of light and information, whatever), as well as race conditions, all seem to boil down to the same thing. For reliable systems like accounting, the path seems to be ACID or the highway. And "unreliable" systems forget fast enough that computers don't seem to really make a difference there.

Question 2: Does throughput really matter more than latency in everyday applications?

Question 3 (explanation first, this time): The focus on inode numbers is at least understandable given the history of C, unix-like operating systems, and GNU coreutils. But what about this basic example? Just make a USB thumb drive "work" for storing files (ignoring NAND flash decay and USB itself), without getting tripped up by libc I/O buffering, fflush, kernel buffering (Hurd, if you prefer it over Linux or FreeBSD), or more than one application running on a multi-core and/or time-sliced system (to really weed out single-core CPUs running only a single user-land binary with blocking I/O).
dijit | an hour ago | parent |
> Does throughput really matter more than latency in everyday application?

In my experience, latency and throughput are intrinsically linked unless you have the buffer space to handle the throughput you want, which you can't guarantee on all the systems where GNU coreutils run.
awesome_dude | 19 minutes ago | parent |
> Question 2: Does throughput really matter more than latency in everyday application?

IME as a user, hell yes. When getting a video, I don't mind if it buffers for a moment, but once it starts I need all of that data moving to my player as quickly as possible. OTOH, if there's no wait but the data rate is restricted (the amount coming to my player is less than what the player needs to fully render the images), the video is "unwatchable".