anonymoushn 5 days ago

It is frustrating that the post opens by describing latency and then calling it throughput.

littlestymaar 5 days ago | parent

In a single-task setting (the situation described in the intro), throughput and latency are just inverses of one another (in the mathematical sense of “inverse”: throughput = number of tasks per second = 1/time taken to process the task = 1/latency).

They only diverge when you consider multiple tasks.
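The single-task identity can be sketched numerically; the task time below is a hypothetical value, not from the post:

```python
# Single-task setting: one task processed end to end, with no queue.
task_time_s = 0.25  # hypothetical time to process the one task, in seconds

latency_s = task_time_s         # latency: time from submission to completion
throughput = 1.0 / task_time_s  # throughput: tasks completed per second

# With a single task, the two measures are exact inverses of each other.
assert throughput == 1.0 / latency_s
print(latency_s, throughput)  # 0.25 4.0
```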

derriz 5 days ago | parent

That’s not the way “latency” is commonly used in my experience.

Latency numbers always include queuing time, so the two measures are not derivable from each other.

A process might have a throughput of 1 million jobs per second, but if the average size of the queue is 10 million jobs then your job latency is going to be 10 seconds on average, not 1 microsecond.
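The arithmetic here is an instance of Little's law (queue length = throughput × latency), using the comment's numbers:

```python
# Queued setting: latency includes time spent waiting in the queue.
# Little's law: avg_queue_length = throughput * avg_latency.
throughput_jobs_per_s = 1_000_000   # numbers from the comment above
avg_queue_length = 10_000_000

# Average latency, including queueing delay:
avg_latency_s = avg_queue_length / throughput_jobs_per_s
print(avg_latency_s)  # 10.0

# The naive inverse of throughput gives only the per-job service time:
service_time_s = 1 / throughput_jobs_per_s
print(service_time_s)  # 1e-06
```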

littlestymaar 3 days ago | parent

There's no queue if you only have one task…