wrs 4 hours ago

Indeed, as I used to tell my ops colleagues when they pointed to RAM utilization graphs, "we paid for all of that RAM, why aren't we using it?"

jcelerier 3 hours ago | parent | next [-]

Because memory access performance is not O(1); it depends on the size of what's in memory (https://www.ilikebigbits.com/2014_04_21_myth_of_ram_1.html). Every byte used makes the whole thing slower.

rileymat2 7 minutes ago | parent | next [-]

I'm not following. Isn't this just a graph showing that how fast operations happen largely depends on the odds that the data is in cache at the various levels (CPU/RAM/disk)?

The memory operation itself is O(1), around 100 ns; at a certain point we're doing full RAM fetches every time because the odds of the data being in CPU cache are low.

Typically O notation is an upper bound, and it holds well there.

That said, due to cache hits, the lower bound is much lower than that.

You see similar performance degradation if you iterate over a two-dimensional array with the indices in the wrong order.
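A quick sketch of that iteration-order effect (in Python for brevity; the sizes are illustrative, and the gap is far more dramatic in a language like C where a 2-D array is one contiguous block):

```python
import time

N = 2000
# N x N matrix as nested lists: each row is contiguous, rows are scattered
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    # Inner loop walks adjacent elements within one row: cache-friendly
    total = 0
    for i in range(N):
        for j in range(N):
            total += m[i][j]
    return total

def sum_col_major(m):
    # Inner loop jumps to a different row on every access: cache-hostile
    total = 0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total

start = time.perf_counter()
a = sum_row_major(matrix)
t_row = time.perf_counter() - start

start = time.perf_counter()
b = sum_col_major(matrix)
t_col = time.perf_counter() - start

assert a == b == N * N  # same work, same answer, different speed
print(f"row-major: {t_row:.3f}s  col-major: {t_col:.3f}s")
```

Both loops touch exactly the same N*N elements; only the access pattern differs, which is the whole point.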

yunnpp 4 minutes ago | parent | prev | next [-]

> Every byte used makes the whole thing slower.

This is an incorrect conclusion to make from the link you posted in the context of this discussion. That post is a very long-winded way of saying that the average speed of addressing N elements depends on N and the size of the caches, which isn't news to anyone. Key word: addressing.

SkiFire13 11 minutes ago | parent | prev | next [-]

Memory access performance depends on the _size of the memory you actually need to address_. You can clearly see it in the graph in that article, at the points where L1, L2, L3, and finally RAM are no longer enough to fit the linked list. While the working set still fits in a given level, performance scales much better. So as long as you give priority to the working set, you can fill the rest of the biggest memory with whatever you want without affecting performance.
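A minimal sketch of the kind of pointer-chasing measurement the linked article describes (sizes and names here are illustrative, not the article's actual benchmark): each access depends on the previous one, so per-access cost rises as the working set outgrows each cache level.

```python
import random
import time

def chase_time(n, steps=1_000_000):
    """Average seconds per access when chasing pointers in an n-slot working set."""
    # Build a single random cycle over all n slots, so every access
    # depends on the previous one (defeating the prefetcher) and the
    # working set stays exactly n slots.
    perm = list(range(n))
    random.shuffle(perm)
    nxt = [0] * n
    for i in range(n):
        nxt[perm[i]] = perm[(i + 1) % n]
    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = nxt[idx]
    return (time.perf_counter() - start) / steps

# Cost per access tends to step up as n outgrows L1, then L2/L3, then
# (for huge n) TLB reach -- though Python's interpreter overhead blunts it.
for n in (1 << 10, 1 << 16, 1 << 22):
    print(f"{n:>8} slots: {chase_time(n) * 1e9:.1f} ns/access")
```

The key trick is the dependent-load chain: a simple sequential scan would let the hardware prefetcher hide most of the latency being measured.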

stevefan1999 11 minutes ago | parent | prev [-]

Why is it not O(1)? It has to be serviced within a deadline, so it's still constant time.

jasonfarnon 2 hours ago | parent | prev [-]

do you also say that about hdd space? about money in the bank?

coldtea 22 minutes ago | parent | next [-]

Why wouldn't he say it about HDD space? Do you buy HDDs to keep them empty?

And as for the money analogy, what's the idea there, that memory grows interest? Or that it's better to put your money in the bank and leave it there, as opposed to buying assets or stocks, and of course paying for food, rent, and things you enjoy?

astafrig an hour ago | parent | prev | next [-]

> about money in the bank?

Yes, generally. That's the entire idea behind the stock market.

groundzeros2015 an hour ago | parent | prev [-]

It’s counterintuitive, but I learned this best by playing RTS games. If you don’t spend money, your opponent can outdo you on the map by simply spending their money. But the principle extends: everything you have sitting idle (buildings, units, etc.) is a loss. The most efficient play is to have all your resources working for you at all times.

LaGrange 27 minutes ago | parent [-]

> It’s counterintuitive, but I learned this best by playing RTS games. If you don’t spend money, your opponent can outdo you on the map by simply spending their money.

OK, hear me out over here:

We are not in an RTS.

Edit: in real-world settings, lacking redundancy tends to make systems incredibly fragile, in a way that rarely matters in an RTS. Which we are _not in_.