| ▲ | jacquesm 9 hours ago |
| > And then memory expanded so much that all kinds of “optimal” patterns for programming just become nearly irrelevant. I don't think that ever happened. Using a relatively sparse amount of memory translates into better cache management, which in turn usually improves performance drastically. And in embedded work, being good with memory management can make the difference between 'works' and 'fails'. |
|
| ▲ | zeta0134 8 hours ago | parent | next [-] |
| The need to use optimal patterns didn't go away, but the techniques certainly did. Just as a quick example, it's usually a bad idea now to use lookup tables to accelerate small math workloads. The lookup table creates memory pressure on the cache, which ends up degrading performance on modern systems. Back in the 1980s, lookup tables were by far the dominant technique because math was *slow.* |
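For illustration, a minimal sketch of that 1980s technique (the table size and angle granularity here are assumptions, not from the original comment):

```python
import math

# 1980s-style lookup table: precompute sin() for 256 angle steps so the
# hot loop does an array index instead of "slow" math.
SIN_TABLE = [math.sin(2 * math.pi * i / 256) for i in range(256)]

def sin_lookup(angle):
    """sin() for an 8-bit angle, where 0..255 spans a full circle."""
    return SIN_TABLE[angle & 0xFF]

# On a modern CPU this ~2 KB table competes for cache with the rest of
# the working set, so computing sin() directly is often the better trade.
```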
| |
| ▲ | zozbot234 3 hours ago | parent | next [-] | | > Back in the 1980s, lookup tables were by far the dominant technique because math was slow. This actually generalizes in a rather clean way: compared to the 1980s, you now want to cheaply compress data in memory and use succinct representations as much as practicable, since the extra compute involved in translating a more succinct representation into real data is practically free compared to even one extra cacheline fetch from RAM (which is now hundreds of cycles latency, and in parallel code often has surprisingly low throughput). | | |
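A hedged sketch of that trade, assuming bit-packed boolean flags as the succinct representation: decoding one flag costs a shift and a mask, practically free next to a cache-line fetch, while the packed form is 8x smaller than a byte-per-flag array.

```python
# Succinct representation sketch: one bit per flag instead of one byte.
# Unpacking is a shift plus a mask -- trivial compute compared to the
# hundreds-of-cycles latency of an extra fetch from RAM.
class BitSet:
    def __init__(self, n):
        self.data = bytearray((n + 7) // 8)  # one bit per flag

    def set(self, i):
        self.data[i >> 3] |= 1 << (i & 7)

    def get(self, i):
        return (self.data[i >> 3] >> (i & 7)) & 1

flags = BitSet(1_000_000)  # 125 KB instead of ~1 MB
flags.set(42)
```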
| ▲ | QuadmasterXLII 2 hours ago | parent [-] | | It’s a mad world where ultimate performance on one problem can require compressing data in RAM, and on another, storing it uncompressed on disk. | | |
| ▲ | bonesss 2 hours ago | parent [-] | | The same atmosphere that makes bread hard makes crackers soft. |
|
| |
| ▲ | jacquesm 3 hours ago | parent | prev [-] | | The way to approach this is to benchmark and then pick the best solution. |
|
|
| ▲ | _fizz_buzz_ 4 hours ago | parent | prev | next [-] |
| It obviously never became completely irrelevant. But I think programmers spend a lot less time thinking about memory than they used to. People used to do a lot of gymnastics and crazy optimizations to fit stuff into memory. I do quite a bit of embedded programming and most of the time it seems easier for me to simply upgrade the MCU and spend 10 cents more (or whatever) than to make any crazy optimizations. But of course there are still cases where it makes sense. |
|
| ▲ | yread 7 hours ago | parent | prev [-] |
| When was the last time you used mergesort because you had to? |
| |
| ▲ | jacquesm 3 hours ago | parent [-] | | Coincidentally, last night, and I'm not pulling your leg! But to be fair, that's the first time in much more than a decade. I don't normally work with such huge files and this was one very rare exception. I also nearly crashed my machine by triggering the OOM killer after naively typing 'vi file' without first checking how large it had become. I'm working on a project that I probably should run on a more serious machine, but I don't feel like moving my whole work environment off the laptop that I normally use. |
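The huge-file case is the classic reason to reach for mergesort. A hedged sketch of an external merge sort (chunk size and newline-terminated input lines are assumptions): sort memory-sized chunks, spill each as a run, then k-way merge the runs.

```python
import heapq
import itertools
import tempfile

def external_sort(in_path, out_path, chunk_lines=100_000):
    """Sort a text file too large for RAM (assumes newline-terminated lines)."""
    # Phase 1: read chunks that fit in memory, sort each, spill to a run file.
    runs = []
    with open(in_path) as src:
        while True:
            chunk = list(itertools.islice(src, chunk_lines))
            if not chunk:
                break
            chunk.sort()
            run = tempfile.TemporaryFile(mode="w+")
            run.writelines(chunk)
            run.seek(0)
            runs.append(run)
    # Phase 2: k-way merge of the sorted runs back out to disk.
    with open(out_path, "w") as dst:
        dst.writelines(heapq.merge(*runs))
    for run in runs:
        run.close()
```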
|