Memory Size Matters to PostgreSQL (pgdba.org)
59 points by PaulHoule 3 days ago | 12 comments
rtp4me 2 days ago
Interesting read, but I find it very lacking. PostgreSQL memory consists of many parts: shared_buffers, work_mem, maintenance_work_mem, the OS cache size, etc. For those interested in a deeper PostgreSQL tuning tutorial, I suggest this[1], as it highlights a number of important tuning options, specifically for PostgreSQL 17. Once your DB has been running for a while, run the postgresql-tuner app[2] to see if you have correctly applied the RAM settings. The repo has been around for a long time, but I find the recommendations (especially the estimated RAM usage) really good. [1] https://www.pgedge.com/blog/postgresql-performance-tuning
wmf 2 days ago
I don't fully understand this article, but this point stuck out as probably fractally wrong: "Modern DDR4 memories have a theoretical throughput of 25-30 GB/s. This is more realistically ranging between 5-10 GB/s. With a 100 GB full packed shared buffer the time required to perform one single full scan ranges between 3 and 20 seconds." Obviously DDR5 now exists, and servers have multiple memory channels, giving total memory bandwidth more like 200-500 GB/s. An old rule of thumb is that a computer should be able to read its entire memory in one second, although these days it may be more like 1-4 seconds. The clock replacement algorithm only needs to read metadata, so a full sweep of the metadata for 100 GB of buffers should take milliseconds, not seconds. (If they're talking about a table scan instead, then obviously reading from buffers is going to be faster than disk.)
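The milliseconds-not-seconds claim is easy to sanity-check with back-of-envelope arithmetic. A sketch, assuming PostgreSQL's default 8 KB block size, buffer descriptors padded to one 64-byte cache line (as in PostgreSQL's BUFFERDESC_PAD_TO_SIZE), and a deliberately conservative 25 GB/s of memory bandwidth:

```python
# Rough sanity check of the clock-sweep cost for 100 GB of shared buffers.
# Assumptions (not from the article): 8 KB pages, 64-byte descriptors,
# 25 GB/s bandwidth -- a low, single-channel DDR4 figure.

GB = 1024 ** 3
shared_buffers = 100 * GB
page_size = 8192          # PostgreSQL default block size
desc_size = 64            # buffer descriptor, cache-line padded
bandwidth = 25 * GB       # conservative memory bandwidth

n_buffers = shared_buffers // page_size
metadata_bytes = n_buffers * desc_size
sweep_seconds = metadata_bytes / bandwidth

print(f"{n_buffers:,} buffers, {metadata_bytes / GB:.2f} GiB of metadata")
print(f"full clock sweep of the metadata: about {sweep_seconds * 1000:.0f} ms")
# → 13,107,200 buffers, 0.78 GiB of metadata
# → full clock sweep of the metadata: about 31 ms
```

Even at the article's pessimistic bandwidth numbers, sweeping the descriptors stays two orders of magnitude below the claimed 3-20 seconds; only scanning the full 100 GB of buffered data itself would take seconds.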
billvsme 2 days ago
I often use pgtune to set the memory sizes; you just select the correct information about your server.
hyperman1 2 days ago
I love working with postgres, but this is one area where it needs a lot more love. If I deploy MongoDB, it takes the whole machine and does its thing. With postgres, you have extremely minimalistic defaults, and then you have to turn a zillion knobs before it does what it should. For beginners, it is a huge footgun that makes people assume bad performance while evaluating. For the experienced PG admin, it is an annoyance and a time waster. Oh, the VM just gained 64GB RAM? PG will sit there and stare at it. Apart from that, basically everyone starts with the PG guidelines or a generated template (25% for this, 25% divided by number of sessions for that). Then you keep wondering how much performance you left on the table.
itsthecourier 3 days ago
for the rest of us: