Making Postgres slower(byteofdev.com)
351 points by AsyncBanana 4 days ago | 39 comments
vichyvich 4 days ago | parent | next [-]

I love this! It would be great to have follow-ups and series of books about how to make things worse, as a way to learn to make things better.

Maybe it could be like O’Reilly, except the covers could have shittily-drawn fantasy animals, e.g. a 7-year-old’s drawing of a unicorn with a head on each side of its body both talking on their AirPods giving away their money to scammers, making PowerPoint slides, eating too much, doing hard drugs, live-streaming on Facebook, and standing on the railroad tracks with a train in the distance.

don-code 4 days ago | parent | next [-]

This strategy was actually used during World War II, to ensure pilots could come home safely. Weather forecasting not being what it is today, meteorologists determined what conditions would result in the _most_ lives being lost, then together with mission commanders "designed" missions to simply not meet those conditions.

Source: https://medium.com/butwhatfor/suppose-i-wanted-to-kill-a-lot...

setr 4 days ago | parent [-]

The missile knows where it is because it knows where it is not…

https://youtu.be/bZe5J8SVCYQ?si=QrIlpJ6BuJADd_zF

MobiusHorizons 4 days ago | parent [-]

Obviously this is a total aside, but can anyone explain what's going on with that? Is it just explaining terrain mapping in an intentionally obtuse way, or is it some kind of parody? It has really frustrated me, because lots of people like to trot it out as a meme, in the same way it was used here.

Bairfhionn 4 days ago | parent | next [-]

It's a joke (meme, copypasta) but it's also accurate.

As I understood it: the missile uses gyros and accelerometers to estimate how far it has already flown toward the target. This is not 100% accurate.

So it additionally uses terrain mapping to figure out what the ground looks like down there. It compares the ground (am I flying above a slope?) to its internal maps, so it can figure out where it is and adjust its path if off course.

So it figured out the position by knowing that the position is most likely not the position where it's supposed to be.
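The terrain-matching idea above can be sketched as a toy one-dimensional example. This is an illustration only, not how a real TERCOM system works: the altimeter records a strip of terrain heights, and we slide that strip along the stored elevation map, ruling out every offset where the measured terrain does *not* fit.

```python
# Toy terrain-contour matching, for illustration only.
# Slide the measured strip along the stored elevation map and pick
# the offset with the smallest mismatch -- locating ourselves by
# ruling out everywhere we are NOT.

def locate(terrain_map, measured):
    best_offset, best_error = None, float("inf")
    for offset in range(len(terrain_map) - len(measured) + 1):
        # Sum of squared differences between measurement and map slice
        error = sum((m - t) ** 2
                    for m, t in zip(measured,
                                    terrain_map[offset:offset + len(measured)]))
        if error < best_error:
            best_offset, best_error = offset, error
    return best_offset

# A 1-D elevation map and a (noisy) strip measured mid-flight
elevation = [10, 12, 15, 30, 45, 44, 30, 18, 12, 10, 9, 11]
strip = [31, 44, 45, 29]  # roughly matches elevation[3:7]

print(locate(elevation, strip))  # -> 3
```

Real systems do this in two dimensions with noisy radar altimeter data, but the "knowing where it is not" framing maps onto the same search.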

chrisandchris 4 days ago | parent | prev [-]

> Is it just explaining terrain mapping in an intentionally obtuse way

AFAIK, yes, that's what it actually was (or what the basic sentiment came from). It was part of a training video for the US Army or Navy (I don't know exactly which), but the actual story of its origin remains a bit unclear. At least, my search on the web did not yield any (presumably) valid source. I think at some point it just became "it is what it is", because it is a very, very complicated way to explain this stuff.

prisenco 4 days ago | parent | prev | next [-]

I took a creative writing class with a portion where we read and analyzed bad writing, then took good writing and rewrote it poorly. Those were the most helpful writing exercises I've ever done.

adhamsalama 4 days ago | parent | prev [-]

https://orlybooks.com/

KronisLV 4 days ago | parent | prev | next [-]

I really like the idea of finding what dials can be turned and then doing so and seeing how much headroom you normally have and how far away from things breaking you are.

Plus, those sorts of artificial limitations can be really helpful in finding where your systems bottleneck and what are the main optimizations that you should do (for example, I've seen N+1 problem sneak past because at the time the performance was good enough but once there was enough data it crumbled).

gregmac 3 days ago | parent [-]

> for example, I've seen N+1 problem sneak past because at the time the performance was good enough but once there was enough data it crumbled

This is a super-common problem I've had to help with many times. It generally happens with teams working on single-tenant (or single-database-tenant) products, and it basically always comes back to the dev team working against a database with hundreds of things in it, while a handful of customers have 10,000+ things by the time it starts getting slow.

You acquire a bunch of small customers and everything is great, and it can be months later before you really start to see the performance impact. And often with these types of problems, the system just gradually slows down until it finally hits a wall where the DB can no longer compensate and performance suddenly tanks. "It must be caused by something we just added, but it's weird, because this problem is in a totally different part of the system."

nicoburns 3 days ago | parent [-]

It's one of the things where I've found code review to be the most helpful remedy. If everything gets reviewed then N+1 queries can be rejected before they even hit the main branch. They're usually pretty obvious if you're looking for them.
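For anyone unfamiliar with the pattern being discussed, here is a minimal, self-contained sketch of an N+1 query next to the joined version a reviewer would push for. The table and column names are made up for illustration; sqlite3 stands in for whatever database the app actually uses.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ada'), (2, 'brian');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'third');
""")

# N+1 pattern: one query for the list, then one query per row.
# Fine with 2 authors; with 10,000 it becomes 10,001 round trips.
def titles_n_plus_one():
    out = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        out[name] = [t for (t,) in conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,))]
    return out

# The fix a reviewer should ask for: a single joined query.
def titles_joined():
    out = {}
    rows = conn.execute("""
        SELECT a.name, p.title FROM authors a
        LEFT JOIN posts p ON p.author_id = a.id
    """)
    for name, title in rows:
        out.setdefault(name, [])
        if title is not None:
            out[name].append(title)
    return out

assert titles_n_plus_one() == titles_joined()
```

The two functions return the same data; the difference only shows up as query count and latency once the tables grow, which is exactly why the pattern sneaks past on small dev databases.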

gerdesj 4 days ago | parent | prev | next [-]

I think this is absolute genius.

If you are ever going to get to grips with optimizing something, why not do the opposite first or as a foil?

How often do you really finagle your database (or other system) properly? Are your performance enhancements really based on science, on cargo culting, or on something else?

sschnei8 4 days ago | parent [-]

I always do that when working with a new cloud provider. I spend our Series A as fast as possible after which we can optimize cloud spend!

guiriduro 3 days ago | parent | next [-]

Yup, that's a variation of the general "deliver a hugely inefficient solution first, then iterate" tactic. It keeps multiple people employed much longer than a little (debatably) "premature" optimisation ahead of time would, keeps delivering benefits QoQ (benefits that could have been delivered sooner, but whatever...), and keeps managers happy.

therein 4 days ago | parent | prev [-]

Cloud providers love this one trick.

burnt-resistor 4 days ago | parent | prev | next [-]

And forget indexes, multiple tables, transactions, entity relationships, and referential integrity. Use a single table as a NoSQL-style key-value store for all data, just like TRIRIGA's early versions did.

tracker1 3 days ago | parent [-]

I wrote an identity/auth service with a backend intentionally like that. I wanted to be able to adapt it to pretty much any backend that was desired. Each identity had 4 entries with predictable keys, and the data was JSON that was optionally encrypted through the app, if using a DBMS that didn't provide encryption at rest.

The data access interface was simple enough that it allowed interface layering, which made supporting different backends, encryption, etc. very flexible.

I had adapters for SQLite (default), MS-SQL, PostgreSQL, Redis, Cassandra and AWS DynamoDB. It was deployed for apps that went from embedded, to integrated with our applications, and external applications on internal and external infrastructures.

The only disappointing thing is that I wasn't allowed to open-source/release the application. The concern was that it would be a security risk if anyone knew how the application worked. I still disagree with this decision. It was a bit simpler, and more flexible, than the "Identity Server" in use, and it served the needs of a lot of applications where an existing identity provider (IS, Okta, AD auth, etc.) wasn't already in place. Most of the apps I worked on at that company used JWT, allowing for somewhat flexible configurations of RSA-signed providers... integration usually consisted of a bridge service for authentication going from the client's provider to an app-targeted JWT. This kept integrations separate.

Sorry for veering off a bit.

burnt-resistor 3 days ago | parent [-]

> The concern is it would be a security risk if anyone knew how the application worked

Uh, that's the classic security through obscurity (STO) defense on the face of it, but it sounds like a cop-out to avoid open sourcing it. If one desires not to open something, that's fine, but misrepresenting the reason for doing so would be bullshit.

rtpg 4 days ago | parent | prev | next [-]

I really want to see some operational setups be available as a sort of playground for observability tooling.

A decent-sized SaaS-alike with usage simulation, and a postgres/rabbit setup that is merely fine. A place to check if your debugging tooling/strategy is up to snuff.

KnuthIsGod 4 days ago | parent | prev | next [-]

The Defence of Duffer's Drift is an early example of this genre.

In the first story in the book you learn how to do platoon-level tactics badly (and lose most of your men).

In each subsequent story, a few tactical parameters are tweaked and the story is recounted again, with improved results.

The same approach is used in newer tactical books like Musicians of Mars 2.

The first Team Badger story in Musicians is very similar to the first story in Duffer's Drift, allowing for changes in location, equipment and techniques.

Warning: PDFs below:

The Defence of Duffer's Drift https://www.armyupress.army.mil/Portals/7/combat-studies-ins...

Musicians of Mars 2 https://api.army.mil/e2/c/downloads/2023/01/19/5a01ae1c/16-1...

"The lingering sulfur odor of gunpowder, mixed in the haze of the dust and smoke of battle, dominated the senses of CPT Fred Morris, commander of Team Badger. He sat amid the wreckage of what used to be some of the world’s finest combat machines, M1A2 Abrams tanks and M2A3 Bradley fighting vehicles.

Now most of his company’s tanks and Brads were mere smoking hulks, emanating the distinct smell of burning electronics to add another element to the senses of the dismal scene before him.

He sat, quietly reflecting on the sequence of events that led to the destruction of his beloved team. His mind raced as he recounted how the enemy force, so much more formidable than he had expected, employed assets, capabilities, and tactics that he had not taken into account. At the same time, he wondered why he was unable to bring to bear his own extensive combat power to earn decisive victory. He had felt so ready, so confident, and believed that his team had prepared so diligently.

Yet, still they had been defeated so soundly. How could that be? LTC Joe Milner, the task force commander, had given Team Badger the task to defend the center of the task force main battle area (MBA) in an area defense to destroy the attacking enemy motorized rifle brigade. Specifically, CPT Morris and Team Badger were to defend against the enemy’s main effort motorized rifle battalion in Battle Position (BP) Badger, along the enemy’s most likely avenue of approach.

That motorized rifle battalion plowed through Team Badger as if it were not even there."

DicIfTEx 4 days ago | parent [-]

See also the films Groundhog Day and (especially) Edge of Tomorrow.

Also, this contemporary homage published recently on a British military blog: https://wavellroom.com/2025/07/25/defence-baltic-bridge-drea...

scns 4 days ago | parent [-]

Naked from 2017, too. It got only 5.4 on IMDB, but I liked it.

falconertc 3 days ago | parent | prev | next [-]

Great read, and an interesting warning sign about overly controllable configuration files. If there's no conceivable good reason to do much of this, then the ability to do so becomes bad design that fails to protect your users.

rom16384 3 days ago | parent [-]

The Postgres defaults in Debian are understandably conservative, so that it runs everywhere. When I first installed Postgres I found it rather slow, even on a good server. Only after reading guides on the web did I find out why: the default RAM usage is limited to something like 32 MB, which is way too low.
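The settings tuning guides usually point at look something like the fragment below. The values are purely illustrative (a hypothetical dedicated server with 16 GB of RAM), not recommendations; the right numbers depend on your workload.

```ini
# postgresql.conf -- illustrative values only, not recommendations.
shared_buffers = 4GB          # the conservative default is tiny
                              # (historically 32MB, 128MB in recent versions)
effective_cache_size = 12GB   # a planner hint, not an allocation
work_mem = 64MB               # per sort/hash operation, per connection
maintenance_work_mem = 1GB    # used by VACUUM, CREATE INDEX, etc.
```

Note that `work_mem` multiplies across connections and query nodes, which is exactly why the packaged defaults err so far on the low side.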

tracker1 3 days ago | parent | prev | next [-]

A practical use of this would be in helping to determine where memory utilization typically is vs. where it really needs to be and which knobs are tuned where/how.

If you wanted to run PostgreSQL on a lower-memory device, such as a lower-end RPi, then you might want specific features tuned down based on your real workloads. Especially if you are considering, say, SQLite, but aren't sure how it will perform vs. giving, say, half the system's resources to PG.

stephenlf 4 days ago | parent | prev | next [-]

Love the B Sanderson shoutout

shmerl 4 days ago | parent | prev | next [-]

Reminds me of A Ticket to Tranai.

sastraxi 4 days ago | parent | prev | next [-]

Great writing style and articulation of thought! That was a fun read

robertclaus 4 days ago | parent | prev | next [-]

This was great. The mention of deadlocks at the end makes me wonder if there's also some transaction isolation settings to adjust.
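For the curious: there are indeed session-level knobs one could turn "the wrong way" here. The fragment below is a speculative sketch, not something from the article; these are real Postgres settings, but the framing as slowdown levers is this thread's joke.

```sql
-- Isolation and locking knobs, turned toward maximum contention:
SET default_transaction_isolation = 'serializable';  -- most retries on conflict
SET lock_timeout = 0;           -- 0 = wait on locks forever, never fail fast
SET deadlock_timeout = '10s';   -- check for deadlocks far less often
```

SERIALIZABLE forces the most aborted-and-retried transactions under contention, while an unbounded `lock_timeout` lets waiters pile up quietly instead of erroring out.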

Waterluvian 4 days ago | parent | prev | next [-]

Honestly, walking away in the wrong direction from default settings seems like a great way to really get a feel for their weight.

travisgriggs 4 days ago | parent | prev | next [-]

Felt like a synergy of Hyperbole and a Half and db admin. Reading it made my day and taught me a few things too.

samgranieri 3 days ago | parent | prev | next [-]

You are an absolute evil monster. What an entertaining article!

oceanparkway 4 days ago | parent | prev | next [-]

We need more of this

peterpost2 4 days ago | parent | prev | next [-]

This is terrific.

snickerdoodle12 4 days ago | parent | prev | next [-]

Excellent. Now do 42,000x faster please.

mort96 4 days ago | parent | next [-]

You never specified a baseline. They could undo their changes and it's 42,000x faster again :)

jiggawatts 3 days ago | parent | prev [-]

My record for DB query performance improvement is 13,000:1 and I was called a liar and a cheat to my face and then the customer walked out of the meeting.

I think if someone achieved 42,000:1 they'd be accused of sorcery and burned at the stake.

rompic 3 days ago | parent [-]

My best was 5000:1, by updating the outdated SQLite in one of the company's products.

I found that we were hit by a bug that had been fixed 6 years before I discovered it (https://sqlite.org/src/info/6f2222d550f5b0ee7ed). SQLite's query planner assumed that a field with a NOT NULL constraint can never be NULL, which isn't the case for the right-hand table in a LEFT JOIN.
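The NULL-despite-NOT-NULL behavior is easy to demonstrate. A small sketch with made-up table names, using Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT NOT NULL);
    CREATE TABLE shipments (order_id INTEGER, carrier TEXT NOT NULL);
    INSERT INTO orders VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO shipments VALUES (1, 'UPS');  -- order 2 never shipped
""")

# carrier is declared NOT NULL, yet the LEFT JOIN produces NULL for
# the unmatched row -- the case the old query planner got wrong when
# it assumed a NOT NULL column could never yield NULL.
rows = conn.execute("""
    SELECT o.item, s.carrier
    FROM orders o LEFT JOIN shipments s ON s.order_id = o.id
    ORDER BY o.id
""").fetchall()

print(rows)  # [('widget', 'UPS'), ('gadget', None)]
```

The constraint only governs what can be stored in the table; a LEFT JOIN synthesizes NULLs for rows with no match, so any optimization built on "NOT NULL means never NULL" misfires exactly here.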

preinheimer 4 days ago | parent | prev [-]

Love it.