locknitpicker 17 hours ago
> Normalization is possible but not practical in a lot of cases: nearly every “legacy” database I’ve seen has at least one table that just accumulates columns because that was the quickest way to ship something.

Strong disagree. I'll explain.

Your argument would support adding a few columns to a table to get a short time to market. That's ok. But it doesn't come close to justifying keeping the columns in. Not in the slightest.

Tables with many columns create all sorts of problems and inefficiencies. Over-fetching is a problem all on its own. Even the code gets brittle, where every single tweak risks being a major regression.

Creating a new table is not hard. Add a foreign key, add the columns, do a standard parallel write migration. Done. How on earth is this not practical?
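The steps named above (new table, foreign key, backfill, parallel writes) can be sketched concretely. This is a minimal illustration using sqlite3 and a hypothetical `orders` table with bolted-on shipping columns; the table and column names are made up for the example, and a real migration would also need batching and a final cutover step.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical wide legacy table: a few of its accumulated columns.
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        total REAL,
        shipping_street TEXT,   -- columns bolted on over time
        shipping_city TEXT
    );
    INSERT INTO orders VALUES (1, 9.99, '1 Main St', 'Springfield');

    -- Step 1: new table holding the split-off columns, keyed back to orders.
    CREATE TABLE order_shipping (
        order_id INTEGER PRIMARY KEY REFERENCES orders(id),
        street TEXT,
        city TEXT
    );
""")

# Step 2: backfill existing rows into the new table.
conn.execute("""
    INSERT INTO order_shipping (order_id, street, city)
    SELECT id, shipping_street, shipping_city FROM orders
""")

# Step 3: during the migration window, writes go to BOTH tables
# (the "parallel write" phase); readers are switched over gradually.
def create_order(conn, order_id, total, street, city):
    conn.execute("INSERT INTO orders VALUES (?, ?, ?, ?)",
                 (order_id, total, street, city))
    conn.execute("INSERT INTO order_shipping VALUES (?, ?, ?)",
                 (order_id, street, city))

create_order(conn, 2, 19.99, '2 Oak Ave', 'Shelbyville')

# Step 4 (not shown): once every reader uses order_shipping,
# drop the shipping_* columns from orders.
rows = conn.execute(
    "SELECT order_id, city FROM order_shipping ORDER BY order_id").fetchall()
print(rows)  # → [(1, 'Springfield'), (2, 'Shelbyville')]
```

The key property is that at every point in the window, both old and new readers see consistent data, so the cutover carries no flag-day risk.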
fiddlerwoaroof 7 hours ago | parent
I’m not justifying the design, but splitting a table with several billion rows is not a trivial task, especially when ORMs and such are involved. Additionally, it’s easier to get work scheduled to ship a feature than it is to convince the relevant players to complete the swing.
grey-area 16 hours ago | parent
There are sometimes reasons this is harder in practice. For example, say the business or even third parties have direct access to this db, with hundreds of separate apps/services relying on it (also an anti-pattern of course, but not uncommon). That makes changing the db significantly harder. Mistakes made early on and not corrected can snowball and lead to this kind of mess, which is very hard to back out of.
magicalhippo 11 hours ago | parent
> How on earth is this not practical?

Fine, but you still need to read in those 100+ fields. So now you gotta contend with 20+ joins just to pull in one record. Not more practical than a single SELECT, in my opinion.
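The read-path cost being described can be made concrete: if the wide table is split into one child table per column group, every read that reassembles the full record pays one JOIN per child. This sketch just builds the two query shapes side by side; the table names and the count of 20 splits are hypothetical, taken from the comment.

```python
# Hypothetical: the 100+ column table split into 20 child tables,
# one JOIN per child just to rebuild the full record.
child_tables = [f"orders_part_{i}" for i in range(1, 21)]

joins = "\n".join(
    f"JOIN {t} ON {t}.order_id = orders.id" for t in child_tables
)
normalized_read = f"SELECT *\nFROM orders\n{joins}\nWHERE orders.id = ?"

# Versus the single read against the original wide table:
wide_read = "SELECT * FROM orders WHERE id = ?"

print(normalized_read.count("JOIN"))  # → 20
```

Whether those joins matter in practice depends on how often the full record is actually needed; reads that touch only one column group hit a single narrow table instead.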