cogman10 5 hours ago

Easy to get wrong as well.

There's a balance with a DB. Doing 1 or 2 row queries 1000 times is obviously inefficient, but making a 1M row query can have its own set of problems all the same (even if you need all 1M rows).
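To make the first half of that tradeoff concrete, here's a minimal sketch of the N+1 pattern versus a batched query, using Python's stdlib `sqlite3` (table and column names invented for illustration). Over a real network the per-query round trip dominates, so 500 tiny queries lose badly to one `IN (...)` query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(500)])

ids = list(range(500))

# N+1 pattern: one round trip per id (each query is tiny, but the
# per-query latency adds up over a real network)
names_n_plus_1 = [
    conn.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()[0]
    for i in ids
]

# Batched pattern: a single query returns all the rows at once
placeholders = ",".join("?" for _ in ids)
names_batched = [
    row[0] for row in conn.execute(
        f"SELECT name FROM users WHERE id IN ({placeholders}) ORDER BY id",
        ids)
]

assert names_batched == names_n_plus_1
```

The batched version trades 500 round trips for one, at the cost of a bigger single result set, which is exactly the balance being described.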

It'll depend on the hardware, but you really want to make sure that anything you do with a DB gives other instances of your application a chance to interact with it too. Nothing worse than finding out a 2-row insert has been blocked for 20 seconds by a million-row read.
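One common way to keep a giant read from monopolizing the DB is to break it into chunks with keyset pagination: each query is short, so writers get a turn between chunks. A minimal `sqlite3` sketch (schema invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i, "x") for i in range(1, 10001)])
conn.commit()

def read_in_chunks(conn, chunk_size=1000):
    """Keyset pagination: seek past the last seen id each time, so every
    query is a short index scan instead of one long table-wide read."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM events WHERE id > ? "
            "ORDER BY id LIMIT ?",
            (last_id, chunk_size)).fetchall()
        if not rows:
            break
        yield rows
        last_id = rows[-1][0]

total = sum(len(chunk) for chunk in read_in_chunks(conn))
assert total == 10000
```

Keyset pagination (`WHERE id > ?`) is preferable to `OFFSET` here, since `OFFSET` re-scans skipped rows and gets slower as you page deeper.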

There's also a question of when you should and shouldn't join data. It's not always a black-and-white "just let the DB handle it". Sometimes the better route is to make 2 queries rather than joining, particularly when the main table pulls in 1000 rows but only 10 unique rows come from the subtable. Of course, this all depends on how wide these things are as well.
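To make that join-vs-two-queries tradeoff concrete, here's a `sqlite3` sketch (table and column names made up): with 1000 orders and only 10 distinct statuses, a join repeats each status label across all 1000 result rows, while two queries fetch the small lookup table once and map it in application code. The wider the subtable rows, the more the join duplicates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, status_id INTEGER);
CREATE TABLE statuses (id INTEGER PRIMARY KEY, label TEXT);
""")
conn.executemany("INSERT INTO statuses VALUES (?, ?)",
                 [(i, f"status{i}") for i in range(10)])
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 10) for i in range(1000)])

# Join: each of the 10 labels is duplicated across 1000 result rows
joined = conn.execute(
    "SELECT o.id, s.label FROM orders o "
    "JOIN statuses s ON s.id = o.status_id").fetchall()

# Two queries: fetch the 10-row lookup table once, map in the application
statuses = dict(conn.execute("SELECT id, label FROM statuses"))
orders = conn.execute("SELECT id, status_id FROM orders").fetchall()
mapped = [(oid, statuses[sid]) for oid, sid in orders]

assert sorted(joined) == sorted(mapped)
```

Both produce the same result set; the two-query version transfers each label once instead of 100 times, at the cost of an extra round trip.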

But 100% agree, ORMs are the worst way to handle all these things. They very rarely do the right thing out of the box, and to make them fast you ultimately need to understand the SQL they emit anyway, and often you end up writing custom SQL regardless.

liampulles 3 hours ago | parent | next [-]

I fully agree, yes. One has to watch out for overwhelmingly large or locking queries.

philipwhiuk 5 hours ago | parent | prev [-]

ORMs are a caching layer for dev time.

They store up conserved programming time and then spend it all at once when you hit the edge case.

If you never hit the case, it's great. As soon as you do, it's all returned with interest :)

xigoi 4 hours ago | parent [-]

The question is why we don’t have database management systems that integrate tightly with the programming language. Instead we have to communicate between two different paradigms using a textual language, which is itself inefficient.

runroader 4 hours ago | parent | next [-]

We tried that in ’90s RAD environments like FoxPro and others. When they fit the problem, they were great! When they didn’t, it was even worse than with an ORM. They rarely fit today, since they were all (or mostly) local-first or even local-only; scaling was either impossible or pretty difficult.

Shorel 4 hours ago | parent | prev | next [-]

Because every single database vendor will try to lock down their users to their DBMS.

Oracle is a prime example of this. According to Oracle's documentation, stored procedures are the place to put all business logic.

This caused backlash from escaping developers, who then declared that business logic should never live inside the database, to avoid vendor lock-in.

There's no ideal solution, just tradeoffs.

cogman10 4 hours ago | parent [-]

> Because every single database vendor will try to lock down their users to their DBMS.

I mean, that already happens. It's quite rare to see someone migrate from one database to another. Even if they stuck to pure SQL for everything, it's still a pretty daunting process, as Postgres's SQL dialect and MSSQL's won't be the same thing.
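As a toy illustration of the dialect gap (the helper function here is hypothetical, but the syntax difference is real): the same "first N rows" intent is spelled `LIMIT` in Postgres but `TOP` in MSSQL, so even "pure SQL" has to be rewritten per engine.

```python
def first_n_users(dialect: str, n: int) -> str:
    """Render the same query intent in two real SQL dialects."""
    if dialect == "postgres":
        return f"SELECT name FROM users ORDER BY id LIMIT {n}"
    if dialect == "mssql":
        return f"SELECT TOP {n} name FROM users ORDER BY id"
    raise ValueError(f"unknown dialect: {dialect}")

# Identical intent, different SQL text per engine
assert first_n_users("postgres", 10) != first_n_users("mssql", 10)
```

Multiply that by pagination, upserts, date functions, and procedural extensions (PL/pgSQL vs. T-SQL), and the migration cost becomes clear.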

ghurtado 39 minutes ago | parent [-]

> It's quite rare to see someone migrate from one database to another.

I'm not discounting the level of effort involved, but I think the reason you don't see this often is that it's rare for simply changing DBMSs to be beneficial in and of itself.

And even if it was frictionless (ie: if we had discovered ORM Samarkanda), the real choices are so limited that even if you did it regularly, you would soon run out of DBMSs to try.

ivan_gammel 4 hours ago | parent | prev [-]

The answer is simple: a model optimized for storage and a model designed for processing are two different things. The languages used to describe and query them have to be different.

ghurtado 38 minutes ago | parent [-]

> The languages used to describe and query them have to be different.

Absolutely not.

That which is asserted without evidence can be dismissed without evidence.