▲ | victorbjorklund 5 days ago |
Not sure I agree. First of all, it can be more performant. Say you fetch 1000 records, and they reference only 2 distinct foreign keys in another table. Instead of joining in the DB and fetching a lot more data, we can run two queries and join in the app instead. Secondly, it makes it easier to cache data. Let's say the thing we're joining with almost never changes (like some country info): we can cache that and just join it with the data from the DB. Not saying this should always be the case, but sometimes it is the right call.
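Roughly, the pattern looks like this. A minimal sketch using Python's built-in sqlite3; the orders/countries tables and the get_countries helper are made-up names for illustration, not anything from the thread:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE countries (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, country_id INTEGER, amount REAL);
        INSERT INTO countries VALUES (1, 'Sweden'), (2, 'Norway');
    """)
    conn.executemany(
        "INSERT INTO orders (country_id, amount) VALUES (?, ?)",
        [(1 + i % 2, 10.0 * i) for i in range(1000)],
    )

    # Cache for data that "almost never changes" (country info); in a real app
    # this might be filled at startup or sit behind a TTL.
    _country_cache = {}

    def get_countries(ids):
        missing = [i for i in ids if i not in _country_cache]
        if missing:
            placeholders = ",".join("?" * len(missing))
            rows = conn.execute(
                "SELECT id, name FROM countries WHERE id IN (%s)" % placeholders,
                missing,
            )
            _country_cache.update(dict(rows))
        return {i: _country_cache[i] for i in ids}

    # Query 1: fetch the 1000 records, no join.
    orders = conn.execute("SELECT id, country_id, amount FROM orders").fetchall()

    # Query 2 (or a pure cache hit on later calls): fetch only the 2 distinct
    # countries, then "join" in the application.
    countries = get_countries({country_id for _, country_id, _ in orders})
    joined = [(oid, countries[cid], amount) for oid, cid, amount in orders]
    print(joined[:3])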
▲ | teraflop 5 days ago | parent [-] |
But as a counterpoint to that, (a) the database has its own caching built in, which you don't have to implement, and (b) the database knows when to invalidate its cache.

To quote Douglas Adams: "The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair." Likewise, if you cache a piece of data in your application because you assume that it won't change, that just makes it likely that if and when it does change, you'll have bugs. Moving the cache to the database layer, where it can be properly invalidated, fixes this.

It's true that an application-side join can still be more performant if the DB cache isn't good enough, but IMO you should only take that step after actually profiling your queries.
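To make that last point concrete, here is a minimal sketch of checking the plan before deciding, using SQLite's EXPLAIN QUERY PLAN (Postgres and MySQL have EXPLAIN / EXPLAIN ANALYZE); the table and column names are made up for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE countries (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, country_id INTEGER, amount REAL);
    """)

    # Ask the database how it would execute the join before assuming it is the
    # bottleneck and moving the work into the application.
    plan = conn.execute("""
        EXPLAIN QUERY PLAN
        SELECT o.id, c.name, o.amount
        FROM orders o JOIN countries c ON c.id = o.country_id
    """).fetchall()
    for row in plan:
        print(row)  # e.g. a SCAN over orders and a primary-key SEARCH on countries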