hamandcheese 5 days ago
> In fact, it will add more constraints to your design, because now you have different consumers and potentially writers all competing for the same resource with potentially different access patterns. Plus the maintenance overhead that migrations of such shared tables come with. And eventually you might have data in this table that are only needed for some of the services, so you now need to implement views and access controls at the DB level.

PostgreSQL, to name one example, can handle every one of these challenges.
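For a rough idea of what "views and access controls at the DB level" can look like in PostgreSQL, here is a minimal sketch driven from Python with psycopg2. Every name in it (the DSN, internal.orders, order_summary, the reporting_service role) is invented for illustration, not taken from anyone's actual setup:

    import psycopg2

    DDL = """
    -- Expose only the columns the consuming service actually needs,
    -- hiding the rest of the owning team's internal table.
    CREATE VIEW public.order_summary AS
        SELECT order_id, customer_id, status, created_at
        FROM internal.orders;

    -- A dedicated, read-only role for the consumer: it can query the view,
    -- but never the underlying table.
    CREATE ROLE reporting_service LOGIN PASSWORD 'change-me';
    GRANT SELECT ON public.order_summary TO reporting_service;
    """

    conn = psycopg2.connect("dbname=orders user=owner_team")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute(DDL)  # runs both statements in one transaction
    conn.close()

The view limits what the other team can see, and the role limits what it can do, without the owning team handing out access to the raw table.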
paffdragon 5 days ago | parent
It's not a question of whether it's possible, but whether it's a good idea. The usual problem is that some team exposes one of their internal tables and then has no control over what kinds of queries are run against it; access patterns that differ from their own can impact their service. Or the external team asks for extra fields that don't make sense in the owning team's model. Or for externally sourced data to be added. Or the owning team wants to move from PostgreSQL to S3 or DynamoDB. And this is not an exhaustive list.

An API layer is more flexible and can remain stable for longer than exposing an internal implementation that depends on a particular technology, implemented in a particular way, at the time the teams agreed on sharing.

This is, of course, not a concern inside the same team, or between teams that work very closely together; they can handle the necessary coordination. So there are always exceptions and simple use cases where direct DB access works just fine, especially if you don't already have an API, which could be a big investment to set up for something simple when it's not even known yet whether the idea will work.
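Roughly what I mean by the API layer staying stable while the storage moves, as a hedged sketch: consumers depend on a small interface instead of a table, so the owning team can swap PostgreSQL for DynamoDB behind it. The OrderStore interface and both backends are made up for illustration:

    from dataclasses import dataclass
    from typing import Optional, Protocol

    @dataclass
    class Order:
        order_id: str
        status: str

    class OrderStore(Protocol):
        # The contract other teams code against; it doesn't leak the schema.
        def get_order(self, order_id: str) -> Optional[Order]: ...

    class PostgresOrderStore:
        def __init__(self, conn) -> None:  # a psycopg2 connection
            self._conn = conn

        def get_order(self, order_id: str) -> Optional[Order]:
            with self._conn.cursor() as cur:
                cur.execute(
                    "SELECT order_id, status FROM internal.orders WHERE order_id = %s",
                    (order_id,),
                )
                row = cur.fetchone()
            return Order(*row) if row else None

    class DynamoOrderStore:
        def __init__(self, table) -> None:  # a boto3 Table resource
            self._table = table

        def get_order(self, order_id: str) -> Optional[Order]:
            item = self._table.get_item(Key={"order_id": order_id}).get("Item")
            return Order(item["order_id"], item["status"]) if item else None

Either backend satisfies the same contract, so the consuming services never notice the migration; with a shared table, that change would have broken every one of them.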