erikvdven | 2 days ago
Did I mention anything about performance? If so, my apologies, I'll need to revise the article, because this really has very little to do with performance. In fact, a reader who emailed me ran into a challenge where, if you have an aggregate with just one entity, for example Bookcase -> list[Book], and that list grows significantly, it can lead to performance issues. In such cases you might even need to consider a different solution. But that's a separate topic.

What I was trying to highlight were the whys behind the approach, and based on the feedback here, it might be a good idea to update the post. I really appreciate all your input.

As for the whys: the less your domain knows about the outside world, the less often you need to change it when the outside world changes, and the easier it becomes for new team members to understand the logic. It also separates your database models from your domain models, which is great IMHO: it makes it easier to change them independently of each other.

You could keep all of them separate, domain models, database models and API models, and use Pydantic for every layer, but why would you? If you need to do the translation anyway, why not translate to plain dataclasses: no extra mental models, no hidden framework magic, just business concepts in plain Python.

This does depend on your specific situation, however; there are valid reasons not to do this. But once your application grows and is no longer a simple CRUD application, I wonder whether there are enough reasons to NOT keep Pydantic in the outside layers. So yes, for small, simple applications it might be overcomplicated to introduce the overhead when your data stays relatively consistent across layers.
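For what it's worth, here is a minimal sketch of the separation I mean, using the Bookcase -> list[Book] example from above. The class and function names (BookIn, BookcaseIn, to_domain) are just illustrative, not from the article: Pydantic lives only at the boundary, the domain is plain dataclasses, and one explicit translation step connects them.

    # Outer layer: Pydantic validates untrusted input at the boundary.
    # Domain layer: plain dataclasses, no framework knowledge.
    # Names here are illustrative only.
    from dataclasses import dataclass, field
    from pydantic import BaseModel


    class BookIn(BaseModel):
        title: str
        pages: int


    class BookcaseIn(BaseModel):
        label: str
        books: list[BookIn] = []


    @dataclass
    class Book:
        title: str
        pages: int


    @dataclass
    class Bookcase:
        label: str
        books: list[Book] = field(default_factory=list)

        def add_book(self, book: Book) -> None:
            # Business rules live here, independent of how data enters the system.
            self.books.append(book)


    def to_domain(dto: BookcaseIn) -> Bookcase:
        # The explicit translation step: the domain never imports or sees Pydantic.
        return Bookcase(
            label=dto.label,
            books=[Book(title=b.title, pages=b.pages) for b in dto.books],
        )

The point of the extra step is exactly that the domain classes have nothing to change when the API schema or the ORM changes; only the translation function does.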