jgavris a day ago

The Django ORM / migrations are still basically unmatched in happiness factor.

hansonkd a day ago | parent | next [-]

It's crazy to me, after all these years, that Django-like migrations aren't in every language. On the one hand they seem so straightforward and powerful; on the other, there must be some underlying complexity to having a framework autogenerate migrations.

It was always a surprise when I went to Elixir or Rust and found the migration story more complicated and manual compared to just changing a model, generating a migration, and committing.
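
For concreteness, the Django flow I mean is roughly this (model and field names are made up for illustration):

```python
# models.py -- add a field to an existing model
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    # newly added field; the migration autodetector picks this change up
    published_at = models.DateTimeField(null=True, blank=True)

# Then:
#   python manage.py makemigrations   # writes a migration file from the model diff
#   python manage.py migrate          # applies it to the database
```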

In the pre-LLM world, I was writing Ecto files, and defining large database structures was super repetitive compared to Django.

igsomething a day ago | parent | next [-]

Having gone from Django to Phoenix, I prefer manual migrations. Despite it being a bit tedious and repetitive, doing a "double pass" on the schema means I often catch bugs, typos, missing indexes, etc. that I would have missed with Django. You waste a bit of time on simple schemas, but you save a ton of time when you are defining more complex ones. I've lost count of how many bugs were introduced because someone was careless with Django migrations, and it's also surprising that some Django devs don't know how to translate their migrations to the SQL equivalent.

At least you can opt-in to automated migrations in Elixir if you use Ash.

limagnolia 9 hours ago | parent [-]

Django doesn't force anyone to use the automatic migrations; you can always write them manually if you want to :)
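
For example, a hand-written migration looks just like a generated one (app, model, and field names here are invented):

```python
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("blog", "0007_previous")]

    operations = [
        # Stating the rename explicitly, instead of letting the autodetector
        # guess whether the change was a rename or a drop-and-add.
        migrations.RenameField(
            model_name="article",
            old_name="created",
            new_name="created_at",
        ),
    ]
```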

wiredfool a day ago | parent | prev | next [-]

There are some subtle edge cases in Django migrations where running all the migrations at once is not the same as running them one by one. This has bitten me on multiple Django projects.

cuu508 20 hours ago | parent [-]

Can you give an example of how this would happen?

wiredfool 20 hours ago | parent [-]

Ok, from memory --

There's a pre, do, and post phase for the migrations. When you run a single migration, it's: pre, do, post. When you run two migrations, it's: pre [1, 2], do [1, 2], post [1, 2].

So, if you have a migration that depends on a previous migration's post phase, then it will fail if it is run in a batch with the previous migration.

Where I've run into this is with data migrations, or when adding/assigning permissions to groups.

selcuka 3 hours ago | parent | next [-]

Did you mean the migration signals (pre_migrate and post_migrate)? Those are only meant to run before and after the whole migration run, regardless of how many migrations are applied. They don't fire for each individual migration.

The only catch is they will run multiple times, once for each app, but that can also be prevented by passing a sender (e.g. `pre_migrate.connect(pre_migrate_signal_handler, sender=self)` if you are registering them in your AppConfig.ready method).
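
A minimal sketch of that registration (handler and app names are placeholders):

```python
# apps.py
from django.apps import AppConfig
from django.db.models.signals import pre_migrate

def pre_migrate_signal_handler(sender, app_config, **kwargs):
    # Runs once before the whole `migrate` command processes this app,
    # not once per individual migration.
    ...

class MyAppConfig(AppConfig):
    name = "myapp"

    def ready(self):
        # sender=self limits the signal to this app, so it doesn't fire
        # once per installed app.
        pre_migrate.connect(pre_migrate_signal_handler, sender=self)
```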

hansonkd 17 hours ago | parent | prev | next [-]

Does that affect the autogenerated migrations at all? The only time I ran into that kind of issue was when I generated a table, created a data migration, and it failed because the table was created in the same transaction. I've never had a problem with autogenerated migrations.

advisedwang 11 hours ago | parent | prev | next [-]

What a crazy design. Why don't they just do pre1, do1, post1, pre2, do2, post2?

Izkata 7 hours ago | parent | prev | next [-]

This doesn't sound at all familiar, are you sure you're not mixing it up with something else?

brianwawok 19 hours ago | parent | prev [-]

There’s an `atomic` flag you can use to pull a migration out of the transaction. It solves a lot of these issues.
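
Something like this, as a sketch (app and migration names are made up):

```python
from django.db import migrations

class Migration(migrations.Migration):
    # Don't wrap this migration in a single transaction (PostgreSQL).
    # Useful when a later operation needs to see the committed results of an
    # earlier one, or for statements that refuse to run inside a transaction.
    atomic = False

    dependencies = [("myapp", "0002_create_table")]

    operations = [
        # ...
    ]
```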

dnautics a day ago | parent | prev | next [-]

well in elixir you can have two schemas for the same table, which could represent different views, for example, an admin view and a user view. this is not (necessarily) for security but it reduces the number of columns fetched in the query to only what you need for the purpose.

IceDane a day ago | parent | prev [-]

There is no way to autogenerate migrations that work in all cases. There are lots of things out there that can generate migrations that work for most simple cases.

hansonkd 17 hours ago | parent | next [-]

They don't need to work in every case. For the past ~15 years, 100% of the autogenerated migrations I have made for creating tables, adding columns, or renaming columns have just worked, and I have made thousands of migrations at this point.

The only thing to write manually is data migrations from one schema to the other.

etchalon a day ago | parent | prev | next [-]

Django manages to autogenerate migrations that work in the VAST majority of cases.

frankwiles 21 hours ago | parent | prev | next [-]

I end up needing to write a manual migration maybe once every other year in real world use.

boxed a day ago | parent | prev [-]

That's why you can do your own migrations in Django for those edge cases.

Humphrey 3 hours ago | parent | prev | next [-]

100%

I am quite surprised that most languages do not have an ORM and migrations as powerful as Django's. I get that it's Python's dynamic metaprogramming that makes it such a clean API - but I am still surprised that there isn't much that comes close.

ndr 20 hours ago | parent | prev | next [-]

I found it very lacking when it comes to doing CD with no downtime.

It requires a particular dance if you ever want to add/delete a field and make sure both new-code and old-code work with both new-schema and old-schema.

The workaround I found was to run tests with new-schema+old-code in CI when I have schema changes, and then `makemigrations` before deploying new-code.

Are there better patterns beyond "oh you can just be careful"?

rorylaitila 19 hours ago | parent | next [-]

I simplify it this way: I don't delete fields or tables in migrations once an app is in production. I only clean them up manually after it's impossible for them to be used by any production version. I treat the database schema as if it were "append only": only add new fields. This means you always "roll forward" a database; rollback migrations are not a thing to me. I don't rename physical columns in production. If you need an old field and a new field that represent the same datum running simultaneously, a trigger keeps them in sync.
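
The trigger part, as a rough sketch for PostgreSQL (table, column, and function names are invented):

```python
from django.db import migrations

CREATE_FUNCTION_SQL = """
CREATE OR REPLACE FUNCTION copy_old_name_to_new_name() RETURNS trigger AS $$
BEGIN
    NEW.new_name := NEW.old_name;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
"""

CREATE_TRIGGER_SQL = """
CREATE TRIGGER sync_new_name
BEFORE INSERT OR UPDATE ON myapp_item
FOR EACH ROW EXECUTE FUNCTION copy_old_name_to_new_name();
"""

class Migration(migrations.Migration):
    dependencies = [("myapp", "0010_add_new_name")]

    operations = [
        migrations.RunSQL(
            sql=[CREATE_FUNCTION_SQL, CREATE_TRIGGER_SQL],
            reverse_sql=[
                "DROP TRIGGER IF EXISTS sync_new_name ON myapp_item;",
                "DROP FUNCTION IF EXISTS copy_old_name_to_new_name();",
            ],
        ),
    ]
```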

rtpg 3 hours ago | parent | prev | next [-]

Here's a checklist I wrote way back.

https://rtpg.co/2021/06/07/changes-checklist.html

I've been meaning to write an interactive version to sort of "prove" that you really can't do much better than this, at least in general cases.

tmarice 17 hours ago | parent | prev | next [-]

This is not specific to Django; it applies to any project using a database. Here's a list of quite useful resources I used when we had to address this:

* https://github.com/tbicr/django-pg-zero-downtime-migrations

* https://docs.gitlab.com/development/migration_style_guide/

* https://pankrat.github.io/2015/django-migrations-without-dow...

* https://www.caktusgroup.com/blog/2021/05/25/django-migration...

* https://openedx.atlassian.net/wiki/spaces/AC/pages/23003228/...

Generally it's also advisable to set a statement timeout for migrations, otherwise you can end up with unintended downtime -- ALTER TABLE operations very often require an ACCESS EXCLUSIVE lock, and if you're migrating a table that already has, e.g., a very long SELECT from a background task running on it, all other SELECTs will queue up behind the migration and cause request timeouts.

In some cases you can work around this limitation by manually composing operations that require less strict locks, but in our case it was much simpler to just make sure all Celery workers were stopped during migrations.
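
One way to do the timeout part on PostgreSQL, as a sketch (app name and values are illustrative; `SET LOCAL` scopes the settings to the migration's transaction, which Django opens by default):

```python
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("shop", "0030_previous")]

    operations = [
        # Fail fast instead of letting other queries pile up behind the lock.
        migrations.RunSQL("SET LOCAL lock_timeout = '5s';", migrations.RunSQL.noop),
        migrations.RunSQL("SET LOCAL statement_timeout = '30s';", migrations.RunSQL.noop),
        # ... the actual ALTER TABLE operations follow ...
    ]
```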

senko 19 hours ago | parent | prev | next [-]

You can do it in three stages:

1. Make a schema migration that will work both with old and new code

2. Make a code change

3. Clean up schema migration

Example: deleting a field (sketched as code below):

1. Schema migration to make the column optional

2. Remove the field in the code

3. Schema migration to remove the column

Yes, it's more complex than creating one schema migration, but that's the price you pay for zero downtime. If you can relax that to "1s of downtime at midnight on Sunday", you can keep things simpler. And if you do so many schema migrations that you need such things often ... I would submit you're holding it wrong :)
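
Sketched as Django migrations (app, model, and field names are placeholders):

```python
# Stage 1: make the column optional so both old and new code tolerate it.
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [("shop", "0020_previous")]

    operations = [
        migrations.AlterField(
            model_name="order",
            name="legacy_note",
            field=models.TextField(null=True, blank=True),
        ),
    ]

# Stage 2: delete the field from the model and all code paths, then deploy.

# Stage 3: a later migration actually drops the column:
#     migrations.RemoveField(model_name="order", name="legacy_note")
```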

ndr 18 hours ago | parent | next [-]

I'm doing all of these and none of it works out of the box.

Adding a field needs a `db_default`, otherwise old-code fails on `INSERT`; without one you need to audit all the `create`-like calls.

Deleting a field similarly makes old-code fail all `SELECT`s.

For deletion I need a special three-step dance with `managed=False` for one deploy. And for all of these I need to run old-tests on new-schema to see if there's some usage a member of our team missed.
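
For the adding-a-field case, a sketch of what I mean by `db_default` (available since Django 5.0; model and field names are invented):

```python
from django.db import models

class Order(models.Model):
    # ... existing fields ...
    # The default lives in the database itself, not just in Python, so INSERTs
    # from old application code that omit this column still succeed.
    status = models.CharField(max_length=20, db_default="pending")
```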

jgavris 19 hours ago | parent | prev [-]

I was just in the middle of writing something similar above, thanks!

aljarry 19 hours ago | parent | prev | next [-]

One option is to do a multi-stage rollout of your database schema and code over some time window. I recall a blog post here (I think) recently from some Big Company (tm) that would run one step from the plan below every week:

1. Create new fields in the DB.

2. Make the code fill in the old fields and the new fields.

3. Make the code read from new fields.

4. Stop the code from filling old fields.

5. Remove the old fields.

Personally, I wouldn't use it until I really needed it. But a simpler form is good: do the required (additive) schema changes iteratively, one iteration earlier than the code changes. Do the destructive changes one iteration after your code stops using those parts of the schema. There's opposite handling of things like "make a non-nullable field nullable" and "make a nullable field non-nullable", but that's part of the price of smooth operations.

Izkata 7 hours ago | parent [-]

2.5 (if relevant) mass-migrate data from the old column to the new column, so you don't have to wait forever.
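
That step can be a plain data migration, roughly (model and field names are placeholders):

```python
from django.db import migrations
from django.db.models import F

def copy_forward(apps, schema_editor):
    # Backfill the new column from the old one; batching may be kinder to
    # very large tables.
    Item = apps.get_model("myapp", "Item")
    Item.objects.filter(new_name__isnull=True).update(new_name=F("old_name"))

class Migration(migrations.Migration):
    dependencies = [("myapp", "0011_add_new_name")]

    operations = [
        migrations.RunPython(copy_forward, migrations.RunPython.noop),
    ]
```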

m000 19 hours ago | parent | prev | next [-]

Deploying on Kubernetes using Helm solves a lot of these cases: migrations are run at the init stage of the pods. If successful, pods of the new version are started one by one, while the pods of the old version are shut down. For a short period, you have pods of both versions running.

When you add new stuff or make benign modifications to the schema (e.g. add an index somewhere), you won't notice a thing.

If the introduced schema changes are not compatible with the old code, you may get a few ProgrammingErrors raised from the old pods before they are replaced, which is usually acceptable.

There are still some changes that may require planning for downtime, or some other sort of special handling. E.g. upgrading a SmallIntegerField to an IntegerField in a frequently written table with millions of rows.

ndr 18 hours ago | parent [-]

Without care, new-schema will make old-code fail user requests; that is not zero downtime.

m000 17 hours ago | parent [-]

A request not being served can happen for a multitude of reasons (many of them totally beyond your control) and the web architecture is designed around that premise.

So, if some of your pods fail a fraction of the requests they receive for a few seconds, this is not considered downtime for 99% of the use cases. The service never really stopped serving requests.

The problem is not unique to Django by any means. If you insist on being a purist, sure count it as downtime. But you will have a hard time even measuring it.

jgavris 19 hours ago | parent | prev [-]

The general approach is to do multiple migrations (add first and make new-code work with both, deploy, remove old-code, then delete old-schema). This is not specific to Django's ORM in any way; the same goes for any database schema deployment. Take a peek at https://medium.com/@pranavdixit20/zero-downtime-migrations-i... for some ideas.

dnautics a day ago | parent | prev | next [-]

oh the automatic migrations scare the bejesus out of me. i really prefer writing out schemas and migrations like in elixir/ecto. plus i like the option of having two different schemas for the same table (even if i never use it)

dxdm 21 hours ago | parent | next [-]

You can ask Django to show you what exact SQL will run for a migration using `manage.py sqlmigrate`.

You can run raw SQL in a Django migration. You can even substitute your SQL for otherwise autogenerated operations using `SeparateDatabaseAndState`.

You have a ton of control while not having to deal with boilerplate. Things usually can just happen automatically, and it's easy to find out and intervene when they can't.

https://docs.djangoproject.com/en/6.0/ref/django-admin/#djan...

https://docs.djangoproject.com/en/6.0/ref/migration-operatio...
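
A sketch of the `SeparateDatabaseAndState` pattern, e.g. building an index concurrently while keeping Django's model state in sync (names are invented):

```python
from django.db import migrations, models

class Migration(migrations.Migration):
    atomic = False  # CREATE INDEX CONCURRENTLY refuses to run inside a transaction

    dependencies = [("blog", "0008_previous")]

    operations = [
        migrations.SeparateDatabaseAndState(
            database_operations=[
                migrations.RunSQL(
                    sql="CREATE INDEX CONCURRENTLY article_pub_idx ON blog_article (published_at);",
                    reverse_sql="DROP INDEX CONCURRENTLY article_pub_idx;",
                ),
            ],
            state_operations=[
                migrations.AddIndex(
                    model_name="article",
                    index=models.Index(fields=["published_at"], name="article_pub_idx"),
                ),
            ],
        ),
    ]
```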

gtaylor a day ago | parent | prev | next [-]

The nice thing in this case is that Django will meet you where you are with your preferences. Want to go the manual route? Sure. Want it to take a shot at auto-generation and then customize? Very doable. Want to let Django take the wheel fully the majority of the time? Sure.

dnautics 11 hours ago | parent [-]

is this like the "it takes 50 hours to set up a project management tool to work the way you want"? what happens if you onboard a superstar that works with django some other way?

lmm 5 hours ago | parent | next [-]

No. Django is very good at having the autogenerated/default stuff be consistent with what you'd write manually; it's not one of those "if you want to use the magic as-is it all just works, but if you want to customize even one tiny piece you have to manually replicate all of the magic parts" frameworks.

Izkata 6 hours ago | parent | prev | next [-]

Either way the end result is a single file in migrations/ that describes the change, though you do have to write it with Django's API if you want further migrations to work without issues (so no raw SQL, but this low-level API is things like CreateModel() and AddField() - and it's what Django generates automatically from the models, so the auto-generated migrations are easily inspectable and won't change).

Nextgrid 8 hours ago | parent | prev [-]

> what happens if you onboard a superstar that works with django some other way

If you hired a "superstar" who goes out of their way to hand-write migrations in cases where Django can do it by default (the majority of them), you did not in fact get a superstar.

I have yet to see anyone hand-roll migrations on purpose. In fact, the problem is usually the opposite: the built-in migration generator works so well that a lot of people have very little expertise in doing manual migrations, because they've maybe had to do it like 5 times in their entire career.

3eb7988a1663 a day ago | parent | prev [-]

I have never done it, but I believe you could set up multiple schemas under the same database by faking them as different databases, and then use a custom router to flip between them as you like.

That sounds like the path to madness, but I do believe it would work out of the box.
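
A rough sketch of what that could look like (schema and alias names are invented; on PostgreSQL the "different databases" can just be different search_paths on the same database):

```python
# settings.py -- two aliases for the same physical database, different schemas
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "app",
        "OPTIONS": {"options": "-c search_path=public"},
    },
    "reporting": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "app",
        "OPTIONS": {"options": "-c search_path=reporting"},
    },
}
DATABASE_ROUTERS = ["myproject.routers.SchemaRouter"]

# routers.py -- send models from the "reporting" app to the reporting schema
class SchemaRouter:
    def db_for_read(self, model, **hints):
        return "reporting" if model._meta.app_label == "reporting" else "default"

    def db_for_write(self, model, **hints):
        return "reporting" if model._meta.app_label == "reporting" else "default"

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        return (db == "reporting") == (app_label == "reporting")
```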

dnautics a day ago | parent [-]

sounds inconvenient and error-prone

3eb7988a1663 a day ago | parent [-]

It is not much code to set up the router. Now, I don't have a good rationale for why you would want to bounce between schemas, but whatever floats your boat.

dnautics 12 hours ago | parent [-]

yeah, some frameworks call these "lenses". There are even crazy people who write lenses on top of elixir schemas because they don't realize you can just have multiple schemas.

maybe more concretely: if you have a table with a kajillion columns and you want performant views onto some columns (e.g. "give me the metadata only and don't show me the blob columns") without pulling down the entire jungle in the SQL request, there's that.

danmaz74 21 hours ago | parent | prev [-]

Have you ever tried Rails? I think Django's approach to these is an adaptation of it.

jgavris 19 hours ago | parent [-]

Of course, ActiveRecord back in 2005.