francislavoie 3 hours ago

Restarting the DB is unfortunately way too slow. We run the DB in a docker container with a tmpfs (in-memory) volume which helps a lot with speed, but the problem is still the raw compute needed to wipe the tables and re-fill them with the fixtures every time.

ikatson an hour ago | parent | next [-]

How about doing the changes once, then baking them into the DB Docker image, i.e. with "docker commit"?

Then spin up the DB using that image instead of an empty one for every test run.

This assumes, of course, that starting the DB through Docker is faster than what you're doing now.
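The suggested flow could look something like this (a rough sketch; image names, passwords, and the fixture-loader script are all made up for illustration):

```shell
# 1. Start a throwaway DB container and load the fixtures into it once.
#    Caveat: the official mariadb/mysql images declare /var/lib/mysql as a
#    VOLUME, and "docker commit" does not capture volume contents, so you
#    may need to point the datadir somewhere else for this to work.
docker run -d --name db-seed -e MARIADB_ROOT_PASSWORD=test mariadb
./load-fixtures.sh db-seed        # your existing fixture loader (assumed)

# 2. Snapshot the container's filesystem, fixtures included, as an image.
docker commit db-seed my-db:fixtures
docker rm -f db-seed

# 3. Every test run starts from the pre-seeded image instead of an empty one.
docker run -d --rm --name db-test my-db:fixtures
```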

francislavoie an hour ago | parent [-]

Yeah, there's absolutely no way restarting the container will be faster.

renewiltord an hour ago | parent | prev [-]

I have not done this, so it's theorycrafting, but couldn't you do the following?

1. Have a local data dir with initial state

2. Create an overlayfs with a temporary directory

3. Launch your job in your docker container with the overlayfs bind mount as your data directory

4. That’s it. Writes go to the overlay and the base directory is untouched
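The four steps above could be sketched roughly like this (paths are made up; putting the upper and work dirs on tmpfs keeps writes in memory):

```shell
# lower = pristine fixture data; upper/work = scratch dirs overlayfs requires.
mkdir -p /srv/db/base /tmp/db-upper /tmp/db-work /mnt/dbdata

# Stack the writable layer on top of the read-only base.
mount -t overlay overlay \
  -o lowerdir=/srv/db/base,upperdir=/tmp/db-upper,workdir=/tmp/db-work \
  /mnt/dbdata

# Run the DB container with the overlay as its data directory;
# writes land in the upper dir, the base stays untouched.
docker run -d --rm -v /mnt/dbdata:/var/lib/mysql mariadb

# "Reset" = unmount, wipe only the upper layer, remount.
umount /mnt/dbdata
rm -rf /tmp/db-upper /tmp/db-work
```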

francislavoie an hour ago | parent [-]

But how does the reset happen fast? The problem isn't preventing permanent writes or whatever, it's actually resetting for the next test. Also, overlayfs will immediately be slower at runtime than the tmpfs we're already using.

peterldowns an hour ago | parent [-]

Yeah, unfortunately I think it's not really possible to match the speed of a TEMPLATE copy with MariaDB. @EvanElias (maintainer of https://github.com/skeema/skeema) was looking into this at one point; you might consider reaching out to him. He's the foremost MySQL expert that I know.

evanelias 10 minutes ago | parent [-]

Thanks for the kind words Peter!

There's actually a potential solution here, but I haven't personally tested it: transportable tablespaces in either MySQL [1] or MariaDB [2].

The basic idea is that it allows you to take pre-existing table data files from the filesystem and use them directly as a table's data. So with a bit of custom automation, you could have a setup with pre-exported fixture table data files, which you copy at the filesystem level and then import as tablespaces before running each test. A key step is making that filesystem copy fast, either by keeping it in memory (tmpfs) or by using a copy-on-write filesystem.
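For one table, that automation could look roughly like this (database, table, and cache paths are made up; the FLUSH/DISCARD/IMPORT statements are the standard InnoDB transportable-tablespace syntax):

```shell
# One-time export of the fixture data: quiesce the table, copy its files out.
mysql -e "FLUSH TABLES mydb.fixtures FOR EXPORT"
cp /var/lib/mysql/mydb/fixtures.{ibd,cfg} /fixtures-cache/   # keep on tmpfs
mysql -e "UNLOCK TABLES"

# Before each test: throw away the dirty tablespace, restore the copy.
mysql -e "ALTER TABLE mydb.fixtures DISCARD TABLESPACE"
cp /fixtures-cache/fixtures.{ibd,cfg} /var/lib/mysql/mydb/
mysql -e "ALTER TABLE mydb.fixtures IMPORT TABLESPACE"
```

In practice you'd loop this over every fixture table, which is why the per-table overhead mentioned below matters.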

If you have a lot of tables, though, this might not end up much faster than the 0.5-2s figure cited above. IIRC there have also been some edge cases and bugs relating to the transportable tablespace feature over the years, but I'm not really up to speed on its status in recent MySQL or MariaDB.

[1] https://dev.mysql.com/doc/refman/8.0/en/innodb-table-import....

[2] https://mariadb.com/docs/server/server-usage/storage-engines...