francislavoie 3 hours ago
Restarting the DB is unfortunately way too slow. We run the DB in a Docker container with a tmpfs (in-memory) volume, which helps a lot with speed, but the bottleneck is still the raw compute needed to wipe the tables and re-fill them with the fixtures on every run.
ikatson an hour ago | parent
How about doing the changes once, then baking them into the DB Docker image, i.e. "docker commit"? Then spin up the DB from that image instead of an empty one for every test run. This assumes starting the DB through Docker is faster than what you're doing now, of course.
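A rough sketch of the bake-once idea, assuming Postgres and a `fixtures.sql` seed script (image tag, names, and paths are illustrative). One gotcha: the stock postgres image declares `/var/lib/postgresql/data` as a VOLUME, and `docker commit` does not capture volume contents, so the data dir has to be moved via `PGDATA` for the commit to pick it up:

```shell
# Start a throwaway Postgres container. PGDATA is moved off the declared
# VOLUME path, because "docker commit" does not snapshot volume contents.
docker run -d --name db-seed \
  -e POSTGRES_PASSWORD=test \
  -e PGDATA=/var/lib/pg-baked/data \
  postgres:16

# Wait until the server accepts connections, then load the fixtures once.
until docker exec db-seed pg_isready -U postgres; do sleep 0.5; done
docker exec -i db-seed psql -U postgres < fixtures.sql

# Stop cleanly so the data files are consistent, then bake the state.
docker stop db-seed
docker commit db-seed my-db:seeded
docker rm db-seed

# Each test run now starts from the pre-seeded image, not an empty DB.
docker run -d --rm --name db-test \
  -e POSTGRES_PASSWORD=test \
  -e PGDATA=/var/lib/pg-baked/data \
  my-db:seeded
```

Resetting between runs is then just a container restart, which is usually much cheaper than truncating and re-inserting fixtures.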
renewiltord an hour ago | parent
I have not done this so it’s theorycrafting, but can’t you do the following?

1. Have a local data dir with the initial state.

2. Create an overlayfs with a temporary directory as the upper layer.

3. Launch your job in your Docker container with the overlayfs mount as your data directory.

4. That’s it. Writes go to the overlay and the base directory is untouched.
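The steps above might look like this. A sketch, assuming Postgres, root privileges for the mount, and a pre-seeded `./base` data directory (all paths are illustrative):

```shell
# 1. The seeded base data dir is the read-only lower layer (./base).
# 2. Writes land in upper/; work/ is overlayfs's internal scratch space
#    and must be an empty dir on the same filesystem as upper/.
mkdir -p upper work merged
sudo mount -t overlay overlay \
  -o lowerdir=$PWD/base,upperdir=$PWD/upper,workdir=$PWD/work \
  $PWD/merged

# 3. Launch the DB with the merged view as its data directory.
docker run -d --rm --name db-test \
  -v $PWD/merged:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=test postgres:16

# 4. To reset between runs: stop the DB, unmount, and wipe only the
#    upper layer. The base dir was never touched.
docker stop db-test
sudo umount merged
rm -rf upper work && mkdir -p upper work
```

The reset is a directory delete rather than a table wipe, so its cost is independent of how much fixture data the tests churn through.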