creatonez 7 hours ago
For the simple case, it isn't necessarily that fragile. Write the entire database to a temp file, then after flushing, move the temp file to overwrite the old file. All Unix filesystems will ensure the move operation is atomic. Lots of "we dump a bunch of JSON to the disk" use cases could be much more stable if they just did this. It doesn't scale at all, though: all of the data that needs to be self-consistent has to live in the same file, so unnecessary writes go through the roof if you're only doing small updates on a giant file. You still have to handle locking if there's any risk of a stray process messing it up. And doing this only handles part of ACID.
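A minimal sketch of the recipe described above, in Python. The function name `atomic_write_json` and its signature are my own for illustration; the key points are creating the temp file in the same directory as the target (so the rename never crosses a filesystem boundary) and using `os.replace`, which maps to an atomic `rename(2)` on POSIX systems.

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write `data` as JSON to `path` via a temp file plus atomic rename."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Temp file must live in the same directory as the target:
    # rename across filesystems is a copy, not an atomic operation.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # push file contents to stable storage
        os.replace(tmp_path, path)  # atomic rename over the old file
    except BaseException:
        os.unlink(tmp_path)  # don't leave a stray temp file on failure
        raise
```

A crash at any point leaves either the complete old file or the complete new file in place, never a partially written one.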
hunterpayne an hour ago
"All Unix filesystems will ensure the move operation is atomic." This is false, though most filesystems will. There are also a number of fs calls you have to make, which you probably don't know about, to make the operation actually atomic. PS: the way you propose is probably the hardest way to do an atomic FS operation. It has the highest probability of failure and the longest window of operations and service disruption. There is a good reason we move rows one at a time, or in batches sized to match OS buffers.
jeffffff 6 hours ago
don't forget to fsync the file before the rename! and you also need to fsync the directory after the rename!