| ▲ | layer8 20 hours ago |
| Better to fill those files with random bytes, to ensure the filesystem doesn’t apply some “I don’t actually have to store all-zero blocks” sparse-file optimization. To my knowledge no non-compressing filesystem currently does this, but who knows about the future. |
|
| ▲ | zrm an hour ago | parent | next [-] |
| A good way to do this is to create a swap file, both because then you can use it as a swap file until you need to delete it and because swap files are required to not be sparse. |
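A minimal sketch of that swap-file approach (sizes and paths are illustrative, and the swapon/swapoff steps need root, so they are left commented out):

```shell
# Create a non-sparse file by actually writing zeros (swap files must not be sparse).
# 64 MiB here for illustration; a real emergency swap file would be much larger.
dd if=/dev/zero of=/tmp/swapfile bs=1M count=64 status=none
chmod 600 /tmp/swapfile
# Format it as swap if the tool is available (mkswap just writes a signature).
command -v mkswap >/dev/null && mkswap /tmp/swapfile
# sudo swapon /tmp/swapfile    # use it as swap until you need the space back
# sudo swapoff /tmp/swapfile && rm /tmp/swapfile   # reclaim the space in an emergency
du -k /tmp/swapfile
```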
|
| ▲ | nyrikki 16 hours ago | parent | prev | next [-] |
| XFS, ext4, btrfs, etc. all support sparse files, so any application can cause this problem. You can try it yourself with: dd if=/dev/zero of=sparse_file.img bs=1M count=0 seek=1024
If you add conv=sparse to the dd command (with a smaller block size), it will sparsify whatever you copy, too, and the wrong cp flags will explode a sparse file back to full size. This is a much harder problem to deal with than the filesystem layer, because the size reported by stat usually won't match the space actually in use. |
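To see the effect of that dd command, compare the apparent size with the blocks actually allocated (path is illustrative):

```shell
# Create a 1 GiB "file" without writing any data: dd seeks past EOF and
# the filesystem records a hole instead of allocating blocks.
dd if=/dev/zero of=/tmp/sparse_file.img bs=1M count=0 seek=1024 status=none
ls -lh /tmp/sparse_file.img   # apparent size: 1.0G
du -h /tmp/sparse_file.img    # blocks actually allocated: (almost) nothing
```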
| |
| ▲ | layer8 15 hours ago | parent [-] | | Creating sparse files requires the application to purposefully use special calls like fallocate() or seek beyond EOF, like dd with conv=sparse does. You won't accidentally create a sparse file just by filling a file with zeros. | | |
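A quick way to check this claim on your own filesystem (file names are illustrative):

```shell
# Writing zeros byte-for-byte allocates real blocks...
dd if=/dev/zero of=/tmp/written_zeros bs=1M count=8 status=none
# ...while seeking past EOF without writing leaves a hole.
dd if=/dev/zero of=/tmp/seeked_zeros bs=1M count=0 seek=8 status=none
du -k /tmp/written_zeros /tmp/seeked_zeros
```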
| ▲ | nyrikki 14 hours ago | parent [-] | | It is an observability issue; even Zabbix tracked reserved space and inodes 20 years ago. With dedupe, compression, and sparse files you simply can't track utilization from the client's view of the data, which is what du reports. The concrete implementation is what matters and, as this case demonstrates, is what you should alert on: inodes, blocks, extents, etc., not the user's view of data size. Even with rrdtool you could set reasonable alerts, but someone exploding a sparse file with a non-sparse copy makes those heuristics harder. Rsync, ssh, etc. will do that by default. |
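In practice that means monitoring allocated blocks rather than apparent size, e.g. (a sketch; the file name is illustrative):

```shell
# A 512 MiB hole: large apparent size, no blocks allocated.
dd if=/dev/zero of=/tmp/hole.img bs=1M count=0 seek=512 status=none
# What the client "sees" vs. what the filesystem actually allocated:
du -k --apparent-size /tmp/hole.img   # 524288 KiB apparent
du -k /tmp/hole.img                   # ~0 KiB of real blocks
stat -c 'bytes=%s blocks=%b' /tmp/hole.img
```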
|
|
|
| ▲ | freedomben 20 hours ago | parent | prev | next [-] |
| Yep, btrfs will happily do this to you. I verified it the hard way. |
| |
| ▲ | kccqzy 19 hours ago | parent [-] | | Well, btrfs supports compression, so that's understandable. However, I personally prefer to control compression manually, so it only compresses files I've marked for compression with chattr(1). | | |
| ▲ | freedomben 16 hours ago | parent [-] | | I've switched to that also. It surely wastes some space but being able to reason about file space is worth it to me for now |
|
|
|
| ▲ | ape4 20 hours ago | parent | prev [-] |
| If I recall correctly: dd if=/dev/urandom of=/home/myrandomfile bs=1 count=N
|
| |
| ▲ | bugfix 9 hours ago | parent | next [-] | | I just use fallocate to create a 1GB or 2GB file, depending on the total storage size. It has saved me twice now. I had a nasty issue with a docker container log quickly filling up the 1GB space before I could even identify the problem, causing the shell to break down and commands to fail. After that, I started creating a 2GB file. | |
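A sketch of that reserve-file trick (2 GB in the comment above; a small size and an illustrative path here):

```shell
# Pre-allocate blocks without writing data; unlike a seek-created sparse file,
# fallocate actually reserves the space on filesystems that support it.
fallocate -l 8M /tmp/emergency-reserve
du -k /tmp/emergency-reserve
# When the disk fills and even the shell starts failing:
# rm /tmp/emergency-reserve   # instantly frees known-good space
```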
| ▲ | tdeck 9 hours ago | parent | prev | next [-] | | Fwiw you can also do this with head -c 1G /dev/urandom > /home/myrandomfile
And not have to remember dd's bizarre snowflake command syntax. | |
| ▲ | Twirrim 17 hours ago | parent | prev | next [-] | | If you want to do it really quickly openssl enc -aes-256-ctr -pbkdf2 -pass pass:"$(date '+%s')" < /dev/zero | dd of=/home/myrandomfile bs=1M count=1024
Almost all CPUs have AES native instructions so you'll be able to produce pseudorandom junk really fast. Even my old system will produce it at about 3Gb/s. Much faster than urandom can go. | | |
| ▲ | ape4 15 hours ago | parent [-] | | That's very cool. Sadly running that exact command gets an incomplete file and error "error writing output file". It suggests adding iflag=fullblock (to dd). Running that makes a file of the correct size. But still gives "error writing output file". I suppose that occurs because dd breaks the pipe. | | |
| ▲ | Twirrim 10 hours ago | parent [-] | | Weird, I could have sworn that used to work, maybe I wrote the notes down wrong. Easiest alternative I guess is to pipe through head. It still grumbles, but it does work openssl enc -aes-256-ctr -pbkdf2 -pass pass:"$(date '+%s')" < /dev/zero | head -c 10M > foo
|
|
| |
| ▲ | dotancohen 4 hours ago | parent | prev | next [-] | | My choice has always been `shred`: $ sudo truncate --size 1G /emergency-space
$ sudo shred /emergency-space
I find it widely available, even in tiny distros. | |
| ▲ | fragmede 18 hours ago | parent | prev [-] | | bs=1 is a recipe for waiting far longer than you have to because of the overhead of the system calls. Better bs=N count=1 | | |
| ▲ | __david__ 17 hours ago | parent | next [-] | | That’s also not great if you’re trying to make a 10 gigabyte file. In that case, use bs=1M and count=SizeInMB. | | |
| ▲ | marcosdumay 17 hours ago | parent [-] | | Modern computers are crazily overengineered... Most current desktops (smaller than your usual server) won't have any problem with the GP's command. Yours is still better, of course. |
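The bs=1M suggestion above, spelled out (size and path are illustrative):

```shell
# 10 MiB of random data in ten 1 MiB reads/writes instead of
# ten million 1-byte syscall round trips.
dd if=/dev/urandom of=/tmp/myrandomfile bs=1M count=10 status=none
ls -l /tmp/myrandomfile
```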
| |
| ▲ | 17 hours ago | parent | prev [-] | | [deleted] |
|
|