▲ Libbbf: Bound Book Format, a high-performance container for comics and manga (github.com)
61 points by zdw 6 hours ago | 29 comments
▲ dfajgljsldkjag 4 hours ago
The feature matrix says cbz/zip doesn't have random page access, but it definitely does. Zip also supports appending more files without too much overhead. Certainly there's a complexity argument to be made, because you don't actually need compression just to hold a bundle of files. But these days zip just works. The perf measurement charts also make no sense. What exactly are they measuring?

Edit: This reddit post seems to go into more depth on performance: old.reddit.com/r/selfhosted/comments/1qi64pr/comment/o0pqaeo/
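For reference, random page access over a plain CBZ is straightforward with Python's standard zipfile module; a minimal sketch, with the archive name and page index made up:

    import zipfile

    # zipfile reads the central directory at the end of the archive, so
    # listing pages does not require scanning the whole file.
    with zipfile.ZipFile("example.cbz") as cbz:
        pages = sorted(n for n in cbz.namelist()
                       if n.lower().endswith((".jpg", ".png")))

        # Jump straight to an arbitrary page; only that member is read
        # and (if deflated) decompressed.
        with cbz.open(pages[42]) as page:
            data = page.read()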
▲ grumbel 26 minutes ago
This feels like the wrong end to optimize. Zip is plenty fast, especially when it comes to a few hundred pages of a comic. Meanwhile the image decoding can take a while when you want a quick thumbnail overview showing all those hundreds of pages at once. No comic/ebook software I have ever touched has managed to match the responsiveness of an actual book, where you can flip through hundreds of pages in a second with zero loading time, despite it being somewhat trivial to implement when you generate the necessary thumbnail/image-pyramid data first. A multi-resolution image format would make more sense than optimizing the archive format. There would also be room for additional features like multi-language support, searchable text, … that the current "jpg in a zip" doesn't handle (though one might end up reinventing DJVU here).
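A minimal sketch of that pre-generation idea, using Pillow to build a small thumbnail pyramid per page ahead of time (the pyramid sizes, paths, and cache layout are all assumptions, not taken from any existing reader):

    from pathlib import Path
    from PIL import Image

    # Illustrative pyramid levels: longest edge in pixels, smallest first.
    LEVELS = (128, 256, 512)

    def build_pyramid(page_path: Path, cache_dir: Path) -> None:
        """Pre-render downscaled copies of one page so a reader can show an
        overview grid instantly instead of decoding full-size pages on demand."""
        cache_dir.mkdir(parents=True, exist_ok=True)
        with Image.open(page_path) as img:
            for edge in LEVELS:
                thumb = img.copy()
                thumb.thumbnail((edge, edge))  # downscale, preserving aspect ratio
                thumb.save(cache_dir / f"{page_path.stem}_{edge}.jpg", quality=80)

    # Example: pre-build thumbnails for every extracted page of one volume.
    for page in sorted(Path("volume_01").glob("*.jpg")):
        build_pyramid(page, Path("volume_01/.thumbs"))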
▲ lsbehe 31 minutes ago
Why are the metadata blocks the way they are? I see you used pack directives, but there are already plenty of padding and reserved bits. A 19-byte header just seems wrong. https://github.com/ef1500/libbbf/blob/b3ff5cb83d5ef1d841eca1...
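To illustrate why a 19-byte header looks like pack(1) at work, a quick comparison using entirely hypothetical fields (not the actual BBF layout) via Python's struct module:

    import struct

    # Hypothetical header fields, NOT the real BBF layout:
    # magic (4 bytes), version (u8), flags (u16), page_count (u32), index_offset (u64)
    FIELDS = "4sBHIQ"

    packed  = struct.calcsize("<" + FIELDS)  # "<": no padding, like #pragma pack(1)
    natural = struct.calcsize("@" + FIELDS)  # "@": native alignment rules

    print(packed, natural)  # 19 24 on a typical x86-64 build; the packed
                            # form leaves the u16/u32/u64 fields misaligned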
▲ its-summertime 4 hours ago
https://www.reddit.com/r/selfhosted/comments/1qi64pr/i_got_i...
▲ its-summertime 2 hours ago
Thinking more about this:

- ZIP files can be set up to have the data on whatever alignment one chooses (as noted in the reddit thread); a sketch of one way to do that follows below.
- Integrity checks can be done in parallel simply by running them in parallel.
- mmap is possible just by not using zip compression.
- On integrity-checking speed in a saturated context (N workers, regardless of whether it's multiple workers per file or one worker per file), CRC32(C) seems to be nearly twice as fast as xxhash: https://btrfs.readthedocs.io/en/latest/Checksumming.html
- ZIP can also carry arbitrary metadata.

I think this could all have been backported to ZIP files themselves.
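A rough sketch of that alignment trick in Python, assuming stored (uncompressed) entries and padding injected via a throwaway extra field in the local header, roughly what Android's zipalign does; the 4096-byte target, the 0xffff field id, and the file names are arbitrary choices for illustration:

    import zipfile

    ALIGN = 4096  # illustrative target; pick whatever alignment you want

    def write_aligned(zf: zipfile.ZipFile, name: str, data: bytes) -> None:
        """Store `data` uncompressed so its first byte lands on an ALIGN boundary.
        The padding lives in the local header's extra field, so any standard
        unzip tool still reads the archive normally."""
        info = zipfile.ZipInfo(name)
        info.compress_type = zipfile.ZIP_STORED  # uncompressed, hence mmap-able
        header_offset = zf.fp.tell()
        # local file header = 30 fixed bytes + filename + extra field
        data_offset = header_offset + 30 + len(name.encode("utf-8"))
        pad = (-data_offset) % ALIGN
        if 0 < pad < 4:
            pad += ALIGN  # an extra-field record needs at least a 4-byte header
        if pad:
            # padding record: arbitrary id 0xffff, 2-byte length, then zero bytes
            info.extra = (b"\xff\xff" + (pad - 4).to_bytes(2, "little")
                          + b"\x00" * (pad - 4))
        zf.writestr(info, data)

    with zipfile.ZipFile("aligned.cbz", "w") as zf:
        for page in ("p001.jpg", "p002.jpg"):  # hypothetical page files
            with open(page, "rb") as f:
                write_aligned(zf, page, f.read())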
▲ riffraff 4 hours ago
At a glance this looks like an obviously nicer format than a zip of jpegs, but I struggle to think of a time I thought "wow, CBZ is a problem here". I didn't even realize random access is not possible, presumably because readers just support it by linear scanning or putting everything in memory at once, and comic size is peanuts compared to modern memory size. I suppose this becomes more useful if you have multiple issues/volumes in a single archive.
▲ remix2000 4 hours ago
I thought zips already support random access?
▲ PufPufPuf 2 hours ago
"Native Data Deduplication" not supported in CBZ/CBR? But those are just ZIP/RAR, which are compression formats, deduplication is their whole deal...? | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||
▲ chromehearts 3 hours ago
But with which library are you able to host these? And which scraper currently finds manga with chapters in that file format? Does anybody have experience hosting their own manga server & downloading them?
▲ sedatk 3 hours ago
> Footer indexed

So, like ZIP?

> Uses XXH3 for integrity checks

I don't think XXH3 is suitable for that purpose. It's not cryptographically secure and designed mostly for stuff like hash tables (e.g. relatively small data).
▲ aidenn0 4 hours ago
I assume the comparison table is supposed to have something other than footnotes (e.g. check-marks or X's)? That's not showing for me on Firefox.
▲ jmillikin 2 hours ago
I use CBZ to archive both physical and digital comic books, so I was interested in the idea of an improved container format, but the claimed improvements here don't make sense.

---

For example, they make a big deal about each archive entry being aligned to a 4 KiB boundary "allowing for DirectStorage transfers directly from disk to GPU memory", but the pages within a CBZ are going to be encoded (JPEG/PNG/etc) rather than just being bitmaps. They need to be decoded first; the GPU isn't going to let you create a texture directly from JPEG data.

Furthermore the README says "While folders allow memory mapping, individual images within them are rarely sector-aligned for optimized DirectStorage throughput" which ... what? If an image file needs to be sector-aligned (!?) then a BBF file would also need to be, else the 4 KiB alignment within the file doesn't work, so what is special about the format that causes the OS to place its files differently on disk? Also, the official DirectStorage docs (https://github.com/microsoft/DirectStorage/blob/main/Docs/De...) say no such alignment is required.

Where is the supposed 4 KiB alignment restriction even coming from? There are zip-based formats that align files so they can be mmap'd as executable pages, but that's not what's happening here, and I've never heard of a JPEG/PNG/etc image decoder that requires aligned buffers for the input data. Is the entire 4 KiB alignment requirement fictitious?

---

The README also talks about using xxhash instead of CRC32 for integrity checking (the OP calls it "verification"), claiming this is more performant for large collections, but this is insane: CRC32 is limited by memory bandwidth if you're using a normal (i.e. SIMD) implementation. Assuming 100 GiB/s throughput, a typical comic book page (a few megabytes) will take like ... a millisecond? And there's no data dependency between file content checksums in the zip format, so for a CBZ you can run the CRC32 calculations in parallel for each page, just like BBF says it does.

But that doesn't matter, because to actually check the integrity of archived files you want to use something like sha256, not CRC32 or xxhash. Checksum each archive (not each page), store that checksum as a `.sha256` file (or whatever), and now you can (1) use normal tools to check that your archives are intact, and (2) record those checksums as metadata in the blob storage service you're using.

---

The Reddit thread has more comments from people who have noticed other sorts of discrepancies, and the author is having a really difficult time responding to them in a coherent way. The most charitable interpretation is that this whole project (supposed problems with CBZ, the README, the code) is the output of an LLM.
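For the sha256-sidecar suggestion above, a minimal sketch (the `comics/` library path and the one-sidecar-per-archive layout are assumptions) that writes files in a sha256sum-compatible format:

    import hashlib
    from pathlib import Path

    def write_sidecar(archive: Path) -> Path:
        """Hash the whole archive and write `<name>.sha256` next to it, using the
        `<hex>  <filename>` layout that sha256sum-style tools can verify."""
        digest = hashlib.sha256()
        with archive.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        sidecar = archive.with_suffix(archive.suffix + ".sha256")
        sidecar.write_text(f"{digest.hexdigest()}  {archive.name}\n")
        return sidecar

    # Example: checksum every CBZ in a library directory.
    for cbz in Path("comics").rglob("*.cbz"):
        write_sidecar(cbz)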
▲ yonisto 3 hours ago
Honest question, something I don't understand: if you use DirectStorage to move images directly to the GPU (I assume into VRAM), where does the decoding take place? Directly on the GPU? Can a GPU decode PNG? It's a very GPU-unfriendly format as far as I know.