Kim_Bruning a day ago

Now I'm wondering why this works. DNA clearly has some interesting redundancy strategies. (it might also depend on genome?)

dwattttt a day ago | parent | next [-]

The FASTA format stores nucleotides in text form... compression is used to make this tractable at genome sizes, but it's by no means perfect.

Depending on what you need to represent, you can get a 4x reduction in data size without any compression at all, just by representing each of G, A, T, C with 2 bits rather than 8.
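A minimal sketch of such a 2-bit packing (the base-to-bits mapping and byte layout here are arbitrary choices for illustration, not a standard format):

```python
# Hypothetical 2-bit nucleotide packing: each of A, C, G, T maps to
# 2 bits, so 4 bases fit in one byte (4x smaller than ASCII).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | CODE[base]
        # left-align a final partial group so full groups stay aligned
        byte <<= 2 * (4 - len(group))
        out.append(byte)
    return bytes(out)

print(len("GATTACAGATTACA"))         # 14 bytes as ASCII text
print(len(pack("GATTACAGATTACA")))   # 4 bytes packed
```

This ignores real-world complications like ambiguity codes (N, R, Y, ...), which is partly why FASTA sticks to text.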

Compressing that packed form "should" then result in the same compressed size as compressing the original text (after all, the "information" being compressed is the same), except that compression isn't perfect.

Newlines are an example of something that's "information" in the text format that isn't relevant, yet the compression scheme didn't know that.
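A toy demonstration of that point, using zlib from the Python stdlib as a stand-in for Zstd: the same sequence data compresses to a different size once line breaks are inserted, even though the newlines carry no biological information.

```python
import zlib

# A highly repetitive "sequence", then the same data wrapped to
# 60-character lines as FASTA files conventionally are.
seq = "GATTACA" * 5000
wrapped = "\n".join(seq[i:i + 60] for i in range(0, len(seq), 60))

plain = len(zlib.compress(seq.encode(), 9))
lines = len(zlib.compress(wrapped.encode(), 9))
print(plain, lines)  # the wrapped form is at least as large
```

The effect here is small because zlib's window easily covers the shifted repeats; it gets worse when identical content sits far apart, where long-range matching has to rely on hashes.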

hyghjiyhu a day ago | parent [-]

I think one important factor you failed to account for is frameshifting. Compression algorithms work on bytes - 8 bits. Imagine you have the exact same sequence, but it occurs at different offsets mod 4. Then your encoding will give completely different bytes, and the compression algorithm will be unable to exploit the repetition.
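To illustrate, here is the phase problem with a hypothetical 2-bit packing (again an arbitrary mapping, just for demonstration): shifting the same motif by one base changes every packed byte.

```python
# Hypothetical 2-bit packing; trailing partial groups are dropped
# to keep the comparison to whole bytes.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq) - len(seq) % 4, 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | CODE[base]
        out.append(b)
    return bytes(out)

motif = "GATTACAGATTACA"
print(pack("A" + motif).hex())    # motif at offset 1
print(pack("AA" + motif).hex())   # offset 2: entirely different bytes
```

A byte-oriented matcher sees no shared substring between the two packed outputs, even though 14 of the bases are identical.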

dwattttt a day ago | parent [-]

I was actually under the impression compression algorithms tend to work over a bitstream, but I can't entirely confirm that.

Sesse__ a day ago | parent | next [-]

Some are bit-by-bit (e.g. the PPM family of compressors[1]), but the normal input granularity for most compressors is a byte. (There are even specialized ones that work on e.g. 32 bits at a time.)

[1] Many of the context models in a typical PPM compressor will be byte-by-byte, so even that isn't fully clear-cut.

bede 9 hours ago | parent | prev | next [-]

A Zstd maintainer clarified this: https://news.ycombinator.com/item?id=45251544

> Ultimately, Zstd is a byte-oriented compressor that doesn't understand the semantics of the data it compresses

vintermann a day ago | parent | prev [-]

They output a bitstream, yeah, but I don't know of anything general-purpose that effectively consumes input smaller than bytes (unless you count various specialized handlers within general-purpose compression algorithms, e.g. for long lists of floats).

vintermann a day ago | parent | prev [-]

This is a dataset of bacterial DNA. Any two related bacteria will have long strings of the same letters. But it won't be neatly aligned, so the line breaks will mess up pattern matching.

bede a day ago | parent | next [-]

Exactly. The line breaks interrupt runs of otherwise identical bytes in identical sequences. Unless two identical subsequences are exactly in phase with respect to their line breaks, the hashes used for long-range matching differ even though the underlying sequences are identical.
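A small sketch of that: wrapping the same subsequence at two different phases produces different text, so the identical region no longer appears as a contiguous match.

```python
# FASTA-style fixed-width wrapping (60 columns is conventional).
def wrap(seq: str, width: int = 60) -> str:
    return "\n".join(seq[i:i + width] for i in range(0, len(seq), width))

shared = "GATTACA" * 20            # an identical 140-base subsequence
rec_a = wrap("A" * 10 + shared)    # shared region starts at offset 10
rec_b = wrap("A" * 25 + shared)    # same region, different phase

# The raw sequences both contain the identical substring...
assert shared in "A" * 10 + shared and shared in "A" * 25 + shared
# ...but after wrapping, newlines cut it at different points in each.
print(shared in rec_a, shared in rec_b)  # False False
```

Any matcher that hashes fixed windows of the wrapped text will see mostly non-matching windows across the two records.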

amelius a day ago | parent | prev [-]

And the compressor does not think: "how can I make these two sequences align better without wasting a lot of space?"

ebolyen a day ago | parent | next [-]

No, because alignment in the general case is O(n^2) - far too expensive for a compressor to attempt at genome scale. Ironically, it is one of the more tractable and well-solved problems in bioinformatics.
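For a sense of where the O(n^2) comes from, here is the classic dynamic-programming recurrence (edit distance, the skeleton of Needleman-Wunsch alignment), which fills an (n+1) x (m+1) table:

```python
# Quadratic-time alignment core: every cell depends on its three
# neighbors, so the table must be filled in full -- n * m work.
def edit_distance(a: str, b: str) -> int:
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # match / substitution
    return dp[n][m]

print(edit_distance("GATTACA", "GACTATA"))  # → 2
```

Bioinformatics tools sidestep the quadratic cost with seed-and-extend heuristics; a general-purpose compressor has no such domain knowledge to lean on.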

tiagod a day ago | parent | prev [-]

The compressor doesn't think about anything. Also, Zstd doesn't have the goal of reaching the highest possible compression ratio. It's geared more toward low-overhead, high-throughput compression and decompression.