| ▲ | unconed 5 hours ago | |
Sorry, but this post is the blind leading the blind, pun intended. Allow me to explain; I have a DSP degree.

The reason the filters used in the post are easily reversible is that none of them are binomial (i.e. the discrete equivalent of a Gaussian blur). A binomial blur uses the coefficients of a row of Pascal's triangle, and is thus what you get when you repeatedly average each pixel with its neighbor (in 1D). When you do, the information at the Nyquist frequency is removed entirely, because a signal of the form "-1, +1, -1, +1, ..." ends up blurred _exactly_ into "0, 0, 0, 0, ...".

All the other blur filters, in particular the moving average, are just poorly conceived. They filter out the middle frequencies the most, not the highest ones. It's equivalent to applying a bandpass filter and then subtracting the result from the original image.

Here's an interactive notebook that explains this in the context of time series. One important point is that the "look" people associate with "scientific data series" is actually an artifact of moving averages. If a proper filter is used, the blurriness of the signal is evident.

https://observablehq.com/d/a51954c61a72e1ef
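For concreteness, here's a quick sketch of that claim in plain Python (the helper names are mine): repeated neighbor averaging builds a binomial kernel, and a single pass already sends the Nyquist-frequency signal to exactly zero.

```python
def neighbor_average(x):
    """One pass: y[i] = (x[i] + x[i+1]) / 2 (output is one sample shorter)."""
    return [(a + b) / 2 for a, b in zip(x, x[1:])]

def binomial_blur(x, passes):
    """k passes of neighbor averaging == convolving with row k of Pascal's triangle, scaled by 1/2**k."""
    for _ in range(passes):
        x = neighbor_average(x)
    return x

# An impulse reveals the kernel: two passes give [1, 2, 1] / 4.
print(binomial_blur([0, 0, 0, 1, 0, 0, 0], 2))   # [0.0, 0.25, 0.5, 0.25, 0.0]

# The Nyquist-frequency signal is annihilated exactly, in a single pass.
nyquist = [(-1) ** n for n in range(12)]
print(binomial_blur(nyquist, 1))                 # all zeros
```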
| ▲ | the_fall 19 minutes ago | |
If you have an endless pattern of ..., -1, +1, -1, +1, ... and run a box blur with a window of 4, you get ..., 0, 0, 0, 0, ... too. Other than that, you're not wrong about theoretical Gaussian filters applied to infinite data, but this has little to do with the scenario the article talks about, which is essentially about the information leaked at the edge of the blur region, and then by the discrete steps of a finite-size sliding window.
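A quick check of that point in plain Python (the function name is mine): an even-length box window nulls the alternating Nyquist pattern exactly, while an odd-length one lets it through attenuated.

```python
def box_blur(x, window):
    """Moving average over a sliding window (valid region only, no padding)."""
    return [sum(x[i:i + window]) / window for i in range(len(x) - window + 1)]

alternating = [(-1) ** n for n in range(12)]   # +1, -1, +1, -1, ...
print(box_blur(alternating, 4))   # every length-4 window sums to 0 -> all zeros
print(box_blur(alternating, 3))   # odd window: the pattern survives at +/- 1/3
```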
| ▲ | jerf 2 hours ago | |
"In today’s article, we’ll build a rudimentary blur algorithm and then pick it apart." Emphasis mine; the quote is from the beginning of the article. This isn't meant to be a textbook about blurring algorithms. It was supposed to be a demonstration of how what may seem destroyed to a casual viewer is recoverable by a simple process, intended to give the viewer some intuition that maybe blurring isn't such a good information destroyer after all. Your post kind of comes off like criticizing someone who is showing how easy it is to crack a Caesar cipher for not using AES-256. But the whole point was to be accessible, and to introduce the idea that just because something looks unreadable doesn't mean it's not very easy to recover. No, it's not a mistake to use the Caesar cipher for the initial introduction. Or a dead-simple one-dimensional blurring algorithm.
| ▲ | yunnpp 3 hours ago | |
Interesting... I've used moving averages without thinking too hard about the underlying implications. Do you recommend any particular book or resource on DSP basics for the average programmer?
| ▲ | jszymborski 3 hours ago | |
> Sorry but this post is the blind leading the blind, pun intended. Allow me to explain, I have a DSP degree.

FWIW, this does not read as constructive.