vincenthwt 2 days ago

It really depends on the application. If the illumination is consistent, such as in many machine vision tasks, traditional thresholding is often the better choice. It’s straightforward, debuggable, and produces consistent, predictable results. On the other hand, in more complex and unpredictable scenes with variable lighting, textures, or object sizes, AI-based thresholding can perform better.

That said, I still prefer traditional thresholding in controlled environments because the algorithm is understandable and transparent.

Debugging issues in AI systems can be challenging due to their "black box" nature. If the AI fails, you might need to analyze the model, adjust training data, or retrain, a process that is neither simple nor guaranteed to succeed. Traditional methods, however, allow for more direct tuning and certainty in their behavior. For consistent, explainable results in controlled settings, they are often the better option.
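That transparency is easy to see in code: Otsu's method, for instance, is just a histogram plus a search for the threshold that maximizes between-class variance, with every intermediate value inspectable. A minimal NumPy sketch (the synthetic image and its values are purely illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    probs = hist / hist.sum()
    omega = np.cumsum(probs)                 # cumulative weight of the "low" class
    mu = np.cumsum(probs * np.arange(256))   # cumulative mean intensity
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # empty classes contribute nothing
    return int(np.argmax(sigma_b))

# Synthetic bimodal scene: dark background, bright part.
img = np.full((100, 100), 40, dtype=np.uint8)
img[30:70, 30:70] = 200
t = otsu_threshold(img)
mask = img > t                               # every step above is inspectable
```

If the result looks wrong, you can plot the histogram or `sigma_b` and see exactly why — the kind of direct debugging the comment above is describing.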

shash 2 days ago | parent

Not to mention performance. Often the traditional method is the only one that can keep up with throughput requirements without massive hardware upgrades.

Counterintuitively, I’ve often found that CNNs are worse at thresholding in many circumstances than a simple Otsu or adaptive threshold. My usual technique is to start with the least complex algorithm and work my way up the ladder only when needed.
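For concreteness, one rung up from a global Otsu threshold is an adaptive (local-mean) threshold, which a few lines of NumPy can express via an integral image. The block size, offset, and brighter-than-neighborhood foreground convention below are illustrative choices, not a reference implementation:

```python
import numpy as np

def adaptive_threshold(gray, block=15, offset=0.1):
    """Per-pixel threshold from the local mean, via an integral image.

    A pixel is foreground if it exceeds the mean of the surrounding
    block-by-block window by more than `offset` -- robust to slow
    illumination gradients that defeat a single global threshold.
    """
    g = gray.astype(np.float64)
    pad = block // 2
    p = np.pad(g, pad, mode="edge")
    # Integral image with a zero first row/column, so each window sum
    # is four corner lookups.
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
    h, w = g.shape
    y = np.arange(h)[:, None]
    x = np.arange(w)[None, :]
    window_sum = (ii[y + block, x + block] - ii[y, x + block]
                  - ii[y + block, x] + ii[y, x])
    local_mean = window_sum / (block * block)
    return g > local_mean + offset

# Background with a strong left-to-right gradient plus two small bright
# objects -- one over the dark side, one over the bright side.
img = np.tile(np.linspace(0.0, 1.0, 60), (60, 1))
img[10:16, 5:11] += 0.8
img[40:46, 48:54] += 0.8
mask = adaptive_threshold(img)
```

On this gradient image a single global cut would have to misclassify part of the ramp; comparing each pixel to its own neighborhood sidesteps that, at the cost of two more tuning knobs.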

MassPikeMike a day ago | parent

I am usually working with historical documents, where both Otsu and adaptive thresholding are frustratingly almost but not quite good enough. My go-to approach lately is "DeepOtsu" [1]. I like that it combines the best of both the traditional and deep learning worlds: a deep neural net enhances the image such that Otsu thresholding is likely to work well.

[1] https://arxiv.org/abs/1901.06081

shash a day ago | parent

OK, those are impressive results. A nice addition to the toolbox.

hansvm a day ago | parent

Something I've had a lot of success with (in cases where you're automating the same task with the same lighting) is having a human operator manually choose a variety of in-sample and out-of-sample regions, ideally with some of those being near real boundaries. Then train a (very simple -- details matter, but not a ton) local model to operate on small image patches and output probabilities for each pixel.
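A minimal sketch of that scheme, assuming the "very simple model" is a logistic regression over flattened k-by-k patches; the click coordinates, patch size, and training constants are all illustrative stand-ins for real operator input:

```python
import numpy as np

def extract_patches(img, coords, k=3):
    """Flattened k-by-k patches centered on each (row, col) in `coords`."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    return np.array([p[r:r + k, c:c + k].ravel() for r, c in coords])

def train_logistic(X, y, steps=500, lr=0.5):
    """Tiny logistic regression by gradient descent -- the 'simple local
    model'. Returns weights (last entry is the bias)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        pred = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (pred - y) / len(y)
    return w

def predict_proba(img, w, k=3):
    """Foreground probability for every pixel; doubles as an alpha matte."""
    h, wd = img.shape
    coords = [(r, c) for r in range(h) for c in range(wd)]
    Xb = np.hstack([extract_patches(img, coords, k),
                    np.ones((h * wd, 1))])
    return (1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))).reshape(h, wd)

# Simulated operator clicks on a noisy scene: dark background, bright object.
rng = np.random.default_rng(0)
img = np.full((40, 40), 0.2) + rng.normal(0.0, 0.02, (40, 40))
img[15:25, 15:25] += 0.6
bg_clicks = [(2, 2), (5, 30), (35, 8), (30, 35)]      # out-of-sample regions
fg_clicks = [(17, 17), (20, 20), (22, 18), (18, 22)]  # in-sample regions
X = extract_patches(img, bg_clicks + fg_clicks)
y = np.array([0] * len(bg_clicks) + [1] * len(fg_clicks))
w = train_logistic(X, y)
alpha = predict_proba(img, w)   # per-pixel probability over the whole image
mask = alpha > 0.5
```

Thresholding `alpha` gives a hard mask, while keeping it as-is gives the soft per-pixel probabilities usable as transparency.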

One fun thing is that with a simple model it's not much slower than techniques like Otsu (you're still doing a roughly constant amount of vectorized, fast math per pixel), but you get an alpha channel for free even when working in color spaces, allowing you to near-perfectly segment the background out of an image.

The UX is also dead simple. If a human operator doesn't like the results, they just click around the image to refine the segmentation. They can then apply the model directly to a batch of images; or, if each image needs its own refinement, there are straightforward ways to transfer most of the learned information from one image to the next, so the rest of the batch requires much less operator input.

As an added plus, it also works well even for gridlines and other stranger backgrounds, still without needing any fancy algorithms.