vincenthwt 2 days ago
It really depends on the application. If the illumination is consistent, such as in many machine vision tasks, traditional thresholding is often the better choice. It's straightforward, debuggable, and produces consistent, predictable results. On the other hand, in more complex and unpredictable scenes with variable lighting, textures, or object sizes, AI-based thresholding can perform better.

That said, I still prefer traditional thresholding in controlled environments because the algorithm is understandable and transparent. Debugging issues in AI systems can be challenging due to their "black box" nature. If the AI fails, you might need to analyze the model, adjust training data, or retrain, a process that is neither simple nor guaranteed to succeed. Traditional methods, however, allow for more direct tuning and certainty in their behavior. For consistent, explainable results in controlled settings, they are often the better option.
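To make the "transparent and tunable" point concrete, here's a rough sketch of a fixed global threshold in plain NumPy (the function name and pixel values are just made up for illustration): every parameter is visible, and if it misfires you adjust one number rather than retraining a model.

```python
import numpy as np

def threshold_fixed(image, t):
    """Binarize: pixels brighter than t become 1, the rest 0."""
    return (image > t).astype(np.uint8)

# Hypothetical 2x3 grayscale patch.
img = np.array([[10, 120, 250],
                [40, 200,  90]], dtype=np.uint8)

# The entire "model" is this one tunable parameter.
mask = threshold_fixed(img, 128)
```

When the output looks wrong, the failure mode is immediately explainable: either the cutoff is too high or too low, and you can see exactly which pixels flipped.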
shash 2 days ago | parent
Not to mention performance. So often, the traditional method is the only thing that can keep up with performance requirements without needing massive hardware upgrades. Counterintuitively, I've often found that CNNs are worse at thresholding in many circumstances than a simple Otsu or adaptive threshold. My usual technique is to use the least complex algorithm and work my way up the ladder only when needed.
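For anyone unfamiliar with Otsu's method: it just picks the cutoff that maximizes between-class variance over the image histogram. A minimal sketch in plain NumPy (the function name and test image are mine, not from any library; in practice you'd call OpenCV's `cv2.threshold` with `THRESH_OTSU`):

```python
import numpy as np

def otsu_threshold(image):
    """Return the cutoff maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)  # sum of all pixel intensities
    best_t, best_var = 0, 0.0
    w0, sum0 = 0.0, 0.0  # weight and intensity sum of the dark class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                 # mean of the dark class
        mu1 = (sum_all - sum0) / w1     # mean of the bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: the cutoff lands between them.
img = np.array([[20, 25, 30], [200, 205, 210]], dtype=np.uint8)
t = otsu_threshold(img)               # -> 30 for this image
binary = (img > t).astype(np.uint8)
```

It's a single histogram pass, no training data, and fully deterministic, which is why it's so hard for a CNN to beat on throughput.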