menaerus 3 hours ago
I thought OCR was one of the obvious examples where we have a classical technology that already works very well, but in the long run I don't see it surviving. _Generic_ AI models can already do OCR reasonably well, and they aren't even trained for that purpose - it's almost incidental. They were never trained to extract, say, a name and surname from a document with a completely unfamiliar structure, yet the crazy thing is that it somehow works. Once somebody fine-tunes a model for this purpose alone, I think there's a good chance it will outperform the classical approach in precision and scalability.
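A minimal sketch of the kind of zero-shot field extraction described above, assuming the OpenAI Python SDK (>=1.x) and some vision-capable chat model; the model name, prompt wording, and two-field schema are illustrative assumptions, not details from the comment.

```python
# Hypothetical sketch: ask a generic vision-language model (never trained on
# this document layout) to return name/surname as structured JSON.
import base64
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def extract_name_fields(image_path: str) -> dict:
    """Send a document image plus an extraction prompt; parse the JSON reply."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any vision-capable model
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract the document holder's given name and surname. "
                         'Answer only with JSON like {"name": "...", "surname": "..."}.'},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)
```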
srean 3 hours ago | parent
In general I agree, and for OCR I agree vehemently. Part of the reason is that the structure of the solution (convolutions) matches the structure of the problem so well. The failure cases are those where AI solutions have to stay in a continuous debug-train-update mode; then you have to think about the resources, both people and compute, needed to maintain such a solution. Because of the way the world works, with its endemic nonstationarity, the debug-retrain-update cycle is a common state of affairs even in traditional stats and ML.
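To make that maintenance burden concrete, here is a hypothetical sketch of a debug-retrain-update loop under drift; evaluate(), retrain(), the threshold, and the batch format are all placeholder assumptions, not anything from the thread.

```python
# Hypothetical sketch: monitor live accuracy on freshly labelled batches and
# retrain/redeploy when nonstationarity pushes it below an acceptable band.
from dataclasses import dataclass
from typing import Iterable, List, Tuple

Batch = List[Tuple[str, str]]  # (model input, fresh human label)

@dataclass
class Model:
    version: int = 0

def evaluate(model: Model, batch: Batch) -> float:
    """Placeholder: accuracy of the live model on newly labelled data."""
    return 0.9  # stand-in value

def retrain(model: Model, batch: Batch) -> Model:
    """Placeholder: refit / fine-tune on the recent distribution."""
    return Model(version=model.version + 1)

def maintenance_loop(model: Model, stream: Iterable[Batch],
                     baseline: float = 0.95, tolerance: float = 0.03) -> Model:
    """Keeping this loop staffed and running is the ongoing people-and-compute
    cost referred to above."""
    for batch in stream:
        score = evaluate(model, batch)
        if score < baseline - tolerance:      # drift detected
            model = retrain(model, batch)     # debug/retrain on recent data
            print(f"accuracy {score:.2f} below band; deployed v{model.version}")
    return model
```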