srean | 3 hours ago
In general I agree. For OCR I agree vehemently. Part of the reason is that the structure of the solution (convolutions) matches the problem space so well. The failure cases are those where AI solutions have to stay in a continuous debug, train, update mode. Then you have to think about the resources, both people and compute, needed to maintain such a solution. Because of the way the world works, with its endemic nonstationarity, the debug-retrain-update loop is a common state of affairs even in traditional stats and ML.
menaerus | 2 hours ago | parent
I see. Let's take another example, and I hope I've understood you: imagine an AI model connected to all of your company's in-house data sources such as wiki, chat, Jira, emails, merge requests, Excel sheets, and so on; basically everything that can be deemed useful to query or to build business intelligence on top of. These sources generate more data every day, and by their nature it is more or less unstructured. Yet we have such systems in place where we don't have to retrain the model on the ever-growing data. This is just one example, but it suggests that models, at least for some purposes, don't have to be retrained continuously to keep working well. I also use the technique of explaining something to the model that it has not seen before (prompted by a wrong answer it gave me earlier), and it manages to revise its steps, whatever they are, so that it gives me the correct answer in the end. This also suggests that the models' capacity is larger than what they were trained on.
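What you're describing is basically retrieval-augmented generation: the model's weights stay frozen, and new wiki pages, tickets, etc. are only indexed and pasted into the prompt as context at query time. A minimal sketch of that idea, where llm_complete is a hypothetical stand-in for whatever hosted model API is actually used, and the toy word-overlap scorer stands in for a real embedding index:

    from collections import Counter
    import math

    def llm_complete(prompt):
        # Placeholder for the actual (frozen) model call; any LLM API fits here.
        return "[model answer grounded in]\n" + prompt

    def tokenize(text):
        return text.lower().split()

    def overlap(query_tokens, doc):
        # Toy relevance score; real systems use embeddings and a vector index.
        counts = Counter(tokenize(doc))
        return sum(counts[t] for t in query_tokens) / math.sqrt(len(counts) + 1)

    def answer(query, corpus, k=2):
        q = tokenize(query)
        top = sorted(corpus, key=lambda d: overlap(q, d), reverse=True)[:k]
        prompt = "Context:\n" + "\n".join(top) + "\n\nQuestion: " + query
        return llm_complete(prompt)

    corpus = [
        "wiki: production deploys run every Tuesday",
        "jira: OCR-42 fixed the page-skew bug",
    ]
    corpus.append("chat: deploys moved to Thursday this week")  # new data, no retraining
    print(answer("when do deploys run?", corpus))

The point is in the last two lines: keeping the answers current is an append to the index, not a debug-retrain-update cycle.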