menaerus 3 hours ago

I thought OCR was one of the obvious examples: we have a classical technology that already works very well, but in the long run I don't see it surviving. _Generic_ AI models can already do OCR reasonably well, and they aren't even trained for that purpose - it's almost incidental. They were never trained to extract, say, a name or surname from a document with a completely unfamiliar structure, yet the crazy thing is that it somehow works. Once somebody fine-tunes a model for this purpose alone, I think there's a good chance it will outperform the classical approach in precision and scalability.
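To make the contrast concrete, here's roughly what the classical pipeline looks like - a minimal sketch assuming pytesseract (a Tesseract wrapper) and Pillow are installed; the file path and the "Surname:" regex are made-up illustrations. The OCR step itself is a one-liner; it's the per-layout field extraction on top that's brittle:

    # Classical OCR baseline: Tesseract via pytesseract.
    import re
    from PIL import Image
    import pytesseract

    def ocr_text(path: str) -> str:
        """Run plain OCR over a scanned document image."""
        return pytesseract.image_to_string(Image.open(path))

    text = ocr_text("scan.png")  # hypothetical input file

    # Field extraction is the brittle part: one hand-written rule
    # per known layout, which is exactly what a generic model skips.
    match = re.search(r"Surname:\s*(\w+)", text)
    surname = match.group(1) if match else None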

srean 3 hours ago | parent [-]

In general I agree. For OCR I agree vehemently. Part of the reason is that the structure of the solution (convolutions) matches the structure of the problem so well.

The failure cases are those where AI solutions have to stay in a continuous debug-train-update mode. Then you have to think about the resources, both people and compute, needed to maintain such a solution.

Because of the way the world works, with its endemic nonstationarity, the debug-retrain-update cycle is a common state of affairs even in traditional stats and ML.
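A sketch of what that cycle often boils down to in practice: a drift check comparing a feature's training distribution against the live feed, here with scipy's two-sample KS test. The threshold, the synthetic data, and the retrain trigger are all illustrative assumptions:

    # Nonstationarity check: compare training vs. live feature distributions.
    import numpy as np
    from scipy.stats import ks_2samp

    def needs_retrain(train_feature: np.ndarray,
                      live_feature: np.ndarray,
                      alpha: float = 0.01) -> bool:
        """Flag drift when the samples are unlikely to share a distribution."""
        stat, p_value = ks_2samp(train_feature, live_feature)
        return p_value < alpha  # small p-value: distributions have drifted

    # Illustrative usage with synthetic data: the live feed has shifted.
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 5_000)
    live = rng.normal(0.5, 1.0, 5_000)  # mean shift, as in real feeds
    if needs_retrain(train, live):
        print("drift detected: schedule a debug-retrain-update cycle")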

menaerus 2 hours ago | parent [-]

I see. Let's take another example, and I hope I've understood you: imagine an AI model connected to all of your company's in-house data sources such as wiki, chat, Jira, emails, merge requests, Excel sheets, etc. - basically everything that can be deemed useful to query or to build business intelligence on top of. These sources generate more and more data every day, and by their nature it is more or less unstructured.

Yet we have such systems in place, and we don't have to retrain the model on the ever-growing data. It's just one example, but it suggests that models, at least for some purposes, don't have to be retrained continuously to keep working well.
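One common shape for such a system is retrieval rather than retraining: new documents are embedded and indexed as they appear, and the frozen model only sees the top matches at query time. A minimal sketch, where embed() is a hypothetical stand-in for whatever embedding service is in use:

    # Retrieval over growing company data; the model itself is never retrained.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Hypothetical stand-in for an embedding model/service call."""
        raise NotImplementedError

    class DocIndex:
        def __init__(self):
            self.docs, self.vecs = [], []

        def add(self, doc: str) -> None:
            # New wiki pages, Jira tickets, emails land here as they appear.
            self.docs.append(doc)
            self.vecs.append(embed(doc))

        def top_k(self, query: str, k: int = 5) -> list[str]:
            q = embed(query)
            vecs = np.stack(self.vecs)
            # Cosine similarity between the query and every stored document.
            sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
            return [self.docs[i] for i in np.argsort(sims)[::-1][:k]]

The retrieved snippets are pasted into the model's context window at query time, which is why the weights never need to change as the data grows.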

I also use a technique of explaining to the model something it has not seen before (based on the wrong answer it previously gave me), and it manages to revise its steps, whatever they are, so that it gives me the correct answer in the end. This also suggests that the capacity of these models is larger than what they have been trained on.
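That correction loop relies on in-context learning rather than weight updates: the explanation goes into the conversation, not into the model. A sketch of the loop, where ask() and looks_correct() are hypothetical stand-ins for the chat API and validation step actually in use:

    # In-context correction: feed the wrong answer back with an explanation;
    # the weights never change, only the prompt grows.
    def ask(messages: list[dict]) -> str:
        """Hypothetical chat-completion call; swap in your provider's API."""
        raise NotImplementedError

    def looks_correct(answer: str) -> bool:
        """Hypothetical domain-specific check on the model's answer."""
        raise NotImplementedError

    history = [{"role": "user", "content": "How do I parse this log format?"}]
    answer = ask(history)

    if not looks_correct(answer):
        history += [
            {"role": "assistant", "content": answer},
            {"role": "user", "content": "That's wrong because X; here is how "
                                        "the format actually works: ..."},
        ]
        answer = ask(history)  # often correct now, with no retraining at all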

srean 2 hours ago | parent [-]

Data science solutions are different in the sense that they rarely get done and dusted the way a sorting library might.

There's almost always something or other breaking. Did the nature of the data change? Did my upstream data feed change? Why is this small set of examples not working for this high-paying customer?

You would need resources to understand and fix these problems quarter after quarter.

A rich network of data dependencies can be a double-edged sword. Rarely are upstream code and data changes benign to the output of the layer you own.
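One cheap defence is a validation gate at the boundary of the layer you own, so an upstream change fails loudly instead of silently skewing your output. A small sketch; the schema and checks are purely illustrative:

    # Guard the layer you own: fail loudly when an upstream feed changes shape.
    EXPECTED_COLUMNS = {"user_id", "timestamp", "amount"}  # illustrative schema

    def validate_feed(rows: list[dict]) -> None:
        for i, row in enumerate(rows):
            missing = EXPECTED_COLUMNS - row.keys()
            if missing:
                raise ValueError(f"row {i}: upstream dropped columns {missing}")
            if row["amount"] < 0:
                raise ValueError(f"row {i}: negative amount - did the feed's "
                                 "semantics change?")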

There are two cases where AI solutions are a perfect fit. The first is when they are so good that they are fire-and-forget. The second is when your customer is a farmer, not a gardener: individual failing saplings mean little to him.

If a single misbehaving plant can cause commercially significant damage, then when choosing opaque tools you must consider the maintenance cost you may be signing up for.

Say I have a ton of historical data that is continuously being added to. It's a real temptation to replace the raw data with a model that uses fewer parameters than the raw data - lossy compression, in a sense. That can be a very bad idea. The data instances the model does not fit well may be the most important pieces of information, and a model paints with a broad brush stroke. If you are hunting faults, be aware that lossy compression can paper over exactly those. You are also potentially harming a future model that could have been trained on that data, because you threw away a decade of useful history just because storage costs were running high.
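To make the pitfall concrete, here's a sketch where a low-order fit replaces a raw series, and the very fault you might later hunt for lives only in the residuals that get discarded (all numbers synthetic):

    # Lossy "compression": store polynomial coefficients instead of raw data.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(1000.0)
    raw = 0.01 * t + rng.normal(0, 0.5, t.size)
    raw[700] += 8.0  # a fault/anomaly: exactly what you may need later

    coeffs = np.polyfit(t, raw, deg=2)   # the "model": 3 numbers vs. 1000
    reconstructed = np.polyval(coeffs, t)

    residuals = raw - reconstructed
    print(int(np.argmax(np.abs(residuals))))  # 700: the fault lives here
    # Keep the raw data (or at least the residuals); the broad-brush fit
    # papers over the very instances that matter most.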

No easy solution. The general recommendation would be to compress, but losslessly, simply because you don't know what may be valuable in the future. If that's impossible, then so be it; you eat that opportunity cost in the future, but you did your best.
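For scale, the lossless route is often simpler than it sounds - here's a sketch using the standard library's gzip; compression ratios depend entirely on the data, and the noisy synthetic series below is close to the worst case:

    # Lossless baseline: gzip from the standard library, no information lost.
    import gzip
    import numpy as np

    rng = np.random.default_rng(2)
    raw = (0.01 * np.arange(100_000)
           + rng.normal(0, 0.5, 100_000)).astype(np.float32)

    blob = gzip.compress(raw.tobytes(), compresslevel=9)
    print(len(raw.tobytes()), len(blob))  # bytes before vs. after

    restored = np.frombuffer(gzip.decompress(blob), dtype=np.float32)
    assert np.array_equal(raw, restored)  # every instance survives intact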