cheschire 10 hours ago

I think at a certain scale we're talking about switching to locally trained models, which don't have the same operating costs as running a frontier model for OCR. That would reduce the ongoing costs significantly. It might take longer than 30 seconds to read each receipt if you run multiple passes to ensure accuracy, but it could run 24/7/365 without the tax and administration overhead of humans.
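The multi-pass idea could be as simple as a consensus vote over repeated reads of each field. A minimal sketch, where the three readings stand in for the output of a hypothetical local OCR model run three times:

```python
from collections import Counter

def consensus_field(readings):
    """Majority vote over repeated OCR reads of a single field.

    Returns the most common value and whether it won a strict
    majority of the passes (a crude confidence signal).
    """
    votes = Counter(readings)
    value, count = votes.most_common(1)[0]
    confident = count > len(readings) // 2
    return value, confident

# Hypothetical results of three OCR passes over a receipt total,
# where one pass misread a digit.
passes = ["$41.20", "$41.20", "$47.20"]
value, ok = consensus_field(passes)
print(value, ok)  # $41.20 True
```

Receipts where no value wins a strict majority could then be routed to a human reviewer, which is where the remaining labor cost would concentrate.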

Spherical cows aside though, I do agree with you that I should not consider scalability as a given.

wolfram74 9 hours ago

I suppose if we had access to a public data set like this receipt bank, programmers could time themselves setting up a solution with off-the-shelf OCR algos. If they could clock in at under 10 hours, they could advertise themselves as "just as good as an LLM, but significantly cheaper." The downside, for the managerial class that wants generative algos precisely for their complete lack of legal protections.
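The off-the-shelf version is plausibly just: run an OCR engine (e.g. Tesseract) over the image to get raw text, then extract fields with regexes. A sketch of the parsing half, assuming the OCR step has already produced the text; `parse_total` is a hypothetical helper, not any particular library's API:

```python
import re

def parse_total(ocr_text):
    """Pull the grand total from the raw OCR text of a receipt."""
    # Anchor to the start of a line so "SUBTOTAL" lines don't match.
    match = re.search(
        r"^\s*TOTAL\s*\$?\s*(\d+\.\d{2})",
        ocr_text,
        re.IGNORECASE | re.MULTILINE,
    )
    return float(match.group(1)) if match else None

# Stand-in for the text an OCR engine would emit for one receipt.
sample = """ACME GROCERY
MILK        3.49
BREAD       2.10
SUBTOTAL    5.59
TAX         0.45
TOTAL      $6.04"""
print(parse_total(sample))  # 6.04
```

Real receipts would need a pile of such patterns per vendor layout, which is exactly the setup time the 10-hour bet is about.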