rafram | 6 days ago
LLM/VLM-based OCR is highly prone to hallucination: the model does not know when it can't read a piece of text, it can't estimate its own confidence, and it deals with fuzzy or unclear text by simply making things up. I would be very nervous about using it for anything critical.
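To make the confidence point concrete: a classical engine like Tesseract reports a per-word confidence score, so uncertain words can be flagged for human review instead of silently guessed at. A rough sketch of what that looks like (assuming pytesseract and Pillow are installed and a Tesseract binary is on the PATH; the threshold of 60 is an arbitrary illustrative cutoff):

    # Tesseract exposes per-word confidence; low-confidence words can be
    # flagged for review rather than guessed at. An LLM-based OCR call
    # returns only text, with no equivalent per-word signal.
    import pytesseract
    from PIL import Image

    def ocr_with_confidence(path, threshold=60.0):
        data = pytesseract.image_to_data(
            Image.open(path), output_type=pytesseract.Output.DICT
        )
        flagged = []
        for word, conf in zip(data["text"], data["conf"]):
            conf = float(conf)  # -1 marks boxes with no recognized text
            if word.strip():
                flagged.append((word, conf, conf < threshold))
        return flagged  # (word, confidence, needs_review) triples

Each triple tells you which words the engine itself was unsure about, which is exactly the signal a vision-language model can't give you.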
paulsutter | 6 days ago
There are really amazing products coming.