infecto 7 days ago

Multimodal LLMs are not the way to do this for a business workflow yet.

In my experience you're much better off starting with Azure Doc Intelligence or AWS Textract to first get the structure of the document (PDF). These tools are incredibly robust and do a great job with most of the common cases you can throw at them. From there you can use an LLM to interrogate and structure the data to your heart's delight.
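
A minimal sketch of that first step with boto3, assuming a single-page PDF and the FORMS/TABLES features (file name and region are placeholders; the extracted blocks then go into the LLM prompt):

    import boto3

    textract = boto3.client("textract", region_name="us-east-1")

    with open("bill.pdf", "rb") as f:
        doc_bytes = f.read()

    # The synchronous call handles single-page documents; multi-page PDFs go
    # through start_document_analysis / get_document_analysis instead.
    response = textract.analyze_document(
        Document={"Bytes": doc_bytes},
        FeatureTypes=["FORMS", "TABLES"],
    )

    # LINE blocks give the text in reading order; pass these (or the table
    # cells) to the LLM for interrogation and structuring.
    lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]
    print("\n".join(lines))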

disgruntledphd2 7 days ago | parent | next [-]

> AWS Textract to first get the structure of the document (PDF). These tools are incredibly robust and do a great job with most of the common cases you can throw at them.

Do they work for Bills of Lading yet? When I tested a sample of these bills a few years back (2022 I think), the results were not good at all. But I honestly wouldn't be surprised if they'd massively improved lately.

infecto 7 days ago | parent [-]

Have not used it on your docs, but I can say that it definitely works well with forms, and with forms containing tables like a Bill of Lading. It costs extra, but you need to turn on table extraction (at least in AWS). You can then get a markdown representation of the page including tables; you can of course pull out the table itself, but unless it's standardized you will need the middleman LLM figuring out the exact data/structure you are looking for.
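
For reference, a rough sketch of how the TABLE/CELL blocks from a FeatureTypes=["TABLES"] response can be rebuilt into rows before serializing them (e.g. as markdown) for the middleman LLM; the helper names are made up:

    # Rebuild Textract TABLE blocks into rows of cell text.
    def tables_from_blocks(blocks):
        by_id = {b["Id"]: b for b in blocks}

        def cell_text(cell):
            # A CELL block's CHILD relationships point at WORD blocks.
            words = []
            for rel in cell.get("Relationships", []):
                if rel["Type"] == "CHILD":
                    for cid in rel["Ids"]:
                        child = by_id[cid]
                        if child["BlockType"] == "WORD":
                            words.append(child["Text"])
            return " ".join(words)

        tables = []
        for block in blocks:
            if block["BlockType"] != "TABLE":
                continue
            # A TABLE block's CHILD relationships point at CELL blocks,
            # which carry RowIndex / ColumnIndex.
            rows = {}
            for rel in block.get("Relationships", []):
                if rel["Type"] == "CHILD":
                    for cid in rel["Ids"]:
                        cell = by_id[cid]
                        if cell["BlockType"] == "CELL":
                            rows.setdefault(cell["RowIndex"], {})[cell["ColumnIndex"]] = cell_text(cell)
            tables.append([
                [row[c] for c in sorted(row)]
                for _, row in sorted(rows.items())
            ])
        return tables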

disgruntledphd2 5 days ago | parent [-]

Huh, interesting. I'll have to try again next time I need to parse stuff like this.

IndieCoder 7 days ago | parent | prev [-]

Plus one, using the exact same setup to make it scale. If Azure Doc Intelligence gets too expensive, VLMs also work great.

vinothgopi 7 days ago | parent [-]

What is a VLM?

saharhash 7 days ago | parent [-]

Vision Language Model, like Qwen2-VL https://github.com/QwenLM/Qwen2-VL or ColPali https://huggingface.co/blog/manu/colpali
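
For the curious, a rough sketch of prompting Qwen2-VL for extraction via Hugging Face transformers (the model ID and prompt are illustrative, and the exact preprocessing can differ between transformers versions):

    from PIL import Image
    from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

    model = Qwen2VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

    image = Image.open("bill_of_lading.png")
    messages = [{
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Extract shipper, consignee and container numbers as JSON."},
        ],
    }]
    prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

    out = model.generate(**inputs, max_new_tokens=512)
    # Drop the echoed prompt tokens and decode only the generated answer.
    answer = processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
    print(answer)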

sidmo 5 days ago | parent [-]

VLMs are cool - they generate embeddings of the images themselves (as a collection of patches) and you can see query matching displayed as a heatmap over the document. Picks up text that OCR misses. Here's an open-source API demo I built if you want to try it out: https://github.com/DataFog/vlm-api
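
To make the patch-embedding idea concrete, here is a toy late-interaction (MaxSim) scorer in the ColPali style (not the API above); the per-token best-patch similarities are what the heatmap visualizes:

    import numpy as np

    # Each page is a grid of patch embeddings, the query is a sequence of token
    # embeddings; the score sums, over query tokens, each token's best-matching patch.
    def maxsim_score(query_emb, patch_emb):
        # query_emb: (num_query_tokens, dim), patch_emb: (num_patches, dim), L2-normalized
        sims = query_emb @ patch_emb.T      # (tokens, patches) cosine similarities
        return sims.max(axis=1).sum()       # MaxSim: best patch per token, summed

    rng = np.random.default_rng(0)
    q = rng.normal(size=(8, 128));    q /= np.linalg.norm(q, axis=1, keepdims=True)
    p = rng.normal(size=(1024, 128)); p /= np.linalg.norm(p, axis=1, keepdims=True)
    print(maxsim_score(q, p))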