Is there a way to run an LLM as a better local search engine?
8 points by oblio 4 days ago | 19 comments
Basically, I was thinking that one way I could actually use LLMs would be to point one at my hard drive, with hundreds of images, PDFs, XLSes, and other random files, and start asking it questions to easily find things in there. Can a local LLM run OCR software on its own? I'm on Windows, if it matters. Is there anything like that out there already, (mostly) built?
didgetmaster 3 days ago
Local hard drives are big enough these days to hold hundreds of millions of files. That means you could have many millions of each type of file (documents, pictures, spreadsheets, videos, etc.). We need better ways to properly classify all that data with accurate metadata, and quick ways to pick out a small subset of it for AI to analyze. Databases were designed to do this with relational tables (e.g. find all customers from New York who bought our product in 2024), but file systems were not designed to do this with files (e.g. find all pictures I took with my phone in 2020). AI can be a great tool for finding patterns in files and answering important questions, but it will be incredibly slow if it has to analyze too many files for every query.
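A minimal sketch of that database-over-file-metadata idea in Python, assuming only the standard library; the root path and the example query are placeholders:

    # Index basic file metadata into SQLite so queries like
    # "all JPEGs modified in 2020" become SQL instead of a full-disk crawl.
    import os
    import sqlite3
    import time

    conn = sqlite3.connect("file_index.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS files (
        path TEXT PRIMARY KEY, ext TEXT, size INTEGER, mtime REAL)""")

    for dirpath, _, filenames in os.walk(r"C:\Users\me\Documents"):  # placeholder root
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                st = os.stat(full)
            except OSError:
                continue  # skip unreadable files
            ext = os.path.splitext(name)[1].lower()
            conn.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                         (full, ext, st.st_size, st.st_mtime))
    conn.commit()

    # e.g. every .jpg last modified in 2020
    start = time.mktime((2020, 1, 1, 0, 0, 0, 0, 0, -1))
    end = time.mktime((2021, 1, 1, 0, 0, 0, 0, 0, -1))
    for (path,) in conn.execute(
            "SELECT path FROM files WHERE ext = '.jpg' AND mtime BETWEEN ? AND ?",
            (start, end)):
        print(path)

Once the metadata lives in a table, "all pictures from 2020" becomes an index lookup instead of a disk crawl, and only the matching subset needs to go to the model.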
Agraillo 4 days ago
What you described would be a great solution for plenty of tasks, but even tackling some common fallacies one at a time would help. For example, we're sure that by placing a properly named file at a directory location, we'll later find it by recalling the folder name or the file name itself; in reality we're often surprised that, months or years later, this doesn't work: the expected path either doesn't exist or doesn't contain what we're looking for. The same fallacy applies to the various hierarchical note organizers. Here, LLMs, with their ability to find semantic equivalence, might be a great help. Given the current state of affairs, I even think an LLM with a sufficiently large context window could absorb some kind of file system dump, with directory paths and file names, and answer a question about some obscure file from the past.
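A rough sketch of that dump-and-ask idea, assuming a local model served through Ollama's HTTP API; the model name and root path are placeholders, and the listing is capped so it fits in a context window:

    # Dump directory paths/file names and ask a local LLM about them.
    import os
    import requests

    def tree_dump(root, max_entries=5000):
        """Collect relative file paths under root, capped to fit a context window."""
        entries = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                entries.append(os.path.relpath(os.path.join(dirpath, name), root))
                if len(entries) >= max_entries:
                    return entries
        return entries

    listing = "\n".join(tree_dump(r"C:\Users\me"))  # placeholder root
    question = "Which file is most likely my 2019 tax return?"
    resp = requests.post("http://localhost:11434/api/generate", json={
        "model": "llama3.1",  # placeholder; any local model
        "prompt": f"File listing:\n{listing}\n\nQuestion: {question}",
        "stream": False,
    })
    print(resp.json()["response"])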
android521 a day ago
First, have a good file search system. Then the LLM is just the middleman that translates human language into search commands and translates the results back into human language.
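A toy sketch of that middleman pattern, again assuming a hypothetical local Ollama endpoint; a real setup would hand the pattern to a proper search index rather than walking the tree:

    # The model turns a natural-language question into a plain search
    # pattern, and ordinary tooling does the actual search.
    import pathlib
    import requests

    question = "find the spreadsheet with last year's budget"
    resp = requests.post("http://localhost:11434/api/generate", json={
        "model": "llama3.1",  # placeholder
        "prompt": "Turn this request into a single case-insensitive filename "
                  f"substring to search for, and output only that substring: {question}",
        "stream": False,
    })
    needle = resp.json()["response"].strip().lower()

    # Naive scan as a stand-in for a real search index (e.g. Everything).
    for path in pathlib.Path(r"C:\Users\me").rglob("*"):  # placeholder root
        if needle in path.name.lower():
            print(path)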
DHRicoF 4 days ago
You need to provide more information. Is your data organized, or is it just a dump of unrelated content?

- If you have a bag of files without any metadata, the best option is to build something like a RAG pipeline, with a pre-OCR step for image files (or even a multimodal model call; see the sketch below).
- If the content is well organized with a logical structure, an agent could extract information with a little looking around.

Is it static, or does it vary day by day?

- If it's static you could index everything at once; if not, an agent that picks what to reindex would be a better call.

I'm not aware of an existing solution like this, but it seems doable as an MCP server. The cost will scale quickly, though.
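For the pre-OCR step, a minimal sketch assuming the Tesseract binary is installed along with the pytesseract and Pillow packages (the root folder is a placeholder):

    # Extract text from images before indexing, so they can be searched
    # like any other document.
    import pathlib
    from PIL import Image
    import pytesseract

    def ocr_images(root):
        """Yield (path, extracted_text) for each image under root."""
        for path in pathlib.Path(root).rglob("*"):
            if path.suffix.lower() in {".png", ".jpg", ".jpeg", ".tiff"}:
                try:
                    text = pytesseract.image_to_string(Image.open(path))
                except OSError:
                    continue  # unreadable or corrupt image
                yield path, text

    for path, text in ocr_images(r"C:\Users\me\Pictures"):  # placeholder root
        print(path, text[:80])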
msgodel 4 days ago
Multimodal Qwen is pretty good at OCR, although it's pretty slow without a GPU. For pure search you're almost certainly better off building an index of CLIP embeddings and then doing cosine similarity with a query embedding to find things. I have gigabytes of reaction images and memes I've been thinking about doing this with.
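A sketch of that CLIP approach using sentence-transformers, which ships CLIP checkpoints that can encode both images and text into the same space; the folder and query are made up:

    # Embed images once, then find them with a text query via cosine similarity.
    import pathlib
    from PIL import Image
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("clip-ViT-B-32")

    paths = list(pathlib.Path("memes").rglob("*.jpg"))  # placeholder folder
    img_emb = model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

    query_emb = model.encode(["surprised cat reaction image"], convert_to_tensor=True)
    scores = util.cos_sim(query_emb, img_emb)[0]

    # Print the top 5 matches
    for score, path in sorted(zip(scores.tolist(), paths), reverse=True)[:5]:
        print(f"{score:.3f}  {path}")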
maxcomperatore 4 days ago
I've been working on something like this for my own stuff. My drive has screenshots, PDFs, md files, invoices, and random logs, and I always forget what I named things years ago. What helped me:

- ran OCR on images with tesseract (slow, but it works)
- used unstructured and langchain to parse and chunk everything, even spreadsheets and emails
- embedded the chunks with sentence-transformers and indexed them with faiss (rough sketch below)
- built a local LLM agent (a quantized Mistral model) to rerank results smartly

It's rough, but it works like a semantic grep for your whole disk. If you want less DIY, paperless-ng plus AnythingLLM plus a lightweight embedding model could work. Or wait some months and someone will wrap it all in an Electron app with Stripe on the homepage, lol. Funny how much time we spend trying to find stuff we already wrote.
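A rough sketch of the embed-and-index step from the list above, with naive fixed-size chunking standing in for unstructured/langchain; the model name and query are just examples:

    # Chunk text, embed with sentence-transformers, index with FAISS,
    # then query semantically.
    import faiss
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    docs = ["..."]  # text extracted from your files (OCR output, parsed PDFs, etc.)
    chunks = [doc[i:i + 500] for doc in docs for i in range(0, len(doc), 500)]

    emb = model.encode(chunks, normalize_embeddings=True)
    index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on unit vectors
    index.add(np.asarray(emb, dtype="float32"))

    query = model.encode(["invoice from the plumber"], normalize_embeddings=True)
    scores, ids = index.search(np.asarray(query, dtype="float32"),
                               k=min(5, len(chunks)))
    for score, i in zip(scores[0], ids[0]):
        print(f"{score:.3f}  {chunks[i][:80]}")

The reranking step would then feed these top hits back to the local model with the original question, which is cheap because only a handful of chunks go into the prompt.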
aliasmaya 4 days ago
Seems like you're looking for a RAG system; you could give RAGFlow a try.
Iolaum 4 days ago
Have you tried asking an LLM this question? :p
cranberryturkey 4 days ago
Local as in on your filesystem?