▲ barbazoo 4 hours ago
> What that means is that when you're looking to build a fully local RAG setup, you'll need to substitute a local option for each of the SaaS providers you're using for those components. Even starting with "just" the documents and the vector DB locally is a huge first step, and much more doable than going with a local LLM at the same time. I don't know anyone, or any org, that has the resources to run their own LLM at scale.
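The component-by-component substitution can be sketched end to end. Below is a toy retrieval pipeline, stdlib only: the hashing "embedder" is a stand-in for a real local embedding model, and the in-memory list of `(doc, embedding)` pairs is a stand-in for a vector DB. All names here are hypothetical illustration, not any particular library's API.

```python
import hashlib
import math

DIM = 256  # toy embedding dimension

def embed(text: str) -> list[float]:
    """Toy hashing embedder; a real setup would call a local embedding model."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        # Hash each token into a bucket (deterministic, no model needed).
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Stand-in "vector DB": an in-memory list of (doc, embedding) pairs.
docs = [
    "HNSW indexes live in RAM",
    "SaaS embedding APIs send your text off-box",
    "Local LLMs can run on consumer GPUs",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    scored = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

print(retrieve("hnsw index ram"))
```

In a fully local stack, the retrieved passages would then be stuffed into a prompt for a locally hosted model; each stand-in above gets swapped for its real local counterpart one at a time.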
▲ mips_avatar 3 hours ago
It’s also extremely viable to host your own vector DB: you just need a server with enough RAM to hold your HNSW index.
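The "enough RAM" part is easy to estimate. A rough back-of-envelope sketch, assuming float32 vectors and an hnswlib-style index where each element stores 2*M layer-0 neighbor links of 4 bytes each (upper layers add a small overhead that is ignored here):

```python
def hnsw_ram_bytes(n_vectors: int, dim: int, m: int = 16) -> int:
    """Rough RAM estimate for an HNSW index (approximation, not exact internals).

    Per vector:
      - float32 data: dim * 4 bytes
      - layer-0 graph: 2 * m neighbor ids * 4 bytes each
    """
    per_vector = dim * 4 + 2 * m * 4
    return n_vectors * per_vector

# Example: 10M 768-dim embeddings with M=16.
gb = hnsw_ram_bytes(10_000_000, 768) / 1024**3
print(f"~{gb:.1f} GiB")  # roughly 30 GiB, i.e. one beefy but ordinary server
```

The estimate undercounts real-world usage (allocator overhead, upper layers, deleted-element slack), but it shows why a single machine with a few hundred GiB of RAM covers most corpora.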
▲ procaryote 3 hours ago
Aren't there a bunch of models that run OK on consumer hardware now? | ||||||||