what-the-grump 14 hours ago
Build a RAG over a significant amount of text, extract chunks by keyword (topic, place, date, name, etc.), and you realize it's nonsense: the LLM is not smart enough to figure out much without a reranker and a ton of technology telling it what to do with the data. You can run any vector query against a RAG and you are guaranteed a response, even with chunks that are unrelated in any way.
electroglyph 10 hours ago | parent
Unrelated in any way? That's not normal. Have you tested the model to make sure you have sane output? Unless you're using sentence-transformers (which is pretty foolproof), you have to be careful about how you pool the raw output vectors.
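For context on the pooling point: an encoder gives you one vector per token, and you have to collapse those into a single sentence embedding, usually by a masked mean so padding tokens don't pollute the average. Here is a minimal numpy sketch of masked mean pooling; the hidden states and attention mask are made-up toy values, not output from a real model.

```python
import numpy as np

# Toy "last hidden state": 3 tokens x 2 dims, where the last token is padding.
# (Values are illustrative; in practice these come from a transformer encoder.)
hidden = np.array([
    [0.2, 0.4],
    [0.6, 0.8],
    [0.0, 0.0],  # padding token
])
mask = np.array([1, 1, 0])  # attention mask: 1 = real token, 0 = padding

# Masked mean pooling: average embeddings over real tokens only.
pooled = (hidden * mask[:, None]).sum(axis=0) / mask.sum()
print(pooled)  # → [0.4 0.6]
```

Skipping the mask (or mean-pooling when the model expects CLS pooling) is a classic way to get embeddings that look valid but retrieve unrelated chunks.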