qaq 4 days ago
Sorry, maybe I should've been clearer that it was a sarcastic remark. The whole point of doing a vector DB search is to feed the LLM very targeted context so you can save $ on API calls to the LLM.
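Roughly, the flow looks like this (a toy sketch, not anyone's actual setup: embed() is a stand-in for a real embedding model, and the documents and names are made up):

```python
# Embed the query, pull only the top-k most similar chunks from a local
# "vector store", and send just those chunks to the LLM instead of the
# whole corpus. That keeps the prompt (and the token bill) small.
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy deterministic "embedding" based on character hashing;
    # a placeholder for a real embedding model or API.
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[(ord(ch) + i) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

documents = [
    "Invoices are processed nightly by the billing service.",
    "The auth service issues JWTs that expire after 15 minutes.",
    "Vector search narrows the context before calling the LLM.",
]
index = [(doc, embed(doc)) for doc in documents]  # the "vector DB"

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

query = "How long do auth tokens last?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` is what actually gets sent to the LLM API, a handful of chunks
# rather than the entire document set.
print(prompt)
```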
heywoods 2 days ago
No worries. I should probably make sure I have at least a token understanding of the topic (cloud-based architecture) before commenting next time, haha.
infecto 4 days ago
That’s not the whole point. It’s the intersection of reducing the tokens sent and getting a search that’s both specific and general enough to capture the correct context data.