qaq 4 days ago

"I recently spoke with the CTO of a popular AI note-taking app who told me something surprising: they spend twice as much on vector search as they do on OpenAI API calls. Think about that for a second. Running the retrieval layer costs them more than paying for the LLM itself. That flips the usual assumption on its head." Hmm well start sending full documents as part of context see it flip back :).

heywoods 4 days ago | parent | next

Egress costs? I’m really surprised by this. Thanks for sharing.

qaq 4 days ago | parent | next

Sorry, maybe I should've been clearer: it was a sarcastic remark. The whole point of doing vector DB search is to feed the LLM very targeted context so you can save $ on API calls to the LLM.
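
A toy sketch of that flow (numpy cosine similarity standing in for a real vector DB, and the embedding step faked with random vectors so it runs as-is; the chunks and dimensions are made up):

    import numpy as np

    # Pretend each chunk was embedded offline by an embedding model.
    chunks = ["refund policy ...", "shipping times ...", "api rate limits ..."]
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(len(chunks), 384))   # stand-in vectors
    embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

    def retrieve(query_vec, k=2):
        """Return the k most similar chunks instead of the whole corpus."""
        query_vec = query_vec / np.linalg.norm(query_vec)
        scores = embeddings @ query_vec                # cosine similarity
        top = np.argsort(scores)[::-1][:k]
        return [chunks[i] for i in top]

    query_vec = rng.normal(size=384)                   # would come from an embedding model
    context = "\n".join(retrieve(query_vec))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
    # Sending `prompt` instead of every document is where the token savings come from.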

heywoods 2 days ago | parent | next

No worries. I should probably make sure I have at least a token understanding of the topic of cloud-based architecture before commenting next time, haha.

infecto 4 days ago | parent | prev

That’s not the whole point. It’s the intersection of reducing the tokens sent and making the search both specific and general enough to capture the correct context data.
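
One way to hit that balance (a sketch with made-up thresholds, not anyone's production setup): over-fetch candidates, then keep only chunks that clear a relevance floor and fit a token budget:

    def select_context(candidates, min_score=0.3, token_budget=2000):
        """candidates: list of (chunk_text, similarity), sorted by score descending.
        Keep chunks that clear a relevance floor, stopping at a token budget,
        so the context stays specific (no low-score noise) but broad enough."""
        kept, used = [], 0
        for text, score in candidates:
            if score < min_score:
                break                      # everything after is even less relevant
            cost = len(text) // 4          # rough chars-per-token heuristic
            if used + cost > token_budget:
                continue                   # skip oversized chunk, try smaller ones
            kept.append(text)
            used += cost
        return kept

    candidates = [("chunk about refunds", 0.82), ("vaguely related", 0.41), ("noise", 0.12)]
    print(select_context(candidates))      # -> only the chunks above the floor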

j45 3 days ago | parent

It's possible to create linking documents between the documents to help smooth things out in some cases.

andreasgl 3 days ago | parent | prev

They’re likely using an HNSW index, which typically requires a lot of memory for large data sets.
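
A back-of-envelope estimate of why (numbers are illustrative, and the ~8 bytes per graph link is an assumption): both the raw vectors and the neighbor graph have to sit in RAM for fast search:

    # Back-of-envelope HNSW memory estimate (illustrative numbers only).
    n_vectors = 100_000_000        # 100M chunks
    dims = 1536                    # a common embedding size
    bytes_per_float = 4            # float32
    links_per_node = 32            # typical HNSW M parameter, ~8 bytes/link assumed

    vector_bytes = n_vectors * dims * bytes_per_float
    graph_bytes = n_vectors * links_per_node * 8
    print(f"vectors: {vector_bytes / 1e9:.0f} GB, graph: {graph_bytes / 1e9:.0f} GB")
    # -> vectors: 614 GB, graph: 26 GB, all of it RAM-resident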

dahcryn 3 days ago | parent | prev | next

If they use AzureSearch, I fully understand it. Those things are hella expensive.
