thefourthchime | 6 hours ago
As a serial DIYer, I respect the engineering depth here, especially the custom vector index, but I disagree with the self-hosted ML approach. Progress in embedding models is just too fast to keep up with locally without constant refactoring. You can actually see the trade-off in the "girl drinking water" example, where one result is a clear hallucination.
warangal | 6 hours ago | parent
Currently the semantic ML model is the weakest link: a minimally fine-tuned ViT-B/32 variant, acting more as a placeholder, i.e. very easy to swap for a desired model. (DINO models have been pretty great, being trained on a much larger and cleaner dataset; CLIP was one of the first image-text models!) On the "girl drinking water" point: "girl" is the tagged person's name, and "drinking water" just re-ranks all of "girl"'s photos, rather than finding all photos of a generic girl drinking water. I have been more focused on making the indexing pipeline performant by reducing copies and speeding up bottleneck portions by writing them in Nim. Fusing semantic features with metadata is the more interesting and challenging part, compared to choosing an embedding model!
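The tag-plus-query flow described above can be sketched roughly like this: first filter to the photos carrying the person tag, then rank that subset by cosine similarity between each image embedding and the text-query embedding. This is a minimal illustration with precomputed toy vectors; in the real pipeline the embeddings would come from the CLIP-style model (the function and variable names here are illustrative, not the project's actual API).

```python
import numpy as np

def rerank_by_query(photos, query_embedding):
    """Rank an already tag-filtered photo set by cosine similarity
    to a text-query embedding.

    photos: list of (photo_id, image_embedding) pairs whose embeddings
    were precomputed (e.g. by a CLIP-style image encoder).
    Returns photo ids, most similar first.
    """
    q = query_embedding / np.linalg.norm(query_embedding)
    scored = []
    for photo_id, emb in photos:
        e = emb / np.linalg.norm(emb)
        scored.append((float(q @ e), photo_id))  # cosine similarity
    return [pid for _, pid in sorted(scored, reverse=True)]

# Toy example: three photos tagged "girl"; the query stands in for
# the text embedding of "drinking water".
query = np.array([1.0, 0.0, 0.0, 0.0])
photos = [
    ("p1", np.array([0.0, 1.0, 0.0, 0.0])),  # orthogonal to query
    ("p2", np.array([0.9, 0.1, 0.0, 0.0])),  # close to query
    ("p3", np.array([0.1, 0.0, 1.0, 0.0])),  # weakly related
]
ranked = rerank_by_query(photos, query)
print(ranked)  # p2 ranks first: it is most aligned with the query
```

Because the tag filter runs first, a query like "drinking water" can only reorder that one person's photos; it never widens the search to generic matches, which is what makes the metadata-semantic fusion the harder problem.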