numlocked 4 days ago

I hear you, but the article is talking specifically about "embeddings as a product" -- not the embeddings within an LLM architecture. It starts:

> As a quick review, embeddings are compressed numerical representations of a variety of features (text, images, audio) that we can use for machine learning tasks like search, recommendations, RAG, and classification.

Current standalone embedding models are not intrinsically connected to SotA LLM architectures (e.g. the Qwen reference) -- right? The article seems to conflate the two ideas.
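To make the "embeddings as a product" sense concrete: the quoted definition is about vectors you get from a standalone model and then compare for search or RAG. A minimal sketch, with made-up vectors standing in for a real model's output:

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard similarity measure for comparing embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical document embeddings -- in practice these would come from an
# embedding model's API, not be hand-written.
docs = {
    "doc_a": np.array([0.9, 0.1, 0.0]),
    "doc_b": np.array([0.1, 0.9, 0.2]),
}
query = np.array([0.8, 0.2, 0.1])

# Rank documents by similarity to the query, as a search system would.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # doc_a is the closest match
```

Nothing in that loop touches an LLM's internal layers, which is the distinction the article blurs.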