All LLMs use embeddings; the difference is that embedding models stop there, while for a full chat/completion model the embedding lookup is only the first step of the process. Embeddings are coordinates in the latent space of the transformer.
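A minimal sketch of this, using Hugging Face transformers (the model name and mean pooling are just example choices, not anything prescribed here): the same token-to-vector lookup is the first layer of any transformer, and an embedding model simply stops after encoding instead of predicting the next token.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Example embedding model; a chat model would share the same first step.
model_name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Embeddings are coordinates in latent space.",
                   return_tensors="pt")

# Step 1, shared by all transformer LLMs: token ids -> embedding vectors.
token_embeddings = model.get_input_embeddings()(inputs["input_ids"])
print(token_embeddings.shape)  # (1, seq_len, hidden_dim)

# An embedding model runs the transformer and stops, pooling the final
# hidden states into one vector. A chat/completion model would instead
# feed them into a language-modeling head to predict the next token.
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
sentence_embedding = hidden.mean(dim=1)  # mean pooling, one common choice
print(sentence_embedding.shape)  # (1, hidden_dim)
```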