keeeba 5 days ago
What use-cases do you see for the 270M's embeddings? Should we stick to token embeddings, or can we meaningfully pool them into sentence/document embeddings? And do we need to fine-tune before the pooled embeddings are meaningful at the sentence/document level?
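For reference, the usual no-fine-tuning baseline people try first is attention-mask-weighted mean pooling over the last hidden states. A minimal numpy sketch of just the pooling step (shapes are illustrative; in practice `hidden_states` would come from the model's forward pass and `attention_mask` from the tokenizer):

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    """Average token embeddings, ignoring padded positions.

    hidden_states: (batch, seq_len, dim) float array
    attention_mask: (batch, seq_len) array of 0/1
    """
    mask = attention_mask[..., None].astype(hidden_states.dtype)  # (batch, seq_len, 1)
    summed = (hidden_states * mask).sum(axis=1)                   # sum of real tokens
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                # avoid divide-by-zero
    return summed / counts

# toy example: one sequence of 3 tokens where the last position is padding
h = np.array([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
m = np.array([[1, 1, 0]])
print(mean_pool(h, m))  # → [[2. 3.]]  (padded token excluded)
```

Whether the pooled vectors are useful out of the box depends a lot on the model; base LMs often give mediocre sentence embeddings without a contrastive fine-tune on top.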