maciejzj 4 hours ago
AFAIK – at the most basic level, the input is a matrix with L rows (tokens) and d columns (embedding dimension). The input tokens are initially encoded as discrete IDs, then turned into embeddings by something like `torch.nn.Embedding`. The embedding layer can be thought of as a lookup table, but it is mathematically equivalent to multiplying a one-hot vector by a weight matrix learned through gradient descent (adjusted at train time, fixed at inference time). The embedding dimension d is fixed; L is not. If you check the matrix multiplication formulas for both the embedding layer and attention, you will notice that they work for any number of rows/tokens/L (this follows from linear algebra and the rules of matrix multiplication). The context limit is imposed by auxiliary factors: positional encoding and the model's overall ability to produce coherent output for very long inputs.

As for the meaning of the "bank" embedding: it cannot be interpreted directly, but you can run statistical analysis on embeddings (e.g. PCA). Loosely speaking, the embedding for "bank" contains all possible meanings of the word; the particular one is inferred not by the embedding layer, but by later attention operations that associate this token with the other tokens in the sequence (i.e. self-attention).
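To make the two claims above concrete – embedding lookup is equivalent to a one-hot matrix multiplication, and the attention shapes work for any sequence length L – here is a minimal NumPy sketch. All sizes are toy values, and the "attention" uses identity Q/K/V projections purely to show the shapes; it is an illustration, not a real transformer layer.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d = 10, 4                  # toy vocabulary size and embedding width
E = rng.normal(size=(vocab_size, d))   # learned embedding matrix (fixed at inference)

def embed(token_ids):
    # "Lookup table" view: pick rows of E by token ID...
    return E[token_ids]

def embed_matmul(token_ids):
    # ...which is exactly one-hot rows times E
    one_hot = np.eye(vocab_size)[token_ids]
    return one_hot @ E

def self_attention(X):
    # Simplified single-head attention (identity Q/K/V projections):
    # scores are (L, L), output is (L, d) -- no dependence on a fixed L.
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ X

for L in (3, 7):                       # two different sequence lengths, same weights
    ids = rng.integers(0, vocab_size, size=L)
    X = embed(ids)
    assert np.allclose(X, embed_matmul(ids))     # lookup == one-hot matmul
    assert self_attention(X).shape == (L, d)     # works for any L
```

Nothing in the matrix algebra caps L; in a real model the cap comes from the positional-encoding scheme and from what lengths the model was trained on.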
gushogg-blake 34 minutes ago | parent
This is exactly what I was looking for, thanks!