twotwotwo | a day ago
Sort of trivial but fun thing: you can fit billions of concepts into this much space, too. Let's say four bits of each component of the vector are what matters, going by how some providers run fp4 inference and it doesn't entirely fall apart. So an fp4 dimension-12K vector takes up about 6KB: a few pages of UTF-8 text, more if compressed, or roughly 3K tokens from a 64K-token vocabulary (16 bits per token). How many possible multi-page 'thoughts' are there? A lot!

(And in handling one token, the layers give ~60 chances to mix in previous 'thoughts' via the attention mechanism, and to mix in stuff from training via the FFNs! You can start to see how this whole thing ends up able to convert your Bash to Python or do word problems.)

Of course, you don't expect it to be 100% space-efficient, detailed mathematical arguments aside. You want blending two vectors with different strengths to work well, and I wouldn't expect training to settle into the absolute most efficient way to pack the RAM available. But even if you think of this as an upper bound, it's a very different reference point for what 'ought' to be theoretically possible to cram into a bunch of high-dimensional vectors.
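A quick back-of-the-envelope sketch of the arithmetic above, assuming a hidden size of 12,288 (a round "12K") and 4 bits per component; the exact numbers depend on the model, so treat these as illustrative:

```python
import math

# Assumed sizes (not from any particular model): ~12K-dimensional
# residual-stream vector, quantized to 4 bits per component (fp4).
dims = 12_288
bits_per_component = 4

vector_bits = dims * bits_per_component
vector_bytes = vector_bits // 8
print(f"one vector: {vector_bytes} bytes (~{vector_bytes / 1024:.0f} KiB)")

# Same budget expressed as tokens from a 64K-entry vocabulary:
# log2(65536) = 16 bits per token.
bits_per_token = math.log2(65_536)
tokens_equiv = vector_bits / bits_per_token
print(f"equivalent text: ~{tokens_equiv:.0f} tokens at {bits_per_token:.0f} bits/token")

# Loose upper bound on distinct vectors: 2^4 possible codes per component,
# so 2^(4 * dims) states total. Report it as a power of ten.
distinct_log10 = vector_bits * math.log10(2)
print(f"distinct fp4 vectors: ~10^{distinct_log10:.0f}")
```

With those assumptions this prints ~6 KiB per vector, ~3K tokens of equivalent text, and something on the order of 10^14796 distinct fp4 vectors, which is the sense in which "a lot" is an understatement even before accounting for inefficiency.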