▲ prmph a day ago
Agreed: if you relax the requirement for perfect orthogonality, then yes, you can pack in much more information. You have basically introduced additional (fractional) dimensions clustered around the main dimensions. Put another way, many concepts are not orthogonal but share some commonality or correlation. So nothing earth-shattering here. The article is also filled with words like "remarkable", "fascinating", "profound", etc., which makes me feel like some level of subliminal manipulation is going on. Maybe some use of an LLM?
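The quasi-orthogonality point is easy to demonstrate (a sketch, not from the article; the dimension and count are arbitrary choices): in a d-dimensional space you can fit only d exactly orthogonal directions, but far more than d random unit vectors will coexist with near-zero pairwise cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256    # embedding dimension
n = 2000   # many more vectors than dimensions

# Draw n random unit vectors in R^d
V = rng.standard_normal((n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Pairwise cosine similarities, excluding each vector with itself
sims = V @ V.T
off_diag = np.abs(sims[~np.eye(n, dtype=bool)])

# Typical pairwise |cosine| concentrates around 1/sqrt(d) ~= 0.06,
# so 2000 directions are all "almost orthogonal" in 256 dimensions.
print(f"mean |cos|: {off_diag.mean():.3f}")
print(f"max  |cos|: {off_diag.max():.3f}")
```

The pairwise similarities stay small but nonzero, which is exactly the "fractional dimensions" trade-off: more directions, slightly correlated.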
▲ gpjanik a day ago | parent
It's... really not what I meant. That requirement doesn't have to be relaxed; it doesn't exist at all. Semantic similarity in embedding space is a convenient accident, not a design constraint. The model's real "understanding" emerges from the full forward pass, not from the embedding geometry.