wongarsu 5 hours ago
I just tried it on qwen3-embedding:8b with a little vibe-coded 100-line script that does the obvious linear math and compares the result to the embeddings of a couple of candidate words using cosine similarity, and it did prefer the expected words. With the same 22 candidates for both questions:

    king - man + woman ≈ queen (0.8510)
    Berlin - Germany + France ≈ Paris (0.8786)

Sure, 0.85 is not an exact match, so things are not exactly linear, and if I dump an entire dictionary in there it might be worse, but the idea very much works.
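For anyone curious, here's a minimal sketch of what such a script might look like. It assumes a local Ollama server and its /api/embed endpoint; the candidate list is a hypothetical stand-in for my 22 candidates:

    import numpy as np
    import requests

    URL = "http://localhost:11434/api/embed"  # local Ollama embedding endpoint
    MODEL = "qwen3-embedding:8b"

    def embed(texts):
        # Returns one embedding vector per input string
        r = requests.post(URL, json={"model": MODEL, "input": texts})
        r.raise_for_status()
        return np.array(r.json()["embeddings"])

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Stand-in candidates; the real script used the same 22 for both questions
    candidates = ["queen", "prince", "princess", "paris", "berlin", "rome"]
    cand_vecs = embed(candidates)

    # The obvious linear math: king - man + woman
    king, man, woman = embed(["king", "man", "woman"])
    target = king - man + woman

    # Rank candidates by cosine similarity to the target vector
    ranked = sorted(zip(candidates, cand_vecs),
                    key=lambda wv: cosine(target, wv[1]), reverse=True)
    for word, vec in ranked:
        print(f"{word}: {cosine(target, vec):.4f}")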
Edit: after running a 100k wordlist through qwen3-embedding:0.6b, the closest matches are still the expected ones, so clearly throwing a dictionary at it doesn't break it. The next closest matches got a lot more interesting too: for example, the four closest matches for london - england + france are (in order) paris, strasbourg, bordeaux, marseilles.