visarga a day ago
You have no idea how much personal work went into it. You just suspect it was worded with an LLM. I have been using embeddings for almost a decade and am well versed in their intricacies. I think this article has merit. The direction of the investigation and the conclusion are interesting; it's good to have people thinking about how many distinct concepts can be packed into our usual embedding dimensions. Wondering how small you can make the embedding before a model becomes noticeably worse, given a constant parameter count.
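To make that "constant parameter count" tradeoff concrete, here's a rough back-of-the-envelope sketch (my own toy numbers, not from the article): using the standard approximation of ~12·d_model² parameters per transformer layer (4·d² for attention projections, 8·d² for a 4x FFN) plus a 50k-token embedding table, shrinking d_model at a fixed budget buys you extra layers.

    # Back-of-the-envelope: how many layers fit in a fixed parameter budget
    # as d_model shrinks. The 12*d^2 per-layer and 50k vocab figures are
    # common rules of thumb, assumed here for illustration.
    def layers_for_budget(total_params: float, d_model: int, vocab_size: int = 50_000) -> float:
        embed = vocab_size * d_model        # embedding table
        per_layer = 12 * d_model ** 2       # attention (4d^2) + 4x FFN (8d^2)
        return (total_params - embed) / per_layer

    budget = 125e6  # roughly GPT-2-small scale
    for d in (768, 512, 384, 256):
        print(f"d_model={d}: ~{layers_for_budget(budget, d):.1f} layers")

At 125M parameters you go from ~12 layers at d_model=768 to ~140 at d_model=256, so the interesting question is at what point the narrow residual stream, rather than the total capacity, becomes the bottleneck.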
dingnuts a day ago
The complaint was that the post has a lot of basic inconsistencies, which is a problem regardless. If your content is as bad as AI slop, it doesn't really matter whether it is or not; but I think it's safe to assume that when a verbose and grandiose post is internally inconsistent and was written after 2022, it's slop.[0]

[0] https://pyxis.nymag.com/v1/imgs/f0e/0bb/d9346e02d8d7173a6a9d...