geoffbp 5 hours ago
I dug into this a bit (with AI ofc) and it spat this out. I found it an easy way to visualise and start to understand:

> Standard AI models (like GPT-4) treat data using Global Geometry. They imagine every word as a point floating in a massive, flat, high-dimensional room. To see how two words relate, they draw a straight line between them.

> Local Topology changes the "room" into a landscape (a manifold). Instead of a flat void, the data exists on a curved surface that has hills, valleys, and paths.
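To make that distinction concrete, here's a minimal sketch (my own toy example, not from the article; the spiral data and the 5-nearest-neighbour graph are assumptions I picked for illustration). Straight-line distance between two embedding points is the "flat room" view; the shortest path through a graph of nearby points approximates distance along the "landscape", in the style of Isomap:

    # Toy illustration (my own, not the article's method): straight-line
    # ("global geometry") vs. graph-geodesic ("local topology") distance.
    import numpy as np
    from scipy.sparse.csgraph import shortest_path
    from sklearn.neighbors import kneighbors_graph

    # Points on a curved 1-D manifold (a spiral) embedded in 2-D space.
    t = np.linspace(0, 3 * np.pi, 200)
    X = np.column_stack([t * np.cos(t), t * np.sin(t)])

    a, b = 0, 199  # the two ends of the spiral

    # "Flat room": one straight line through the ambient space.
    euclidean = np.linalg.norm(X[a] - X[b])

    # "Landscape": hop between nearby points, staying on the curved surface.
    graph = kneighbors_graph(X, n_neighbors=5, mode="distance")
    geodesic = shortest_path(graph, method="D", directed=False)[a, b]

    print(f"straight line: {euclidean:.2f}, path along the manifold: {geodesic:.2f}")

On the spiral, the two endpoints come out close in the flat room but far apart along the landscape, which is roughly the gap the two metaphors are pointing at.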
xtiansimon 5 hours ago | parent
What is a "high-dimensional room"? A "room" is by definition three-dimensional in so far as we're using metaphor for description. Then to add this "high-dimensional" modifier does little for me, since the only visualizable high-dimensional cube is a tesseract, which still leaves you at 4-d. The presented counterpoint to this metaphor has the "room" change into a "landscape". The room is a "flat void" compared to a landscape with "hills, valleys, and paths". None of these landscape features evoke higher dimensionality in my imagination. Certainly not in the way, say, the metaphor of the "coastline" of Great Britain does when discussing the unusual properties of a fractal. These moves don't shift my railroad mind from one track onto another. So I wonder, if a metaphoric usage is not in some way universal, how can it be instructive? | ||