aakresearch 8 hours ago

I am in agreement with many commenters here (https://news.ycombinator.com/item?id=47158240, https://news.ycombinator.com/item?id=47158573 and others) that this article is a clear illustration of a failure on the part of AI to capture the structure of material in a useful way. As the article addresses, the effect is very visible in the visual space of 3D modeling. I would argue it is very much present in the LLM space too, just less prominently, due to certain properties of the medium: text-based language. I also believe the effect is fundamental, rooted in the design of those models.

I'll leave here a note I wrote down recently while thinking about this fundamental limitation.

- The relationship between sentient/human thinking and its expression ("language") is similar to the one between abstract/"vector" image specification and its rendered form (which is necessarily pixel-based/rasterised)
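As a rough sketch of that analogy (a made-up toy, not anything from the article): a "vector" line is an exact, resolution-independent specification, while rasterising it onto a pixel grid is a lossy, one-way rendering step.

```python
# Toy illustration of vector vs. raster: the "vector" form of a line is
# four numbers (its endpoints); the "raster" form is a grid of pixels
# from which the exact endpoints can no longer be recovered.
def rasterize_line(x0, y0, x1, y1, size):
    """Naive rasteriser: sample the parametric line at many points
    and mark the nearest pixel on a size x size grid."""
    grid = [[0] * size for _ in range(size)]
    steps = size * 4
    for i in range(steps + 1):
        t = i / steps
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        grid[y][x] = 1
    return grid

# The same abstract line, "rendered" at two different resolutions --
# two different rasterisations of one underlying vector specification.
coarse = rasterize_line(0, 0, 7, 3, 8)
fine = rasterize_line(0, 0, 70, 30, 80)
```

The point of the toy: going from vector to raster is mechanical, while going back requires reconstructing structure that the raster form never stored explicitly.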

- A "truly reasoning" system operates in the abstract/"vector" space, only "rendering" into "raster" space for communication purposes. Today's LLMs, by their very design, operate entirely in the "raster" space of (linguistic) "tokens". From an outside point of view, though, the two are superficially indistinguishable.

- Today's LLMs are a brute-force mechanism, made possible by the availability of sheer computing power and ample training material.

- The whole premise of LLMs ("Large" and "Language" being the load-bearing words here) is that they completely bypass the need to formalize the "vector" part, to conceptualize it in a useful manner. I call this the "raster-vector impedance".

- Even if not formalized, it can be said that the internal "structures" that form within an LLM somehow encode/capture ("isomorphic to" is the phrase I like to use) the semantics (the "vector"). I believe the same can be said about "computer vision" ML systems, which learn to classify images after being fed billions of them.

- However, I believe that, by nature, such internal encoding is necessarily incomplete and maybe even incorrect.

- Despite the above, an LLM can still be a useful tool in many domains. I think language translation is a task that can be performed very successfully without ever "decoding" the emerging underlying structures. That is, a sentence in the source language can be mapped onto a region of latent space; an isomorphic region of latent space grounded in the target language can then be used to produce output that, from a human perspective, carries an equivalent meaning. All without explicit conceptual decoding of the underlying token weight matrices. "Black-box" translation, so to speak. I am amazed (and disturbed, and horrified too!) that producing viable code in a programming language from a casual natural-language prompt turned out to be largely a subset of the general translation task. Well, at least at the lower levels.
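The "black-box translation" idea can be sketched in a few lines (everything here is hypothetical: the tiny vocabularies and the 2-D "latent" vectors are hand-made stand-ins for what a trained model would learn): a source word is mapped into a shared latent space, and the output is whichever target word lies closest there, with no explicit decoding of meaning ever taking place.

```python
# Toy sketch of nearest-neighbour "translation" in a shared latent space.
# The embeddings below are fabricated for illustration only.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical learned embeddings: words with shared meaning land in
# nearby latent regions across the source and target languages.
source_latent = {"dog": (0.9, 0.1), "house": (0.1, 0.9)}
target_latent = {"Hund": (0.88, 0.15), "Haus": (0.12, 0.85)}

def translate(word):
    """Map a source word to the nearest target word in latent space."""
    v = source_latent[word]
    return max(target_latent, key=lambda w: cosine(v, target_latent[w]))

print(translate("dog"))  # "Hund" under these made-up embeddings
```

The mechanism never inspects what either word "means"; it only exploits the geometric correspondence between the two latent regions, which is exactly the black-box character described above.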

- To me it is intuitive that such a design (brute-force transforms of "rasterized" data instead of explicitly conceptualizing it into "vector" forms) is very limited and, essentially, a dead end.