7 days ago
Like a lot of the research Anthropic has done, this and the “emergent misalignment” research they link to put more points in the “stochastic parrot” hypothesis column. The reason these LLM behaviors read as so weird to us is that we’re still anthropomorphizing the hell out of these systems - they can create very convincing dialogue, and the depth of the model suggests some surprising complexity, but the reason why, e.g., a random string of numbers will induce changes elsewhere in the model is that there’s simply nothing in the model to _be_ consistent. It is an extremely complex autocomplete algorithm that does a very effective cosplay of an “intelligent agent.” My suspicion is that when we eventually find our way to AGI, these types of models will be a _component_ of those systems, but they lack some fundamental structuring that seems to be required to create anything like consistency or self-reflection.

(I’m also somewhat curious whether, given what we’re seeing about these models’ ability (or lack thereof) to perform detailed work consistently, there’s some fundamental tradeoff between consciousness and general intelligence on one hand and the kind of computation we expect from our computers on the other - in other words, whether we’re going to wind up giving our fancy AGIs pocket calculators so they can do math reliably.)
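To make the “pocket calculator” point concrete, here is a minimal sketch of the pattern I mean: the model proposes the arithmetic, and plain deterministic code does the evaluating. The CALC(...) marker, the fake model output, and the helper names are made up for illustration; this isn’t any particular model’s tool-calling format.

    # Minimal sketch: offload arithmetic from the model to a deterministic "calculator".
    # Everything here (the CALC marker, the sample model output) is hypothetical.
    import ast, operator, re

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def safe_eval(expr):
        """Evaluate only +, -, *, / so the 'calculator' stays deterministic and safe."""
        def ev(node):
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            raise ValueError(f"unsupported expression: {expr}")
        return ev(ast.parse(expr, mode="eval").body)

    def resolve_calc_calls(model_output):
        """Replace every CALC(...) the model emitted with the value computed by real code."""
        return re.sub(r"CALC\(([^)]*)\)",
                      lambda m: str(safe_eval(m.group(1))), model_output)

    # Pretend the model produced this; the arithmetic is done by the calculator, not the model.
    print(resolve_calc_calls("Total cost is CALC(17 * 93 + 5) dollars."))
    # -> Total cost is 1586 dollars.

The point being that the reliability would come from the old-fashioned code path, not from the model getting better at arithmetic.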
mitjam | 7 days ago
> they lack some fundamental structuring that seems to be required to create anything like consistency or self-reflection

A valid observation. Interestingly, feeding the persona vectors detected during inference back into the context might be a novel way for LLMs to self-reflect.
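A rough sketch of what that feedback loop could look like, just to make the idea concrete. The vectors here are random stand-ins; in a real setup the persona vector would come from the probing method in the paper and the hidden state from a hook on one of the model’s layers, so the names and the threshold below are all assumptions.

    # Hypothetical sketch: score the current hidden state against a stored
    # "persona vector" and, if the activation is high, feed that observation
    # back into the model's context as a crude form of self-reflection.
    import numpy as np

    def persona_score(hidden_state, persona_vector):
        """Project the hidden state onto the unit-normalised persona direction."""
        direction = persona_vector / np.linalg.norm(persona_vector)
        return float(hidden_state @ direction)

    def reflect(context, hidden_state, persona_vector, threshold=2.0):
        """Append a self-reflection note to the context when the persona activation is high."""
        score = persona_score(hidden_state, persona_vector)
        if score > threshold:
            context += ("\n[self-reflection: persona activation "
                        f"{score:.2f} exceeded {threshold}; re-check tone before answering]")
        return context

    # Toy usage with random vectors standing in for real activations;
    # the low threshold just forces the note to appear in the demo.
    rng = np.random.default_rng(0)
    hidden, persona = rng.normal(size=4096), rng.normal(size=4096)
    print(reflect("User: hello", hidden, persona, threshold=-10.0))

Whether the model could do anything useful with that note is exactly the open question, but mechanically it would be cheap to try.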
| ||||||||||||||
gedy | 7 days ago
> My suspicion is that when we eventually find our way to AGI, these types of models will be a _component_ of those systems

I think this is a good summary of the situation, and it strikes a balance between the breathless hype and the sneering comments about “AI slop”. These technologies are amazing! And I do think they are facsimiles of parts of the human mind (image diffusion is certainly similar to human dreams, in my opinion), but it still feels like we are missing an overall intelligence or coordination in this tech for the present.
| ||||||||||||||
7 days ago
[deleted]