lbrandy 7 days ago

> has a model much as we humans do

The premise that an AI needs to do Y "as we do" to be good at X because humans use Y to be good at X needs closer examination. This presumption seems to be omnipresent in these conversations and I find it so strange. AlphaZero doesn't model chess "the way we do".

klabb3 6 days ago | parent | next

Both that, and that we should not expect LLMs to reach human-level ability just because humans are the baseline we compare them to. It’s as if cars were rapidly getting better thanks to some new innovation and we expected them to fly within a year. This is a new and different thing: the universality of plausible-sounding, coherent text made it appear to be general, when it’s advanced pattern matching. Nothing wrong with that, pattern matching is extremely useful, but drawing an equals sign to human cognition is extremely premature, and a bet that is very likely to be wrong.

shkkmo 6 days ago | parent | prev

AlphaZero is not trying to be AGI.

> The premise that an AI needs to do Y "as we do" to be good at X because humans use Y to be good at X needs closer examination.

I don't see it being used as a premise. I see it as speculation that tries to understand why this type of AI underperforms at certain kinds of tasks. Y may not be necessary to do X well, but if a system is doing X poorly and the main difference between that system and another system seems to be Y, it's worth exploring whether adding Y would improve performance.