trick-or-treat 5 hours ago

> The verifier taught AlphaGo that move

Ok so it sounds like you want to give the rules of Go credit for that move, lol.

wobfan 4 hours ago | parent | next [-]

It feels like you're purposely ignoring the logical points OP makes; you just really, really want to anthropomorphize AlphaGo and make us appreciate how smart it (should I say he/she?) is ... while no one is even criticizing the model's capabilities, just analyzing them.

trick-or-treat 3 hours ago | parent | next [-]

Can you back that up with some logic for me?

I don't really play Go but I play chess, and it seems to me that most of what humans consider creativity in GM-level play comes not from prep (studying opening lines/training) but from novel lines in real games (at inference time?). But that creativity absolutely comes from recalling patterns, which is exactly what OP criticizes as not creative(?!)

I guess I'm just having trouble finding a way to move the goalpost away from artificial creativity that doesn't also move it away from human creativity?

datsci_est_2015 an hour ago | parent [-]

How a model is trained is different from how a model is constructed. A model's construction defines its fundamental limitations, e.g. a linear regressor will never be able to provide meaningful inference on exponential data. Depending on how you train it, though, you can get such a model to produce acceptable results in some scenarios.

Mixing the two (training and construction) is rhetorically convenient (anthropomorphization), but holds us back in critically assessing a model’s capabilities.
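A minimal NumPy sketch of the construction-limit point (synthetic data, my own illustration, not from the thread): an ordinary linear model fit to exponentially growing targets underfits badly no matter how it's "trained", while the same model class fit to log-transformed targets does fine, because the construction now matches the data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 200)
# Exponential growth with mild multiplicative noise (illustrative only).
y = np.exp(x) * rng.lognormal(0, 0.05, x.size)

X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept

# Linear model on raw targets: the construction can't capture the curve.
coef_raw, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_raw = y - X @ coef_raw

# Same model class on log(y): a simple reparameterization fixes it.
coef_log, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid_log = np.log(y) - X @ coef_log

print(np.std(resid_raw), np.std(resid_log))
```

The residual spread on the raw fit is orders of magnitude larger than on the log-space fit, which is the "fundamental limitation" vs. "training choices" distinction in one picture.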

hackinthebochs 18 minutes ago | parent [-]

Linear regression has well characterized mathematical properties. But we don't know the computational limits of stacked transformers. And so declaring what LLMs can't do is wildly premature.

