fmbb 5 days ago
Of course you can automate "having fun" and "being entertained". That is, if you believe humanity will ever build artificial intelligence.
the_af 5 days ago
> Of course you can automate "having fun" and "being entertained"

This seems like begging the question to me. I don't think there's a mechanistic (as in "token predictor") procedure to generate the emotions of having fun, or being surprised, or amazed. It's not on me to demonstrate it cannot be done; it's on them to demonstrate it can.

But to be clear, I don't think the author of TFA is making this claim either. They are simply approaching IF games from a "problem solving" perspective -- they don't claim this has anything to do with fun or AGI -- and what I'm arguing is that this mechanistic approach to IF games, i.e. "problem solving", only touches on a small subset of what makes people want to play these games. They are often (not all, as the author rightly corrects me, but often) about generating surprise and amazement in the player, something that cannot be done to an LLM.

(Note I'm also not dismissing the author's experiment. As an experiment it's interesting and, I'd argue, fun for the author.)

Current state-of-the-art LLMs cannot feel amazement, or anything else really (and, I argue, no LLM in the current tech branch ever will). I hope this isn't a controversial statement.
drdeca 5 days ago
A p-zombie would not have fun or be entertained, only act like it does. I don’t think AGI requires being unlike a p-zombie in this way. |