sigbottle 4 hours ago

About the blog post you linked, not your comment:

Doesn't symbolic AI have a lot of philosophical problems? Think back to Quine's "Two Dogmas of Empiricism": you can't just say, "Let's pin down the true meanings of these words and the proper mappings between them." There is no such thing as fixed meaning, and I don't see how you get around that.

Deep learning is admittedly an ugly solution, but it works better than symbolic AI at least.

paroneayea 3 hours ago

Yes! But the effort is still valuable. How else am I understanding your argument at all?

I think my friend Jonathan Rees put it best:

  "Language is a continuous reverse engineering effort, where both sides are trying to figure out what the other side means."
More on that: https://dustycloud.org/blog/identity-is-a-katamari/

This reverse engineering effort matters between you and me, in this exchange right here. It is a battle that can never be won, but fighting it is how we make progress in most things.

sigbottle 3 hours ago

I mean, Quine is the one who gave us (confirmation) holism. I don't think we're on different pages. Maybe I should have specified a bit more what I was getting at.

This has very specific implications for symbolic AI, where historically the goal was to map out the "correct" representation of the space and then run formal analysis over it. That's why it's not a black box: you can trace out all of the steps. The issue is that symbolic AI just doesn't work, at least to my knowledge, compared to all of the DL wins we have.
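To make "you can trace out all of the steps" concrete, here is a toy sketch of that pipeline in Python: hand-authored facts and rules, a forward-chaining inference loop, and a trace recording every derivation. The "isa" predicate, the rule format, and all helper names are made up for illustration, not any particular system's API.

```python
# A toy sketch of the classic symbolic-AI pipeline: hand-authored facts and
# rules over a hypothetical "isa" predicate, plus a forward-chaining loop
# that records every derivation, so each conclusion can be traced.

def match(pattern, fact, bindings):
    """Match a pattern triple (variables start with '?') against a ground fact.
    Returns extended bindings on success, or None on mismatch."""
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if p in b and b[p] != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def all_matches(premises, facts, bindings):
    """Yield every variable binding under which all premises match known facts."""
    if not premises:
        yield bindings
        return
    for fact in facts:
        b = match(premises[0], fact, bindings)
        if b is not None:
            yield from all_matches(premises[1:], facts, b)

def substitute(triple, bindings):
    """Replace variables in a triple with their bound values."""
    return tuple(bindings.get(t, t) for t in triple)

def forward_chain(facts, rules):
    """Apply rules until no new facts appear; return (facts, trace)."""
    facts = set(facts)
    trace = []  # each entry: (rule premises, bindings used, fact derived)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # snapshot the fact set so we can safely add to it mid-loop
            for bindings in all_matches(premises, list(facts), {}):
                new = substitute(conclusion, bindings)
                if new not in facts:
                    facts.add(new)
                    trace.append((premises, bindings, new))
                    changed = True
    return facts, trace

# The canonical example: Socrates is human; humans are mortal.
facts = {("isa", "socrates", "human")}
rules = [([("isa", "?x", "human")], ("isa", "?x", "mortal"))]
derived, trace = forward_chain(facts, rules)
# derived now contains ("isa", "socrates", "mortal"), and trace shows
# exactly which rule and bindings produced it: nothing is a black box.
```

The contrast with deep learning is exactly the point above: every conclusion here comes with an explicit derivation you can inspect, but the representation and rules all have to be authored by hand.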

I think the success of transformers shows that symbolic AI isn't the way. At the very least, the complex interactions that arise from in-context learning in no way imply some fixed universal meaning for words, which is a big problem for symbolic AI.

Exoristos 3 hours ago

> There is no such thing as fixed meaning.

Meaning is more fixed than it is not.