Lerc, a day ago:
I don't understand why people have so much trouble with metaphors used to explain things in AI. The same terms exist in other fields. Physics has things that "want" to go to a lower energy level; the ball wants to fall, but the table is holding it up. Electrons don't like being near each other. The Higgs boson puts on little bunny ears and goes around giving mass to all the other good particles. None of these are meant to suggest that these things have any form of intention, and neither are the metaphors used in AI.

When scientists really think those abilities are there in a provable way (or even if they only suspect it), I can assure you they will be prepared to make it crystal clear that this is what they are claiming. Criticising the use of metaphor is a kind of pre-emptive attack against claims that might be made in the future. Some AI scientists believe that there is a degree of awareness in recent models. They may be right or wrong, but the ones who believe this are outright saying so.

I'm also inclined, if you'll excuse the term, to be critical of anything suggesting the assumption of smooth progress when someone declares something to be "the first step". Steps are not smooth. That's a good example of ignoring what the metaphor actually says.

I don't really know what to make of the embodiment position; it feels like it's trying to hide dualism behind a practical limitation. Once you start drilling down into the why/why not and what-do-you-mean-by-that, I wouldn't be at all surprised to find the expectation that you can't train an AI because it doesn't have a soul.

I agree with xkcd 1425, though.
ACCount37, a day ago (in reply):
It's the AI effect let loose. A lot of people really, really don't want LLMs to be "actually intelligent", so they oppose the use of even remotely "anthropomorphic" terms in application to LLMs on that principle alone.

IMO, anthropomorphizing LLMs is at least directionally correct in 9 cases out of 10.