EGreg | 4 days ago:
Why are some people always trying to defend LLMs and say either “humans are also like this” or “this has always been a problem even before AIs”? Listen, LLMs are different from humans. They are modeling things. Most RLHF trains them to try to make sense of whatever you’re saying as much as they can. So they’re not going to disregard cats, OK? You can train LLMs to be extremely un-human-like. Why anthropomorphize them?
thethirdone | 4 days ago:
There is a long history of people thinking humans are special and better than animals and technology. For animals, people actually thought animals couldn't feel pain, and did not even consider the ways in which they might be cognitively ahead of humans. Technology often follows the path from "working, but worse than a manual alternative" to "significantly better than any previous alternative," despite naysayers insisting that beating the manual alternative is literally impossible.

LLMs are different from humans, but they also reason and make mistakes in the most human way of any technology I am aware of. Asking yourself "how would a human respond to this prompt if they had to type it out without ever going back to edit it?" seems very effective to me. Sometimes thinking about LLMs as a model, with a focus on how they are trained, explains their behavior, but anthropomorphism seems more effective at actually predicting it.
qcnguy | 3 days ago:
It's because most use cases for AI involve replacing people. So if a person would suffer from a problem and an AI does too, it doesn't matter; refusing the AI because it has the same problems the people it replaces had would just be a Nirvana fallacy.
nijave | 4 days ago:
I suppose there's a desire to know just how Artificial the Intelligence is. Human vs. machine has a long history.