▲ ordinarily 7 hours ago
Doesn't seem that surprising or terrifying to me. Humans come equipped with a lot more internal biases (learned in a fairly similar fashion), and they're usually a lot more resistant to getting rid of them. The truly terrifying stuff never makes it out of the RLHF NDAs.
▲ Terr_ 7 hours ago | parent | next [-]
We ought to be terrified, when one adjusts for all the use-cases people are talking about using these algorithms in. (Even if they ultimately back off, it's a lot of frothy bubble opportunity cost.) There are a great many things people do which are not acceptable in our machines. Ex: I would not be comfortable flying on any airplane where the autopilot "just zones out sometimes", even though it's a dysfunction also seen in people.
▲ agnishom 7 hours ago | parent | prev [-]
Humans also take a lot of time to produce output, and do not feed into a crazy accelerationist feedback loop (most of the time).