SilverSlash a day ago
The human 'benevolence factor' has gone up throughout history as we've advanced and become more civilized. If AI is even more advanced than us, then why is it naive to assume it will be more benevolent than us?
strgcmc a day ago | parent | next
The most apt framing I've read for reasoning about AI is to treat it as an extremely foreign, totally alien form of intelligence. Not necessarily that today's models behave like this, but we're talking about the future, aren't we? Framing your question against a backdrop of "human benevolence", and implying benevolence is a single dimension (a scalar value that could be higher or lower), already builds in bias. You assume that logic which applies to humans can be extrapolated to AI. There is not much basis for that assumption, in much the same way that there is not much basis to assume a sentient alien gas cloud from Andromeda would operate on the same morals or concept of benevolence as we do.
0x696C6961 a day ago | parent | prev
Humans are still in direct control of the training/alignment.