| ▲ | 0x696C6961 a day ago |
| This narrative implies a benevolent AI. That is a naive assumption. |
|
| ▲ | narrator a day ago | parent | next [-] |
| Even a benevolent AI acting for the benefit of a collective will have to choose which individuals suffer when suffering by some members of the collective becomes unavoidable. |
| |
| ▲ | eru a day ago | parent | next [-] | | Maybe. But a sufficiently smart benevolent AI will avoid getting into such a hopeless situation in the first place. Just like parents in rich countries don't constantly have to decide which of their kids should go hungry: they make sure ahead of time to buy enough food to feed every family member. | |
| ▲ | throw10920 a day ago | parent | prev [-] | | When would "suffering by some members of the collective becomes unavoidable" actually happen? |
|
| ▲ | SilverSlash a day ago | parent | prev | next [-] |
| The human 'benevolence factor' has gone up throughout history as we've advanced and become more civilized. If AI is even more advanced than us then why is it naive to assume it will be more benevolent than us? |
| |
| ▲ | strgcmc a day ago | parent | next [-] | | The most apt way I've read to reason about AI is to treat it like an extremely foreign, totally alien form of intelligence. Not necessarily that today's models behave like this, but we're talking about the future, aren't we? Framing your question against a backdrop of "human benevolence", and implying this is a single dimension (just a scalar value that could be higher or lower), is already too biased. You assume that logic which applies to humans can be extrapolated to AI. There is not much basis for this assumption, in much the same way that there is not much basis to assume an alien sentient gas cloud from Andromeda would operate on the same morals or concept of benevolence as us. | |
| ▲ | 0x696C6961 a day ago | parent | prev [-] | | Humans are still in direct control of the training/alignment. | | |
| ▲ | wood_spirit a day ago | parent [-] | | A handful of billionaires are in direct control of the West's training/alignment. Then there are some sheiks in the Middle East and the communist party in China… This is a tangent, but I personally dream of the EU doing a university-led effort to make a benign AI, because it is the last crumbling bastion of liberal democracy. | | |
| ▲ | dmje a day ago | parent | next [-] | | Not sure that benign or alignment is that easy. I mean, as frequent authors have pointed out - I have a very much benign attitude towards ants. I don’t step on them if I can help it and I don’t maliciously go out to pour boiling water on them. But if I’m building a house or working in my garden I’m likely gonna kill tens of thousands of them. Same applies to AGI. If we’re just ants, we’re gonna get squashed. | |
| ▲ | anonzzzies a day ago | parent | prev | next [-] | | If an AI can live-learn (like we do at night, fine-tuning our neural net weights, etc.), which we need to get anywhere from here (no one yet knows how), there is nothing currently that can make that alignment stick; humans drop out of alignment all the time, for self-preservation or just because 'everyone does it, so...'. | |
| ▲ | Ray20 20 hours ago | parent | prev [-] | | At the moment, the US looks much more democratic and liberal than the EU. | | |
| ▲ | oezi 19 hours ago | parent [-] | | From the outside, the US has shifted to an oligarchy where money buys elections. Europe's democracies are certainly straining, primarily because their news companies have been marginalized by Google and Facebook (and now TikTok), which have extracted most of the ad revenue on which news depended. | | |
| ▲ | ctoth 13 hours ago | parent [-] | | > From the outside the US has shifted to a oligarchy where money buys elections. The data simply doesn't support that narrative. Looking at the last 4 presidential elections: 2024: Trump won, Harris outspent him ($1.9B vs $1.6B) 2020: Biden won, Biden outspent Trump ($1.06B vs $785M) 2016: Trump won, Clinton outspent him ($614M vs $368M) 2012: Obama won, Obama outspent Romney (~$1.1B vs ~$1B, essentially tied) The higher spender won twice and lost twice. 2016 is particularly striking - Clinton outspent Trump by roughly $200-450 million depending on how you count it, yet lost. | | |
| ▲ | Ekaros 13 hours ago | parent [-] | | Why are Democrat candidates consistently outspending Republican candidates? I thought the Republicans were the party for rich? And thus should be getting more money from the rich. | | |
| ▲ | Ray20 12 hours ago | parent [-] | | > I thought the Republicans were the party for rich? Isn't it the other way around? I mean, on the Internet, it's the democratic side that's constantly complaining about how stupid, uneducated rednecks elected dictator Trump. |
|
| ▲ | boznz a day ago | parent | prev | next [-] |
| Agree. Self-preservation is any thinking entity's #1 goal. We may give an AI power, data and keep it repaired, but we can also turn it off or reprogram it. We probably shouldn't assume higher-level 'thinking' AIs will be benevolent. Luckily, current LLMs are not thinking entities, just token completion machines. |
|
| ▲ | alganet a day ago | parent | prev | next [-] |
| I have a radical hypothesis that intelligence leads to empathy, empathy leads to kindness, and a superintelligent AI should be kinder than any human has ever been. I also believe that as soon as someone boots up an AI that is kind, they'll kill it immediately, precisely because it is kind, favoring instead the dumb AI that can follow orders. |
| |
| ▲ | lll-o-lll an hour ago | parent | next [-] | | That seems incredibly naive. There are many examples of extremely intelligent people with psychopathy or narcissism. Also, empathy does not lead to kindness by default; it is used as a tool by the most sadistic. | |
| ▲ | drekipus a day ago | parent | prev [-] | | Genuine intelligence is kindness. But AI is recall and pattern recognition. I generally sum it up as "AI doesn't have the human spirit", and ergo it will not have a moral compass. | | |
| ▲ | alganet a day ago | parent [-] | | I was talking in fiction terms with a hint of philosophy. You're doing more of a techno-mix between current LLMs and religion, which is definitely interesting, but disconnected from what I said. |
|
| ▲ | KPGv2 a day ago | parent | prev [-] |
| The narrative implies AGI. It's looking increasingly impossible. Nearly a decade of trying to improve on the concept of neural nets. Utter failure. Now we're running up against both the limits of training data (not much more to slurp) and physical laws (miniaturization has a threshold beyond which it cannot go, and we're getting there). So, at least in the medium term, AI is going to stall out at approximately where it is now: good at predicting the next token. |